RunPod 7mo ago
SATAN

OutOfMemory

Why do my tasks keep failing with OutOfMemory? I'm just running large-v2 on faster-whisper on a 4090 GPU.
19 Replies
nerdylive 7mo ago
Maybe a bug? Which template are you using?
SATAN (OP) 7mo ago
It was running well for weeks before this started happening.
(screenshot attached)
nerdylive 7mo ago
Maybe a new update? I'm still clueless about this; maybe open an issue on the GitHub repo.
SATAN (OP) 7mo ago
I'm using my own docker image that's been running for 4 months now
nerdylive 7mo ago
Ohh, so nothing changed on the code side?
SATAN (OP) 7mo ago
The thing is, when I start a new task the GPU memory indicator shows 98% usage. To fix this I have to put max workers to 0, wait, then put them back up. I'm not using FlashBoot, but this acts as if FlashBoot is ON.
SATAN (OP) 6mo ago
This is still happening...
(screenshot attached)
nerdylive 6mo ago
Have you created a support ticket? Create one via the contact page.
Thorsten 6mo ago
Same issue here. Trying to deploy llama-3-70B and other LLMs, and all are erroring out with an OutOfMemory error, even when using the highest GPU tier.
SATAN (OP) 6mo ago
Yes, I opened a ticket. After contacting support, it seems I was loading the model inside the handler function (it should be done outside the function).
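The fix described above can be sketched as follows. This is a minimal, hedged illustration of the pattern (the `load_model` helper and return shape are stand-ins, not RunPod's or faster-whisper's real API): loading the model at module level means each warm worker allocates GPU memory once and reuses it across jobs, whereas loading it inside the handler re-allocates on every request and can OOM under high traffic.

```python
# Illustration of "load the model outside the handler" (hypothetical names).
# In the real deployment, load_model() would be something like
# faster_whisper.WhisperModel("large-v2", device="cuda"), but here it is a
# stand-in so the pattern itself can be shown and run without a GPU.

load_count = 0  # counts how many times the "model" is loaded


def load_model():
    """Stand-in for an expensive GPU model load (e.g. large-v2 weights)."""
    global load_count
    load_count += 1
    return {"name": "large-v2"}  # placeholder for the real model object


# GOOD: module-level load — runs once when the worker starts,
# and the same model object is reused for every job it serves.
MODEL = load_model()


def handler(job):
    # BAD (the original bug): calling load_model() HERE would allocate
    # memory on every job and eventually hit OutOfMemory under load.
    return {"transcribed_with": MODEL["name"], "input": job["input"]}


# Simulate several jobs hitting the same warm worker:
results = [handler({"input": f"audio_{i}.wav"}) for i in range(5)]
```

With the module-level load, five simulated jobs still trigger only one model load; moving the `load_model()` call into `handler` would make `load_count` climb with every job, which is the per-request allocation behavior that caused the OOM.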
nerdylive 6mo ago
Oooh, did it work?
SATAN (OP) 6mo ago
I updated my image and I'm gonna run it for a few days. It happens when there is high traffic.
nerdylive 6mo ago
nice
SATAN (OP) 6mo ago
thanks !!
nerdylive 6mo ago
alright, you're welcome
Théo Champion 6mo ago
I'm running into the same issue: I started getting OOM errors in the past few weeks, with no code change. I contacted support but haven't gotten a reply yet.
SATAN (OP) 6mo ago
Load your model before entering the handler function. I think FlashBoot(?) now runs by default. @Théo Champion
Théo Champion 6mo ago
I do load my models outside the handler function
nerdylive 6mo ago
Yeah, it's activated by default now. OOM? What template are you using? Maybe the GPU isn't capable of running your model; use one with more VRAM.