RunPod · 2mo ago
SATAN

OutOfMemory

why do my tasks keep failing with OutOfMemory? I'm just running large-v2 on faster-whisper on a 4090 GPU
19 Replies
nerdylive · 2mo ago
Maybe a bug? Which template are you using?
SATAN · 2mo ago
it was running well for the past few weeks, then this started happening
(image attached)
nerdylive · 2mo ago
Maybe a new update? I'm still clueless about this; maybe open an issue on the GitHub repo
SATAN · 2mo ago
I'm using my own docker image that's been running for 4 months now
nerdylive · 2mo ago
Ohh, so nothing changed on the code side?
SATAN · 2mo ago
the thing is, when I start a new task the GPU memory indicator shows 98% usage. To fix this I have to put max workers to 0, wait, then put them back up. I'm not using FlashBoot, but this acts as if FlashBoot is ON
SATAN · 2mo ago
this is still going on...
(image attached)
nerdylive · 2mo ago
Have you created a support ticket? Create one via the contact page
Thorsten · 2mo ago
Same issue here, trying to deploy llama-3-70B and other LLM, all are erroring out with OutOfMemory error. Even when using the highest GPU tier.
SATAN · 2mo ago
yes, I opened a ticket. After contacting support, it turns out I was loading the model inside the handler function (it should be done outside the function)
nerdylive · 2mo ago
Oooh, did it work?
SATAN · 2mo ago
I updated my image and I'm going to run it for a few days; it only happens when there is high traffic
nerdylive · 2mo ago
nice
SATAN · 2mo ago
thanks !!
nerdylive · 2mo ago
alright, you're welcome
Théo Champion · 2mo ago
I'm running into the same issue; I started getting OOM errors in the past few weeks with no code changes. I contacted support but got no reply yet
SATAN · 2mo ago
load your model before entering the handler function. I think FlashBoot(?) now runs by default @Théo Champion
Théo Champion · 2mo ago
I do load my models outside the handler function
nerdylive · 2mo ago
Yeah, it's activated by default now. About the OOM: what template are you using? Maybe the GPU can't fit your model; use one with more VRAM