Pod crashing due to low regular RAM?
Hey, I am running ComfyUI and my pod keeps crashing at one point in the workflow. The VRAM is only at 70% utilised, but GPU usage says 100%.
Does this mean that if I found a different pod with more regular RAM, I could keep going with the workflow?
pod has 30GB RAM
I don't get what you are implying with those two values
And I think GPU VRAM and GPU usage measure something different than regular RAM
So I have a friend's 4090 in real life that I am using to render artwork from a workflow in ComfyUI. I need to work faster, so I rented a 4090 on RunPod, but it doesn't work. It crashes... and I am trying to figure out why
The screenshot I shared is from a plug-in that visualises how the RAM is being used, and it seems like the VRAM is not the issue
So I want to know how to fix it; if I had a different pod with more than 30GB RAM, maybe it would be OK
It is just annoying to waste money testing out these pods, so I thought someone here might know
Oh, crashing?
Can you check the ComfyUI logs?
Maybe something happened
that's the thing, it doesn't say why in the logs
Does the pod restart automatically, or what? RAM at 42 looks pretty normal and shouldn't be crashing by itself
I mean, I've never had ComfyUI crash on me
What does it look like when it's crashed?
Like the pod restarts?
I shared the screenshot..
It says "reconnecting" and ERR
but it required a pod restart to get it working again
The log didn't say anything; the other stuff was just an error because I put a wrong-format image into the IPAdapter
Hmm
Oh, in ComfyUI?
What about the pod?
Does it still stay on?
yes
I got it working by using fewer ControlNets, but whenever I use 3 it crashes
Hmm, maybe it's the VRAM
Try launching it from your command line
Or the Jupyter console
Launch your own ComfyUI
oh, now there's an issue with just 2 ControlNets
Scroll down
Either a wrong model, a model compatibility issue, or a bad pod
But the crashing, I don't have any idea about that
ugh, hmm, how can I launch it from the command line?
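In case it helps, a minimal sketch (assuming the install lives under /workspace/ComfyUI, which is just a guess for a typical RunPod template): open a terminal from Jupyter and run

cd /workspace/ComfyUI
python main.py --listen 0.0.0.0 --port 8188

That way the crash output stays visible in the terminal instead of just showing "reconnecting" in the web UI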
I got it to render a couple of times, so I don't think it's actually a bad pod
but it just keeps crashing, and I am 90% sure it's a RAM issue
regular RAM, not VRAM
But idk really... the workflow is exactly the same as what I am running on the local machine
It works now with 1 IPAdapter and 2 ControlNets, but I need 2 IPAdapters and 3 ControlNets. That is the issue. I guess it can't handle it
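One way to check whether it really is regular RAM (rough sketch, since commands can behave differently inside a pod container): keep a second terminal open while the workflow runs and watch memory with

watch -n 1 free -h

and after a crash look for the kernel OOM killer with

dmesg | grep -i -E "out of memory|killed process"

If a process got killed right when it died, that points to system RAM rather than VRAM. dmesg may not be readable in every container, so treat this as a best-effort check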
Never mind, I think it's the same
Hmm, yeah, try 2x GPU or a pod with more RAM then?
What about your local machine, does it have more RAM?
I actually don't know, I need to ask my friend; I am accessing it remotely
I would guess 64GB
if it's working
thanks for your help anyway
Ahh, I see
This seems like a memory issue?
How so? Please explain?
I was searching and found this 😂:
The error message indicates an "Allocation on device" issue when executing SamplerCustom in ComfyUI. This type of error typically occurs when there's not enough GPU memory available to allocate for the operation.
Perhaps it is, try bigger VRAM then, Jas
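Before renting a bigger card, it might also be worth trying ComfyUI's own memory-saving launch flags, e.g.

python main.py --lowvram
python main.py --disable-smart-memory

Both flags exist in ComfyUI; whether they are enough for 2 IPAdapters + 3 ControlNets on a 4090 is just a guess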