RunPod · 7mo ago
harishp

Not all workers being utilized

In the attached image you can see 11/12 workers spun up, but only 7 are being utilized, yet we're being charged for all 12 GPUs. @girishkd
[screenshot attached]
18 Replies
nerdylive · 7mo ago
What do you mean only 7 are being utilized? That looks like 11 running to me
harishp (OP) · 7mo ago
If you look at the "Jobs" section, it shows 7 in progress. So it is not utilizing all the GPUs to serve requests; only 7 are actually serving them
nerdylive · 7mo ago
Hm, what app are you running there? Maybe check the logs for each worker and see if anything looks off
harishp (OP) · 7mo ago
It's just an SDXL model
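For context, a serverless SDXL worker on RunPod is usually just a handler wrapped around a diffusers pipeline. The sketch below is illustrative only; the model ID, input keys, and output format are assumptions, not the code from this deployment:

```python
# Minimal sketch of a RunPod serverless handler for an SDXL worker.
# Model ID, prompt handling, and output encoding are illustrative only.
import base64
import io

import runpod
import torch
from diffusers import StableDiffusionXLPipeline

# Load the pipeline once at container start so every job reuses it
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")


def handler(job):
    prompt = job["input"]["prompt"]
    image = pipe(prompt=prompt).images[0]

    # Return the image as base64 so the response stays JSON-serializable
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return {"image_base64": base64.b64encode(buf.getvalue()).decode()}


runpod.serverless.start({"handler": handler})
```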
girishkd · 7mo ago
We saw a CUDA failure on some of the GPUs, and when we remove those GPUs from the list of workers, replacements are not spinning up
nerdylive · 7mo ago
Oh, any logs? Are you with him? Yeah, then that's probably why it fails. Try limiting the CUDA versions in the endpoint settings
harishp (OP) · 7mo ago
We limited the CUDA versions to 12.1. @girishkd and I are colleagues
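One way to confirm what each worker actually got after limiting the CUDA version is to log it at container start. A minimal sketch, assuming PyTorch is installed in the worker image:

```python
import torch

# CUDA runtime version this PyTorch build was compiled against (e.g. "12.1")
print("torch CUDA build:", torch.version.cuda)

# Whether the driver on the host this worker landed on is usable at all
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # GPU model the scheduler assigned to this worker (e.g. "NVIDIA GeForce RTX 4090")
    print("device:", torch.cuda.get_device_name(0))
```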
nerdylive · 7mo ago
Ohh, I see. Great, so is it good now after setting that?
harishp (OP) · 7mo ago
nope nope
nerdylive · 7mo ago
oh so what happened now?
digigoblin · 7mo ago
What kind of CUDA failure? Did it OOM from running out of VRAM? I've seen that happen on 24GB GPUs when you add upscaling.
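If it is a VRAM OOM, wrapping the generation call so the worker reports the error instead of dying can help narrow it down. A rough sketch; `safe_generate` is a hypothetical wrapper, not part of the actual deployment:

```python
import torch


def safe_generate(pipe, prompt):
    # Hypothetical wrapper around the SDXL call; returns an error payload
    # instead of letting the worker crash on a VRAM OOM.
    try:
        return {"status": "ok", "images": pipe(prompt=prompt).images}
    except torch.cuda.OutOfMemoryError as err:
        # Release cached allocations so smaller follow-up jobs can still run
        torch.cuda.empty_cache()
        return {"status": "error", "error": f"CUDA OOM: {err}"}
```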
girishkd · 7mo ago
The attached screenshot shows the CUDA failure we are experiencing
[screenshot attached]
girishkd · 7mo ago
We are using 24GB ones (4090s) only
digigoblin · 7mo ago
Oh yeah that error seems to be due to a broken worker.
girishkd · 7mo ago
Okay. These broken workers are not getting respawned on their own. What should we do in that case?
digigoblin · 7mo ago
Contact RunPod support via web chat or email
nerdylive · 7mo ago
Broken worker? Wow, there's such a thing?
digigoblin · 7mo ago
Yeah, it happens sometimes, just like broken pods. I've had to terminate workers a few times.