Multi GPU problem

Hi, how can I evenly distribute workers across multiple GPUs? I am trying to serve a Stable Diffusion model, but I am getting an out-of-memory error because gunicorn is running all the workers on one GPU. How can I solve this, given that all workers need to serve on the same port? Alternatively, how can I configure request proxying inside the pod?
nerdylive (14h ago)
I guess it's application-specific; which library, tools, or apps are you using?
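For gunicorn specifically, one common approach is to pin each worker to a GPU before the model loads, using a `post_fork` server hook in `gunicorn.conf.py`. The sketch below is an assumption, not something from this thread: `NUM_GPUS` is a placeholder for the pod's GPU count, and it relies on the app importing torch / loading the model after the fork (i.e. `preload_app = False`).

```python
# gunicorn.conf.py -- hypothetical sketch, assumes 2 GPUs (adjust NUM_GPUS)
import os

NUM_GPUS = 2  # assumption: number of GPUs available in the pod

def post_fork(server, worker):
    """Gunicorn server hook; runs in each worker process right after fork.

    Setting CUDA_VISIBLE_DEVICES here, before the app imports torch and
    loads the model, makes each worker see exactly one GPU, assigned
    round-robin. Requires preload_app = False so model loading happens
    after this hook.
    """
    gpu_id = (worker.age - 1) % NUM_GPUS  # worker.age counts up from 1
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    server.log.info("Worker %s pinned to GPU %s", worker.pid, gpu_id)
```

Because gunicorn itself multiplexes all workers on one listening port, this avoids any need for an extra proxy inside the pod: each worker process simply believes its assigned GPU is device 0.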