Multi-GPU problem
Hi, how can I evenly distribute workers across multiple GPUs? I am trying to serve the Stable Diffusion model, but I am getting an out-of-memory error because gunicorn is running all the workers on one GPU. How can I solve this, given that all the workers need to listen on the same port? Alternatively, how could I configure request proxying inside the pod?
1 Reply
I guess it's application-specific. Which libraries, tools, or apps are you using?
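If it's a PyTorch-based app behind gunicorn (just guessing from the question), one common pattern is to pin each worker to its own GPU from a `post_fork` hook by setting `CUDA_VISIBLE_DEVICES` before the model loads. A minimal sketch, assuming 2 GPUs and per-worker model loading (the file name `gunicorn.conf.py` and the GPU count are my assumptions):

```python
# gunicorn.conf.py -- a minimal sketch, assuming a PyTorch-based app,
# 2 GPUs, and that the model is loaded inside each worker *after* the
# fork (so keep preload_app = False).
import os

NUM_GPUS = 2          # assumption: set to your actual GPU count
workers = NUM_GPUS    # one worker per GPU

preload_app = False   # initialize CUDA per worker, not in the master

def post_fork(server, worker):
    # worker.age is a counter gunicorn increments for every worker it
    # spawns; use it to assign GPUs round-robin. Because this hook runs
    # before the app (and therefore CUDA) is loaded, each worker will
    # only ever see the single GPU assigned here.
    gpu_id = worker.age % NUM_GPUS
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    server.log.info("worker %s -> GPU %s", worker.age, gpu_id)
```

Start it as usual, e.g. `gunicorn -c gunicorn.conf.py app:app`. All workers keep sharing the same port because gunicorn's master process binds the socket once and the workers inherit it, so no extra proxying inside the pod should be needed.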