• Created by Hazem on 12/26/2023 in #⚡|serverless
Network Volume and GPU availability.
I am deploying automatic1111 as an endpoint. Hosting the models on a network volume and accessing them from the endpoint seems like a good way to avoid having one endpoint per model. However, whenever I use a network volume, the GPU availability shows either "low availability" or "not available". Is there a specific region with highly available 3090, A5000, or 4090 GPUs?
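For context, this is a minimal sketch of how a single serverless endpoint could serve multiple checkpoints from a shared network volume. It assumes the volume is mounted at /runpod-volume inside serverless workers and that the caller passes a hypothetical `checkpoint` field in the request input; the actual automatic1111 loading/inference call is omitted and would need to be wired in:

```python
# Sketch: one endpoint, many models on a shared network volume.
# Assumptions: volume mounted at /runpod-volume, checkpoints stored under
# models/Stable-diffusion, and a "checkpoint" field in the request input
# (both the layout and the field name are illustrative, not RunPod defaults).
import os
import runpod

VOLUME_ROOT = "/runpod-volume/models/Stable-diffusion"  # assumed layout

def handler(job):
    job_input = job.get("input", {})
    checkpoint = job_input.get("checkpoint", "default.safetensors")  # hypothetical field
    model_path = os.path.join(VOLUME_ROOT, checkpoint)

    if not os.path.isfile(model_path):
        return {"error": f"checkpoint not found on network volume: {model_path}"}

    # Here you would point automatic1111 (or your own inference code) at
    # model_path and run generation; returning the resolved path keeps the
    # sketch short.
    return {"model_path": model_path}

runpod.serverless.start({"handler": handler})
```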
4 replies
• Created by Hazem on 12/26/2023 in #⚡|serverless
Number of workers limit
I recently updated my number of workers in serverless to 10, and I see I can increase it further depending on my balance. My question is: is there any limit to this? I plan to deploy many models as endpoints (it might reach 30-40 models in the future) and would like to know whether that would be supported on RunPod.
5 replies