Serverless GPUs unavailable
The Serverless GPUs that I'm using are always unavailable. Are there any plans to make them more available in the near future, or is there another solution?
11 Replies
Are you using a network volume? That can limit you to a single data center, which usually has limited GPUs.
I am indeed using a network volume. So if I'm using a network volume, I'm locked into only one data center and there's no way around it?
Each of our data centers has specific GPUs available. If you have a preference for a particular type of GPU, I can recommend data centers. However, the best approach is to bake your network volume content directly into your Docker image. This way, you can deploy globally without worrying about GPU availability in specific data centers.
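To illustrate the suggestion above, here is a minimal sketch of baking model files into the Docker image instead of reading them from a network volume. The base image name, directory paths, and file names are illustrative assumptions, not RunPod-specific requirements; adapt them to your worker.

```dockerfile
# Hypothetical base image; substitute the worker image you actually build from.
FROM runpod/worker-comfy:base

# Copy the files you previously kept on the network volume into the image
# (paths below are example ComfyUI model directories, adjust as needed).
COPY models/checkpoints/ /comfyui/models/checkpoints/
COPY models/vae/ /comfyui/models/vae/

# Alternatively, download at build time so the build is self-contained:
# RUN wget -O /comfyui/models/checkpoints/model.safetensors \
#     https://example.com/model.safetensors
```

The trade-off is a larger image and longer builds, but the worker no longer depends on a volume pinned to one data center, so it can be scheduled wherever GPUs are free.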
I'm trying to build the image using the RunPod GitHub feature. It builds successfully, but I get this error after the build:
This is the image I'm using:
https://github.com/A-BMT02/runpod-worker-comfy
Check the worker logs; do they show any errors?
These are the worker logs @flash-singh
Build final logs
Click on the worker and it will show you the logs as it runs.