RunPod · 9mo ago
rcmalli

No GPU Available

Hello, I have been using the 'runpodctl' workflow with the dev and deploy commands, and I have one network volume attached in the EU-RO-1 zone. Currently, whenever I run 'runpodctl project dev', the CLI says that there are no GPUs available. Is that accurate, i.e. is there actually no GPU left? Here is the relevant part of my 'runpod.toml' file:

base_image = "runpod/base:0.6.1-cuda12.2.0"
gpu_types = [
    "NVIDIA GeForce RTX 4080",  # 16GB
    "NVIDIA RTX A4000",         # 16GB
    "NVIDIA RTX A4500",         # 20GB
    "NVIDIA RTX A5000",         # 24GB
    "NVIDIA GeForce RTX 3090",  # 24GB
    "NVIDIA GeForce RTX 4090",  # 24GB
    "NVIDIA RTX A6000",         # 48GB
    "NVIDIA A100 80GB PCIe",    # 80GB
]
gpu_count = 1
volume_mount_path = "/runpod-volume"
ports = "4040/http, 7270/http, 22/tcp"  # FileBrowser, FastAPI, SSH
container_disk_size_gb = 100
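One way to rule out a display-name mismatch is to list the GPU types the platform actually reports and compare the exact strings against the gpu_types entries above. The sketch below is a minimal example assuming the runpod Python SDK and its get_gpus() helper, and that the returned entries carry "displayName" and "memoryInGb" fields; verify the names your SDK version actually returns.

import os
import runpod  # the runpod Python SDK (pip install runpod), assumed here

# Authenticate with your RunPod API key, assumed to be set in the environment.
runpod.api_key = os.environ["RUNPOD_API_KEY"]

# get_gpus() is assumed to return the catalog of GPU types known to RunPod.
for gpu in runpod.get_gpus():
    # Print the exact name string and memory size so it can be copied
    # verbatim into the gpu_types list in runpod.toml.
    print(gpu.get("displayName"), "-", gpu.get("memoryInGb"), "GB")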
3 Replies
ashleyk
ashleyk · 9mo ago
4090, A4500, and 4000 Ada are available, so I'm not sure why it says none are available. I do see that 4000 Ada isn't in your list, though.
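If the missing entry is the problem, adding the Ada card to gpu_types in runpod.toml should let the CLI match it. The exact display name below is an assumption based on how RunPod typically lists this card; double-check it against the names shown in the console or by the SDK listing above.

# runpod.toml (excerpt), hypothetical addition
gpu_types = [
    "NVIDIA RTX 4000 Ada Generation",  # 20GB, assumed display name
    # ... keep the rest of your existing list
]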
rcmalli
rcmalli (OP) · 9mo ago
I have added more GPU types, but it is still the same.
(screenshot attached, no description)
rcmalli
rcmalli (OP) · 9mo ago
I am using 'runpodctl' version 1.14.2. Has something changed recently in how GPU types are defined?