A100 GPU VRAM being used
I have a pod running, but one of my assigned GPUs has its VRAM taken up and I can't clear it, even after restarting the pod or calling torch.cuda.empty_cache().
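For context: torch.cuda.empty_cache() only releases unused blocks from PyTorch's caching allocator *within the current process*; it cannot free VRAM held by a different (or zombie) process on the host. A minimal sketch to check which compute processes are actually holding GPU memory, assuming nvidia-smi is on PATH (the function name gpu_memory_users is illustrative, not from the thread):

```python
import shutil
import subprocess

def gpu_memory_users():
    """Return (pid, used_memory) pairs for compute processes holding GPU memory.

    Uses `nvidia-smi --query-compute-apps`; returns an empty list if
    nvidia-smi is not available (e.g. no NVIDIA driver in this environment).
    """
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,used_memory",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like: "12345, 40537 MiB"
    return [tuple(line.split(", ", 1)) for line in out.strip().splitlines() if line]

if __name__ == "__main__":
    for pid, mem in gpu_memory_users():
        print(f"PID {pid} holds {mem}")
```

If the list shows a PID that isn't yours, the VRAM is held outside your container, which would explain why restarting the pod doesn't clear it and why support asks for the pod ID.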

5 Replies
@Hello
Escalated To Zendesk
The thread has been escalated to Zendesk!
What is your pod ID?
Which template are you using? A custom template, a RunPod template, or an official template?
I have the same issue
Same issue. I don't want to change pods; I have tons of data on here.
Open a ticket
Report your pod ID there.