I have a pod running, but one of my assigned GPUs has its VRAM occupied, and I can't clear it even after restarting the pod or calling torch.cuda.empty_cache().