Pod suddenly says "0x A100 80GB" and CUDA not available
Hi, I created a pod a few days ago and worked with it without any problems. I stopped the pod after the session. Today I tried again, and suddenly it says 0x A100 80GB and CUDA is not available.
When I look at starting a new pod, the A100 80GB appears to be available in the same location, so why can't I start my existing pod with this GPU?
What should I do? Is there a way to transfer the data to a new pod?
Thanks!
pod id: lzg7plta4rfu0n
pod image: runpod/pytorch:1.13.0-py3.10-cuda11.7.1-devel-ubuntu22.04
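For context, a minimal check like the one below is what reports CUDA as unavailable inside the pod (a sketch using the PyTorch that ships with the image; the exact commands I ran may have differed slightly):

```python
import torch

# Quick GPU visibility check inside the pod.
print("torch version:", torch.__version__)
print("cuda available:", torch.cuda.is_available())  # currently prints False
print("device count:", torch.cuda.device_count())    # currently prints 0
```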
Solution