RunPod2w ago
Nafi

0 GPU pod makes no sense

I have network storage attached to my pods. I don't care if a GPU gets taken from me, but it's very inconvenient that I have to spin up a completely new pod when it does. I am automating RunPod via the CLI, and at the moment I don't see any way to deploy a fresh instance and GET the SSH endpoint. I think just showing a warning that you have to start fresh when a GPU gets taken, and then finding the next available one, makes much more sense, especially when using network storage.
37 Replies
digigoblin
digigoblin2w ago
0 GPU is not a thing when you use network volumes; that only happens when you don't use a network volume
Nafi
Nafi2w ago
It's happened several times to me with a network volume attached. I can assure you of this because the data is persistent.
nerdylive
nerdylive2w ago
When you use network storage, you can't stop instances, so the GPU goes back to the pool and you just select an available one in the same datacenter when you want to start another instance. Wait, what do you actually need again?
Nafi
Nafi2w ago
Essentially I want to be able to start an exited pod, but I don't care if the GPU returns to the pool; I am happy to use the next available one. The issue is that every so often the instance can only have 0 GPUs, so I have to redeploy a completely new pod on the network storage. I cannot do this via the CLI, so I have to do it manually, which defeats the purpose of the automation. Here's a sample response when trying to start the exited pod via the CLI:
Error: There are not enough free GPUs on the host machine to start this pod.
Next time it happens I can send a screenshot of the runpod UI
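(For reference, a minimal sketch of reading a pod's public SSH endpoint back programmatically rather than through the CLI. It assumes RunPod's documented GraphQL pod query and its runtime.ports fields; the pod ID and API key are placeholders, and the exact field names should be checked against the current schema.)

# Sketch: fetch a pod's public SSH endpoint via RunPod's GraphQL API.
# RUNPOD_API_KEY and the pod ID are placeholders.
import os
import requests

API_URL = "https://api.runpod.io/graphql"

QUERY = """
query Pod($podId: String!) {
  pod(input: { podId: $podId }) {
    id
    desiredStatus
    runtime {
      ports { ip isIpPublic privatePort publicPort type }
    }
  }
}
"""

def get_ssh_endpoint(pod_id):
    resp = requests.post(
        API_URL,
        params={"api_key": os.environ["RUNPOD_API_KEY"]},
        json={"query": QUERY, "variables": {"podId": pod_id}},
        timeout=30,
    )
    resp.raise_for_status()
    pod = resp.json()["data"]["pod"]
    for port in (pod.get("runtime") or {}).get("ports") or []:
        # SSH is the TCP port mapped from container port 22.
        if port["privatePort"] == 22 and port["type"] == "tcp":
            return port["ip"], port["publicPort"]
    return None  # pod not running yet, or port 22 not exposed

print(get_ssh_endpoint("YOUR_POD_ID"))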
nerdylive
nerdylive2w ago
You can't stop pods with network storage, can you? How do you stop pods?
Nafi
Nafi2w ago
runpodctl stop pod <id>
nerdylive
nerdylive2w ago
Yeah, just terminate it instead and create a new one.
Nafi
Nafi2w ago
Can that be done via the CLI?
nerdylive
nerdylive2w ago
yes
Nafi
Nafi2w ago
And create a new one via the CLI?
nerdylive
nerdylive2w ago
yes
Nafi
Nafi2w ago
Yes, I see now. I will try, thanks.
nerdylive
nerdylive2w ago
np
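(A sketch of that terminate-and-recreate flow, driven from Python via subprocess. The runpodctl flags mirror the ones used later in this thread, the volume and template IDs are placeholders copied from it, and "runpodctl remove pod" is assumed to be the terminate command.)

# Sketch: terminate a pod and recreate it on the same network volume,
# picking up whichever matching GPU is free in that datacenter.
# IDs below are placeholders taken from this thread.
import subprocess

NETWORK_VOLUME_ID = "fpomddpaq0"
TEMPLATE_ID = "8wwnezvz5k"
IMAGE = "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04"

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def recreate_pod(old_pod_id):
    # Terminate the old pod; the network volume itself is untouched.
    run(["runpodctl", "remove", "pod", old_pod_id])
    # Create a fresh pod attached to the same network volume.
    return run([
        "runpodctl", "create", "pod",
        "--secureCloud",
        "--gpuType", "L40",
        "--imageName", IMAGE,
        "--templateId", TEMPLATE_ID,
        "--networkVolumeId", NETWORK_VOLUME_ID,
        "--ports", "8888/http,22/tcp",
    ])

print(recreate_pod("OLD_POD_ID"))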
Nafi
Nafi2w ago
what am I missing here?
runpodctl create pod --networkVolumeId "fpomddpaq0" --gpuType "RTX A6000" --templateId "8wwnezvz5k" --imageName "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04" --cost 1.00
Error: There are no longer any instances available with the requested specifications. Please refresh and try again.
nerdylive
nerdylive2w ago
There's no instance matching your filters then. Change the cost or GPU type.
Nafi
Nafi2w ago
Tried removing cost and setting the GPU type to "L40". I copied the IDs directly; I will double check. Yeah, nothing. I'm unsure why imageName has to be specified if you can specify the template?
nerdylive
nerdylive2w ago
I'm not sure haha, is it actually required?
Nafi
Nafi2w ago
Yes: Error: required flag(s) "imageName" not set
nerdylive
nerdylive2w ago
Oooh, maybe. If you want to ask about that, try contacting support from the website's contact page, or post in #🧐|feedback if you wish to request removing it.
Madiator2011
Madiator20112w ago
imageName is your Docker image name.
nerdylive
nerdylive2w ago
I mean, there are templates (which contain the image tag too).
Nafi
Nafi3d ago
? runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04 ^^^^^ This doesn't happen though. The GPU I use is never unavailable; it's just that it gets taken from me and there's no option to fetch another from the pool. I raised an issue about imageName and they are handling it internally, but that isn't even necessary for me, as long as I can pull a new GPU from the pool rather than having 0 GPUs on my pod, even with network storage attached.
nerdylive
nerdylive3d ago
What do you mean? Did you try to stop the pod and it worked? And you can't create another pod?
Nafi
Nafi3d ago
[screenshot attached]
Nafi
Nafi3d ago
this would say 0xL40
[screenshot attached]
Nafi
Nafi3d ago
even though I have a network volume attached to it. Stopping is fine; it's starting it that's the problem.
nerdylive
nerdylive3d ago
hey *
Nafi
Nafi3d ago
Tried that
nerdylive
nerdylive3d ago
then?
Nafi
Nafi2d ago
Do you have an exact command that works for you (CLI) to create a pod? I tried; the issue occurred with imageName and they raised it internally. Sample:
runpodctl create pod --secureCloud --gpuType 'L40' --imageName 'runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04' --networkVolumeId 'fpomddpaq0' --ports '8888/http,22/tcp' --templateId '8wwnezvz5k'
Output:
Error: There are no longer any instances available with the requested specifications. Please refresh and try again.
nerdylive
nerdylive2d ago
Yeah, just try creating it from the UI or GraphQL for now; they're fixing that.
Nafi
Nafi2d ago
GraphQL would be fantastic, is there documentation? Found it. GraphQL didn't fix the problem :/ For this input:
{
    "input": {
        "cloudType": "ALL",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "volumeInGb": 40,
        "containerDiskInGb": 40,
        "minVcpuCount": 2,
        "minMemoryInGb": 15,
        "name": "RunPod Test Pod",
        "imageName": "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04",
        "dockerArgs": "",
        "ports": "8888/http,22/tcp",
        "volumeMountPath": "/workspace",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
        "networkVolumeId": "fpomddpaq0",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'Something went wrong. Please try again later or contact support.', 'locations': [{'line': 12, 'column': 5}], 'path': ['podFindAndDeployOnDemand', 'gpus'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR'}}], 'data': {'podFindAndDeployOnDemand': None}}
For this input:
{
    "input": {
        "cloudType": "SECURE",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "cloudType": "SECURE",
        "networkVolumeId": "fpomddpaq0",
        "ports": "8888/http,22/tcp",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'There are no longer any instances available with enough disk space.', 'path': ['podFindAndDeployOnDemand'], 'extensions': {'code': 'RUNPOD'}}], 'data': {'podFindAndDeployOnDemand': None}}
Confirmed an internal problem then
digigoblin
digigoblin2d ago
It's not an internal problem, it's a problem with your request. You can't specify a networkVolumeId without a data center ID.
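(Following that hint, a sketch of the same podFindAndDeployOnDemand call with a dataCenterId supplied alongside the networkVolumeId. "EU-RO-1" is a placeholder; use the datacenter your network volume actually lives in, and treat the exact input type and field names as assumptions to verify against the current schema. The other IDs are the ones from this thread.)

# Sketch: podFindAndDeployOnDemand with dataCenterId set to match the
# network volume's datacenter. "EU-RO-1" is a placeholder value.
import os
import requests

API_URL = "https://api.runpod.io/graphql"

MUTATION = """
mutation Deploy($input: PodFindAndDeployOnDemandInput!) {
  podFindAndDeployOnDemand(input: $input) {
    id
    imageName
    desiredStatus
  }
}
"""

variables = {
    "input": {
        "cloudType": "SECURE",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "dataCenterId": "EU-RO-1",  # must match the network volume's datacenter
        "networkVolumeId": "fpomddpaq0",
        "templateId": "8wwnezvz5k",
        "imageName": "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04",
        "ports": "8888/http,22/tcp",
        "volumeMountPath": "/workspace",
        "containerDiskInGb": 40,
        "startSsh": True,
        "supportPublicIp": True,
    }
}

resp = requests.post(
    API_URL,
    params={"api_key": os.environ["RUNPOD_API_KEY"]},
    json={"query": MUTATION, "variables": variables},
    timeout=30,
)
print(resp.json())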
℠
3h ago
The UX generally leaves something to be desired when it comes to provisioning and terminating resources, for whatever reason. The fact that they don't the pod in a terminated state when they shut it down is really frustrating, as it leaves you no recourse when they decide to kill an instance you are using because your balance is nearing $0. No warning, no idle state, no buffer at all.
digigoblin
digigoblin3h ago
Not sure what you're referring to, but it sounds like a different issue from this thread.
℠
3h ago
It's a super-issue of this one.
nerdylive
nerdylive3h ago
"They don't the pod"? What does that mean?