RunPod•6mo ago
Nafi

0 GPU pod makes no sense

I have network storage attached to my pods. I don't care if a GPU gets taken from me, but it's very inconvenient that I have to spin up a completely new pod when it does. I am automating RunPod via the CLI, and at the moment I don't see any way to deploy a fresh instance and GET the SSH endpoint. I think just slapping on a warning saying you have to start fresh when a GPU gets taken, and then finding the next available one, makes much more sense, especially when using network storage.
43 Replies
digigoblin
digigoblin•6mo ago
0 GPU is not a thing when you use network volumes; it only happens when you don't use a network volume
Nafi
NafiOP•6mo ago
It's happened several times to me with a network volume attached. I can assure you of this because the data is persistent
nerdylive
nerdylive•6mo ago
When you use network storage you can't stop instances, so the GPU goes back to the pool and you just select an available one in the same datacenter when you want to start another instance. Wait, what do you actually need again?
Nafi
NafiOP•6mo ago
Essentially I want to be able to start an exited pod, but I don't care if the GPU returns to the pool; I am happy to use the next available one. The issue is that every so often the instance can only have 0 GPUs, so I have to redeploy a completely new pod on the network storage. I cannot do this via the CLI, so I have to do it manually, which defeats the purpose of the automation. Here's a sample response when trying to start the exited pod via the CLI:
Error: There are not enough free GPUs on the host machine to start this pod.
Next time it happens I can send a screenshot of the runpod UI
nerdylive
nerdylive•6mo ago
You can't stop pods with network storage, can you? How do you stop pods?
Nafi
NafiOP•6mo ago
runpodctl stop pod <id>
nerdylive
nerdylive•6mo ago
Yeah, just terminate it instead and create a new one
Nafi
NafiOP•6mo ago
Can that be done via the CLI?
nerdylive
nerdylive•6mo ago
yes
Nafi
NafiOP•6mo ago
And create a new one via the CLI?
nerdylive
nerdylive•6mo ago
yes
Nafi
NafiOP•6mo ago
Yes, I see now. I will try it, thanks
nerdylive
nerdylive•6mo ago
np
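For reference, a rough sketch of the terminate-and-recreate flow described above, driven from Python since the goal is automation. The create flags are the ones used later in this thread; the "remove pod" subcommand is an assumption to verify against runpodctl --help, and the IDs are placeholders taken from this thread.

import subprocess

# Values taken from this thread; adjust for your own setup.
NETWORK_VOLUME_ID = "fpomddpaq0"
TEMPLATE_ID = "8wwnezvz5k"
IMAGE = "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04"

def replace_pod(old_pod_id: str) -> str:
    """Terminate a pod that lost its GPU and create a fresh one on the same network volume."""
    # Terminate the old pod; the network volume persists independently of the pod.
    # NOTE: "remove pod" is assumed to be the terminate subcommand; check runpodctl --help.
    subprocess.run(["runpodctl", "remove", "pod", old_pod_id], check=True)

    # Create a fresh pod, letting RunPod pick any free GPU of the requested type.
    result = subprocess.run(
        [
            "runpodctl", "create", "pod",
            "--secureCloud",
            "--gpuType", "RTX A6000",
            "--imageName", IMAGE,
            "--templateId", TEMPLATE_ID,
            "--networkVolumeId", NETWORK_VOLUME_ID,
            "--ports", "8888/http,22/tcp",
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout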
Nafi
NafiOP•6mo ago
What am I missing here?
runpodctl create pod --networkVolumeId "fpomddpaq0" --gpuType "RTX A6000" --templateId "8wwnezvz5k" --imageName "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04" --cost 1.00
Error: There are no longer any instances available with the requested specifications. Please refresh and try again.
nerdylive
nerdylive•6mo ago
There's no instance matching your filters, then. Try changing the cost or GPU type
Nafi
NafiOP•6mo ago
Tried removing the cost and setting the GPU type to "L40". I copied the IDs directly, I will double check. Yeah, nothing. I'm unsure why the imageName has to be specified if you can specify the template?
nerdylive
nerdylive•6mo ago
I'm not sure hahah, is it actually required?
Nafi
NafiOP•6mo ago
Yes: Error: required flag(s) "imageName" not set
nerdylive
nerdylive•6mo ago
Oooh. If you want to ask about that, try contacting support from the website's contact page, or post in #feedback if you wish to request removing it
Madiator2011
Madiator2011•6mo ago
imageName is your Docker image name
nerdylive
nerdylive•6mo ago
I mean there are templates (which contain the image tag too)
Nafi
NafiOP•5mo ago
? runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04 ^^^^^
This doesn't happen though. The GPU I use is never unavailable; it's just that it gets taken from me and there's no option to fetch another from the pool. I raised an issue about the imageName and they are handling it internally, but that isn't even necessary for me, as long as I can pull a new GPU rather than having 0 GPUs on my pod, even with network storage attached.
nerdylive
nerdylive•5mo ago
What do you mean? Did you try to stop pods and it worked, but you can't create another pod?
Nafi
NafiOP•5mo ago
[screenshot]
Nafi
NafiOP•5mo ago
this would say 0xL40
[screenshot]
Nafi
NafiOP•5mo ago
even though I have a network volume attached to it. Stopping is fine; it's the starting that fails
nerdylive
nerdylive•5mo ago
hey *
Nafi
NafiOP•5mo ago
Tried that
nerdylive
nerdylive•5mo ago
then?
Nafi
NafiOP•5mo ago
Do you have an exact command that works for you (CLI) for create pod? I tried it; the issue occurred with the imageName and they raised it internally. Sample:
runpodctl create pod --secureCloud --gpuType 'L40' --imageName 'runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04' --networkVolumeId 'fpomddpaq0' --ports '8888/http,22/tcp' --templateId '8wwnezvz5k'
Output:
Error: There are no longer any instances available with the requested specifications. Please refresh and try again.
nerdylive
nerdylive•5mo ago
Yeah, just try creating it from the UI or GraphQL for now. They're fixing that
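For context, the GraphQL route mentioned here boils down to POSTing a query or mutation to RunPod's API endpoint. A minimal sketch, assuming the documented https://api.runpod.io/graphql endpoint with the API key passed as a query parameter:

import os
import requests

RUNPOD_GRAPHQL_URL = "https://api.runpod.io/graphql"

def run_graphql(query: str, variables: dict | None = None) -> dict:
    """POST a GraphQL query/mutation to RunPod and return the parsed data."""
    resp = requests.post(
        RUNPOD_GRAPHQL_URL,
        params={"api_key": os.environ["RUNPOD_API_KEY"]},
        json={"query": query, "variables": variables or {}},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("errors"):
        # Surface API-level errors such as the ones shown below in this thread.
        raise RuntimeError(body["errors"])
    return body["data"]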
Nafi
NafiOP•5mo ago
GraphQL would be fantastic, is there documentation? Found it. GraphQL didn't fix the problem :/ For this input:
{
    "input": {
        "cloudType": "ALL",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "volumeInGb": 40,
        "containerDiskInGb": 40,
        "minVcpuCount": 2,
        "minMemoryInGb": 15,
        "name": "RunPod Test Pod",
        "imageName": "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04",
        "dockerArgs": "",
        "ports": "8888/http,22/tcp",
        "volumeMountPath": "/workspace",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
        "networkVolumeId": "fpomddpaq0",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'Something went wrong. Please try again later or contact support.', 'locations': [{'line': 12, 'column': 5}], 'path': ['podFindAndDeployOnDemand', 'gpus'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR'}}], 'data': {'podFindAndDeployOnDemand': None}}
For this input:
{
    "input": {
        "cloudType": "SECURE",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "networkVolumeId": "fpomddpaq0",
        "ports": "8888/http,22/tcp",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'There are no longer any instances available with enough disk space.', 'path': ['podFindAndDeployOnDemand'], 'extensions': {'code': 'RUNPOD'}}], 'data': {'podFindAndDeployOnDemand': None}}
Confirmed an internal problem then
digigoblin
digigoblin•5mo ago
It's not an internal problem, it's a problem with your request. You can't specify a networkVolumeId without a data center id.
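Following that point, a hedged sketch of what the corrected deployment input could look like: the same fields as above plus a dataCenterId matching the network volume. The dataCenterId field name follows RunPod's podFindAndDeployOnDemand input as I understand it, and "EU-RO-1" is only a placeholder; verify both against the current API docs.

# Corrected deployment input: dataCenterId added alongside networkVolumeId.
deploy_input = {
    "cloudType": "SECURE",
    "gpuCount": 1,
    "gpuTypeId": "NVIDIA L40",
    "dataCenterId": "EU-RO-1",        # placeholder: must match the volume's data center
    "networkVolumeId": "fpomddpaq0",
    "templateId": "8wwnezvz5k",
    "ports": "8888/http,22/tcp",
    "startSsh": True,
    "supportPublicIp": True,
}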
❄
❄•5mo ago
The UX generally leaves something to be desired when it comes to provisioning and terminating resources, for whatever reason. The fact that they don't even leave the pod in a terminated state when they shut it down is really frustrating, as it leaves you no recourse when they decide to kill an instance you are using because your balance is nearing $0. No warning, no idle state, no buffer at all
digigoblin
digigoblin•5mo ago
Not sure what you're referring to, but it sounds like a different issue from this thread
❄
❄•5mo ago
It's a super-issue of this one
nerdylive
nerdylive•5mo ago
"They don't leave the pod"? What does that mean?
❄
❄•5mo ago
They? What about me? I've had my pods vanish in the blink of an eye, while I was working on them.
nerdylive
nerdylive•5mo ago
[screenshot]
nerdylive
nerdylive•5mo ago
Did you have a $0 balance? There is an auto top-up feature in billing
❄
❄•5mo ago
You're missing the point, but I'm too far away to hand it to you. There's almost always a way to work around UX issues, but those are just work-arounds, not solutions.
nerdylive
nerdylive•5mo ago
They have a "buffer" on the pods; it's called signals on Linux, if I'm not wrong. I'm not sure what your point is, mind explaining?
Nafi
NafiOP•5mo ago
This is similar but also completely different. I would open a separate thread and describe your issue more in depth. The way to implement what you desire would be some sort of network storage system caching a frozen system state or something. There's probably a reason why they haven't done it yet: too complicated or not possible with the infrastructure. The original issue I raised in this thread can only be avoided by creating and deleting pods on demand via GraphQL
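Back on the original ask (deploy a fresh pod and programmatically get the SSH endpoint), a minimal sketch of polling a newly created pod's runtime ports over GraphQL. The query shape follows RunPod's published examples, but the field names should be verified against the current API docs.

import os
import time
import requests

RUNPOD_GRAPHQL_URL = "https://api.runpod.io/graphql"

POD_QUERY = """
query Pod($podId: String!) {
  pod(input: { podId: $podId }) {
    id
    runtime {
      ports { ip isIpPublic privatePort publicPort type }
    }
  }
}
"""

def wait_for_ssh(pod_id: str, timeout_s: int = 300) -> tuple[str, int]:
    """Poll a freshly created pod until its public SSH endpoint (port 22) is exposed."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.post(
            RUNPOD_GRAPHQL_URL,
            params={"api_key": os.environ["RUNPOD_API_KEY"]},
            json={"query": POD_QUERY, "variables": {"podId": pod_id}},
            timeout=30,
        )
        resp.raise_for_status()
        pod = resp.json()["data"]["pod"]
        runtime = (pod or {}).get("runtime") or {}
        for port in runtime.get("ports") or []:
            if port["privatePort"] == 22 and port["isIpPublic"]:
                return port["ip"], port["publicPort"]
        time.sleep(10)  # pod may still be starting; try again
    raise TimeoutError(f"pod {pod_id} never exposed a public SSH port")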