Nafi
Explore posts from servers
RunPod
Created by Nafi on 6/23/2024 in #⛅|pods
0 GPU pod makes no sense
87 replies
The original issue I raised in this thread can only be avoided by creating and deleting pods on demand via GraphQL.
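As a rough illustration of that workaround, here is a minimal sketch using the runpod Python SDK (which wraps the same GraphQL API). It is a sketch, not a definitive recipe: create_pod/terminate_pod and their snake_case parameter names should be checked against the installed SDK version, and the pod name is a placeholder.

import os
import runpod  # the runpod Python SDK

runpod.api_key = os.environ["RUNPOD_API_KEY"]

# Create a pod only when there is work to do. Values are taken from this
# thread; parameter names assume the SDK's snake_case mapping of the
# GraphQL input fields.
pod = runpod.create_pod(
    name="on-demand-worker",  # placeholder
    image_name="runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04",
    gpu_type_id="NVIDIA L40",
    cloud_type="SECURE",
    ports="8888/http,22/tcp",
    network_volume_id="fpomddpaq0",
)

# ... run the workload against the pod ...

# Terminate (not just stop) the pod afterwards, so no stopped 0 GPU pod is
# left waiting for a GPU that may never free up on that host. Assumes the
# returned dict includes the new pod's id.
runpod.terminate_pod(pod["id"])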
There's probably a reason why they haven't done it yet: too complicated, or not possible with the infrastructure.
/frozen system state or something
The way to implement what you desire would be some sort of network storage system caching
I would open a separate thread and describe your issue more in depth
This is similar but also completely different
Confirmed an internal problem then
For this input:
{
    "input": {
        "cloudType": "ALL",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "volumeInGb": 40,
        "containerDiskInGb": 40,
        "minVcpuCount": 2,
        "minMemoryInGb": 15,
        "name": "RunPod Test Pod",
        "imageName": "runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04",
        "dockerArgs": "",
        "ports": "8888/http,22/tcp",
        "volumeMountPath": "/workspace",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
        "networkVolumeId": "fpomddpaq0",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'Something went wrong. Please try again later or contact support.', 'locations': [{'line': 12, 'column': 5}], 'path': ['podFindAndDeployOnDemand', 'gpus'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR'}}], 'data': {'podFindAndDeployOnDemand': None}}
For this input:
{
    "input": {
        "cloudType": "SECURE",
        "gpuCount": 1,
        "gpuTypeId": "NVIDIA L40",
        "networkVolumeId": "fpomddpaq0",
        "ports": "8888/http,22/tcp",
        "startJupyter": False,
        "startSsh": True,
        "supportPublicIp": True,
        "templateId": "8wwnezvz5k",
    }
}
Output:
Deployment Response: {'errors': [{'message': 'There are no longer any instances available with enough disk space.', 'path': ['podFindAndDeployOnDemand'], 'extensions': {'code': 'RUNPOD'}}], 'data': {'podFindAndDeployOnDemand': None}}
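For reference, that payload maps onto a plain podFindAndDeployOnDemand call roughly like this. It is a minimal sketch in Python, assuming the https://api.runpod.io/graphql endpoint with the API key passed as a query parameter; double-check both against the current API docs.

import os
import requests

# The same request expressed as a raw GraphQL mutation. cloudType is an enum
# in the schema, so SECURE is unquoted here.
MUTATION = """
mutation {
  podFindAndDeployOnDemand(
    input: {
      cloudType: SECURE
      gpuCount: 1
      gpuTypeId: "NVIDIA L40"
      networkVolumeId: "fpomddpaq0"
      ports: "8888/http,22/tcp"
      startJupyter: false
      startSsh: true
      supportPublicIp: true
      templateId: "8wwnezvz5k"
    }
  ) {
    id
    imageName
    machineId
  }
}
"""

resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": os.environ["RUNPOD_API_KEY"]},  # key goes in the query string
    json={"query": MUTATION},
)
print("Deployment Response:", resp.json())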
GraphQL didn't fix the problem :/
found it
GraphQL would be fantastic, is there documentation?
Output:
Error: There are no longer any instances available with the requested specifications. Please refresh and try again.
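When that error comes back, one sanity check is to confirm the exact gpuTypeId string via the gpuTypes query. A hedged sketch, assuming the same endpoint and that id/displayName/memoryInGb exist on the type; verify the field names against the schema.

import os
import requests

# List the GPU type ids the API knows about, to confirm "NVIDIA L40" is the
# exact string being requested.
QUERY = """
query GpuTypes {
  gpuTypes {
    id
    displayName
    memoryInGb
  }
}
"""

resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": os.environ["RUNPOD_API_KEY"]},
    json={"query": QUERY},
)
for gpu in resp.json()["data"]["gpuTypes"]:
    print(gpu["id"], gpu["displayName"], gpu["memoryInGb"])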
Sample:
runpodctl create pod --secureCloud --gpuType 'L40' --imageName 'runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04' --networkVolumeId 'fpomddpaq0' --ports '8888/http,22/tcp' --templateId '8wwnezvz5k'
they raised it internally
I tried it; the issue occurred with the imageName
Do you have an exact command that works for you (CLI) to create a pod?
Tried that
stopping is fine, it's starting it that fails
even though I have a network volume attached to it
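For completeness, starting a stopped pod corresponds to the podResume mutation on the API side, and this is where the 0 GPU situation shows up: the pod typically only comes back if the host it was stopped on still has GPUs free. A rough sketch, with POD_ID as a placeholder and the mutation shape to be verified against the docs.

import os
import requests

# Try to resume a stopped pod with one GPU. If the machine the pod lives on
# has no free GPUs, this is the call that fails.
MUTATION = """
mutation {
  podResume(input: { podId: "POD_ID", gpuCount: 1 }) {
    id
    desiredStatus
  }
}
"""

resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": os.environ["RUNPOD_API_KEY"]},
    json={"query": MUTATION},
)
print(resp.json())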