RunPod•7mo ago
r1

Serverless: any way to figure out what GPU type a job ran on?

Trying to get data on speeds across GPU types for our jobs. I'm wondering if the API exposes this anywhere, and if not, what the best way to figure it out would be.
16 Replies
Unknown User
Unknown User•7mo ago
Message Not Public
Sign In & Join Server To View
flash-singh
flash-singh•7mo ago
You don't need nvidia-smi; we expose an env variable to the worker with the GPU name in it. Look at our docs for the env variable names.
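Until the exact variable name turns up in the docs, one quick way to see every RunPod-provided variable a worker actually receives is to dump them from inside the handler (a minimal sketch, assuming the worker image runs Python):

```python
import os

# Print every environment variable RunPod injects into the worker,
# so you can spot the one carrying the GPU name.
for key in sorted(os.environ):
    if key.startswith("RUNPOD_"):
        print(f"{key}={os.environ[key]}")
```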
nerdylive
nerdylive•7mo ago
is this what you're referring to?
(image attached, no description)
nerdylive
nerdylive•7mo ago
I don't think that's the one. After browsing for a few minutes I still can't find it; please send the docs URL for that.
Unknown User
Unknown User•7mo ago
Message Not Public
Sign In & Join Server To View
nerdylive
nerdylive•7mo ago
What is the env variable key/name for it? Yeah, let's just wait for support; I'm too lazy to change and rebuild my image for that.
Unknown User
Unknown User•7mo ago
Message Not Public
Sign In & Join Server To View
nerdylive
nerdylive•7mo ago
ye lol
ashleyk
ashleyk•7mo ago
Those are GPU Cloud environment variables; these are the serverless ones:
RUNPOD_WEBHOOK_POST_STREAM=https://api.runpod.ai/v2/12345657890/job-stream/12345657890/$ID?gpu=NVIDIA+L4
RUNPOD_ENDPOINT_ID=mpoacd7wrmv2fc
RUNPOD_CPU_COUNT=6
RUNPOD_POD_ID=p8btjjjjq865pi
RUNPOD_GPU_SIZE=AMPERE_24
RUNPOD_MEM_GB=62
RUNPOD_GPU_COUNT=1
RUNPOD_VOLUME_ID=hbsp3mav9e
RUNPOD_POD_HOSTNAME=p8btjjjjq865pi-64410f26
RUNPOD_DEBUG_LEVEL=INFO
RUNPOD_ENDPOINT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RUNPOD_DC_ID=EU-RO-1
RUNPOD_AI_API_ID=mpoacd7wrmv2fc
RUNPOD_AI_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RUNPOD_WEBHOOK_GET_JOB=https://api.runpod.ai/v2/12345657890/job-take/12345657890?gpu=NVIDIA+L4
RUNPOD_WEBHOOK_PING=https://api.runpod.ai/v2/12345657890/ping/12345657890?gpu=NVIDIA+L4
RUNPOD_WEBHOOK_POST_OUTPUT=https://api.runpod.ai/v2/12345657890/job-done/12345657890/$ID?gpu=NVIDIA+L4
RUNPOD_PING_INTERVAL=4000
CUDA_VERSION=11.8.0
NV_CUDNN_VERSION=8.9.6.50
Doesn't seem to have the GPU type set directly, but you can get it from the `gpu` query parameter at the end of RUNPOD_WEBHOOK_POST_STREAM, RUNPOD_WEBHOOK_GET_JOB, RUNPOD_WEBHOOK_PING, and RUNPOD_WEBHOOK_POST_OUTPUT, as shown above.
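One way to pull the GPU name out of those webhook URLs from inside the worker, sketched in Python (assuming the variables are set as in the dump above; the helper name is just for illustration):

```python
import os
from typing import Optional
from urllib.parse import parse_qs, urlparse


def gpu_type_from_env() -> Optional[str]:
    """Extract the GPU name from the `gpu` query parameter RunPod
    appends to its serverless webhook URLs (e.g. ...?gpu=NVIDIA+L4)."""
    for var in (
        "RUNPOD_WEBHOOK_GET_JOB",
        "RUNPOD_WEBHOOK_PING",
        "RUNPOD_WEBHOOK_POST_OUTPUT",
        "RUNPOD_WEBHOOK_POST_STREAM",
    ):
        url = os.environ.get(var)
        if not url:
            continue
        qs = parse_qs(urlparse(url).query)
        if "gpu" in qs:
            # parse_qs already decodes '+' to a space, so
            # "NVIDIA+L4" comes back as "NVIDIA L4".
            return qs["gpu"][0]
    return None
```

On a worker with the environment shown above, this would return "NVIDIA L4". Checking several of the webhook variables is just defensive; any one of them that is set should carry the same `gpu` parameter.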
nerdylive
nerdylive•7mo ago
Nice, thanks. Eh, isn't this the one: RUNPOD_GPU_SIZE=AMPERE_24?
ashleyk
ashleyk•7mo ago
Not sure how that translates to L4 though
nerdylive
nerdylive•7mo ago
Ah, yeah, no full GPU type.
flash-singh
flash-singh•7mo ago
Looks like it isn't; I'll plan to add it. This is good to expose in the pod env.
nerdylive
nerdylive•7mo ago
Wow, you're a RunPod developer?
ashleyk
ashleyk•7mo ago
He isn't just a developer, he is the CTO 😉
nerdylive
nerdylive•7mo ago
Wow, that's amazing that he's still handling support lol