RunPod8mo ago
ribbit

Knowing Which Machine The Endpoint Used

Hi, I configured my endpoint to allow multiple GPU types, as in the attached image. When I run a pod, can I tell which type of GPU the pod is using? Thank you
6 Replies
ashleyk
ashleyk8mo ago
It's probably not going to get any machines.
1. Don't select GPU tiers that have no availability.
2. Tiers with Low availability will also be problematic, and you will have throttling issues.
3. If you are using network storage, you should select a different region with higher availability.
4. Click Advanced to see the GPU types.
ribbit
ribbitOP8mo ago
Hi! Thank you for the answer. The picture was only an example, and I'm fully aware that I won't get machines if it shows "unavailable". What I want to know is whether it's possible to tell which type of GPU a given worker is using. For example, if I select both 24GB and 24GB PRO, my endpoint just shows 24GB, so how would I know which tier it is using? And within a tier, is it possible to know the exact GPU model? For example, the 24GB tier includes the L4, A5000, and 3090. Is it possible to see which one the pod is using?
ashleyk
ashleyk8mo ago
I already told you, please open your eyes and read point 4.
Solution
n8tzto
n8tzto8mo ago
I believe what @gpu poor wants to know is whether it's possible to figure out which type of GPU a specific serverless job is using, especially when multiple GPU types are set for the endpoint. To my understanding, there currently isn't a way to programmatically find out the type of GPU a job is using. When you check the result through the /status/{job_id} endpoint or a webhook, no GPU information is provided. Even the event parameter passed to the runpod handler function doesn't include GPU details. Right now, the only way to know the GPU type is to check RunPod's web UI and see which worker is handling the job.
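For reference, polling the status endpoint looks roughly like the sketch below (stdlib only; the endpoint ID, job ID, and API key are placeholders you'd fill in). The point is that the JSON it returns carries fields like status and output, but nothing identifying the GPU:

```python
import json
import urllib.request


def status_url(endpoint_id: str, job_id: str) -> str:
    """Build the serverless status URL for a given job."""
    return f"https://api.runpod.ai/v2/{endpoint_id}/status/{job_id}"


def get_status(endpoint_id: str, job_id: str, api_key: str) -> dict:
    """Fetch the job's status JSON. Note: the response includes fields
    like 'status' and 'output', but no GPU-type field."""
    req = urllib.request.Request(
        status_url(endpoint_id, job_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```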
n8tzto
n8tzto8mo ago
I've thought of a way to get the GPU info in the handler function using Python. You can use either the torch library or the GPUtil library to get the GPU info. References:
- https://stackoverflow.com/questions/76581229/is-it-possible-to-check-if-gpu-is-available-without-using-deep-learning-packages
- https://stackoverflow.com/questions/64776822/how-do-i-list-all-currently-available-gpus-with-pytorch
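A minimal sketch along those lines: report the GPU model from inside the handler itself by shelling out to nvidia-smi (assumed to be present in the worker container; torch.cuda.get_device_name(0) or GPUtil.getGPUs() would work too if those libraries are installed). The handler body here is a placeholder, not real job logic:

```python
import subprocess


def gpu_name() -> str:
    """Return the GPU model the worker is running on, or 'unknown'
    when nvidia-smi is unavailable (e.g. on a CPU-only machine)."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or "unknown"
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "unknown"


def handler(event):
    # ... run the actual job here; this sketch just attaches the GPU
    # model to the result so it shows up in the /status response ...
    return {"gpu": gpu_name()}
```

Since the handler's return value is what /status/{job_id} reports as output, including the GPU name here makes it visible to the caller without checking the web UI.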
ribbit
ribbitOP8mo ago
Hi, sorry for the late reply. @n8tzto Ah, I see, thank you, I might try them out. Yes, that's what I wanted to know. Sorry if I didn't phrase the question clearly enough. Thanks all @ashleyk @n8tzto