Pipeline is not using the GPU on serverless
Hi!
I'm running bart-large-mnli on serverless, but as far as I can see from the worker stats it's not using the GPU. Do you know what I'm doing wrong?
The image is my current handler.py
As the Docker base I'm using `FROM runpod/base:0.6.2-cuda12.2.0`; I also tried `runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04`, but GPU usage is still 0%.
Let me know if you need more details!
Thank you 🙂
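(For context: the original handler.py was shared only as an image. A minimal handler for this setup might look something like the sketch below; the input keys `text` and `candidate_labels` are assumptions, not the poster's actual schema.)
```python
# Sketch of a handler.py for this setup (original was an image, not preserved).
# Assumption: the job input carries "text" and "candidate_labels".
import torch
import runpod
from transformers import pipeline

# device=0 pins the pipeline to the first CUDA GPU; -1 falls back to CPU.
device = 0 if torch.cuda.is_available() else -1

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
    device=device,
)

def handler(job):
    job_input = job["input"]
    return classifier(job_input["text"], job_input["candidate_labels"])

runpod.serverless.start({"handler": handler})
```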
How are you running the model?
this is the Dockerfile; I'm building + pushing to my Docker registry and running it on a 24 GB GPU on serverless
and this is the model downloader
I have a feeling this line:
Is doing something funky.
You should try doing a print right after that:
And see if your code thinks it is running on a CPU.
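Something like this, say (the exact snippet from the screenshot isn't preserved in this log):
```python
import torch

# If this prints "cpu", the pipeline was never placed on the GPU.
print("cuda available:", torch.cuda.is_available())
print("running on:", "cuda" if torch.cuda.is_available() else "cpu")
```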
thank you! I'll try it immediately and let you know
@PatrickR this is the output
I can give you the full repo if you need 🙂
Yep, will be useful for us to help you test it
That would be useful yes! Would love to test out and see what is going on.
here it is! thank you so much for your help
Risky click 🙂
It's just a zip, right? 🙂
if you'd prefer I can give you single files
this is the folder structure
Hmm, can you try some code to move the HF model you're using onto the CUDA GPU?
Try searching for code like this:
```python
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
model.to(device)
```
it's already doing that
Oh
How long does your process take?
In serverless
with 5 concurrent requests ~5s per request
If you try your pipeline on CPU, does it have the same performance?
let me try again 'cause I don't remember 🙂
I'll launch the 32 vCPU and let you know!
Sorry, not quite following the thread from the start... but how did you know it wasn't using the GPU again?
Right sure
Sure, no problem. I see 100% CPU usage and 0% for the GPU.
Oh... sometimes I think the usage shown in the UI isn't up to date, especially if your job only took a couple of seconds.
thanks for the tip, but I'm performing stress tests, constantly sending requests for 1 minute to understand how many requests it can handle, so it's always running
I see
another strange thing is that on a cheap CPU on a Hugging Face inference endpoint it runs faster than on a 24 GB GPU on RunPod (that's also why I think it's not using it) 🙂
still ~5 seconds with 5 concurrent requests on a 32 vCPU
Wow...
On GPU it takes more?
Hahaha, if you've got your code right and you think it's a GPU problem, feel free to report it via the site's contact button in the left menu then
Btw @BadNoise, have you tried this:
```
export CUDA_VISIBLE_DEVICES=0
```
@nerdylive tried it now, still 100% CPU usage and 0% for the GPU 😦
I might look at it
thank you 🙂
Hey, so I went through this. I have this input:
and this output:
Here is my Python code:
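(Patrick's actual snippet wasn't preserved in this log; a device-check handler along those lines might look like the following sketch.)
```python
# Hypothetical reconstruction of a device-check handler (the real code
# from this message is not preserved). Returns which device torch sees.
import torch
import runpod

def handler(job):
    use_cuda = torch.cuda.is_available()
    return {
        "device": "cuda" if use_cuda else "cpu",
        "gpu_name": torch.cuda.get_device_name(0) if use_cuda else None,
    }

runpod.serverless.start({"handler": handler})
```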
Did it work? Did it use the GPU?
So I am getting the GPU to run through CUDA.
Yes, the output of the device check is GPU.
BTW I used the CLI tool
```
runpodctl project create
```
for faster iteration cycles / not having to rebuild Docker constantly.
Hmm okay cool, what's the difference from BadNoise's code?
I rebuilt the new Docker image based off another image:
I think he's trying to use cache_model.py to cache the model locally when building the Docker image. He set local_files_only=True just to make sure it never downloads from the internet.
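(Roughly this pattern, presumably; the actual cache_model.py isn't shown in the thread.)
```python
# cache_model.py -- assumed shape; run at `docker build` time so the
# weights are baked into the image and never fetched at runtime.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "facebook/bart-large-mnli"
AutoTokenizer.from_pretrained(MODEL)
AutoModelForSequenceClassification.from_pretrained(MODEL)
```
The handler can then load with `local_files_only=True`, so a missing cache fails loudly instead of silently re-downloading at request time.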
yeah, what's wrong with that?
I don't see anything wrong with that 🙂, I'm still wondering what Patrick changed to make it start using the GPU.
ahh, I thought you'd found it already hahah
Sorry, my code was a little bit of a red herring. Here is a screenshot of it running on GPU though.
I guess it could be a dependency issue (torch) that's causing it not to use the GPU
hi! thank you so much for your help, I will try with the suggested Docker image 🙂
I think this might be the root cause. In your requirements.txt, you have to set:
```
torch==2.2.1
```
Make sure to install the CUDA version, not the CPU one.
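(One way to force a CUDA build of torch from requirements.txt is to point pip at PyTorch's CUDA wheel index, assuming a CUDA 12.1 base image; check pytorch.org for the index matching your image:)
```
--extra-index-url https://download.pytorch.org/whl/cu121
torch==2.2.1
```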
I'll try setting the torch version manually, because it's strange that I still see 0% GPU usage
so I have to remove torch and use pytorch and pytorch-cuda=12.1, right?
Assuming your base image is CUDA 12.1
that's crazy, still 0% 😩
It's using the GPU if the GPU memory shows as used.
That telemetry is not real-time and not reliable.
but it's strange that even if I run a stress test on it for over 1 minute it's never used 🙂
check nvidia-smi
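e.g. from a terminal on the worker, something like:
```
# Watch live GPU utilization and memory while the stress test runs
watch -n 1 nvidia-smi
```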
I added some logs in the code and it is using the GPU.
Yep, the GPU utilization telemetry always confuses people because it's not real-time
this one is interesting, lol
🙂
too confused hahah