RunPod · 2w ago
Yash

0% GPU utilization and 100% CPU utilization on Faster Whisper quick deploy endpoint

I used the "Quick Deploy" option to deploy a Faster Whisper custom endpoint (https://github.com/runpod-workers/worker-faster_whisper). Then I called the endpoint to transcribe a 1-hour-long podcast with the following parameters:
{
  "input": {
    "audio": "https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/traffic.megaphone.fm/ISOSO6446456065.mp3?updated=1715037715",
    "model": "large-v3",
    "language": "en"
  }
}
The job completed in 201 seconds. I'm not sure whether it is actually using the GPU and the graphs are wrong, or it is really only using the CPU and would have completed much faster on the GPU.
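For context, the call above can be sketched in Python. The `/runsync` route and Bearer-token auth are RunPod's standard serverless HTTP API; the endpoint ID and API key placeholders are assumptions, not values from this thread:

```python
# Sketch of submitting a job to the worker-faster_whisper endpoint.
# build_payload is a hypothetical helper mirroring the parameters above.

def build_payload(audio_url: str, model: str = "large-v3", language: str = "en") -> dict:
    """Build the job input expected by worker-faster_whisper."""
    return {
        "input": {
            "audio": audio_url,
            "model": model,
            "language": language,
        }
    }

payload = build_payload(
    "https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/traffic.megaphone.fm/"
    "ISOSO6446456065.mp3?updated=1715037715"
)

# Uncomment to actually submit the job (requires the `requests` package
# and your own endpoint ID / API key):
# import requests
# resp = requests.post(
#     "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync",
#     headers={"Authorization": "Bearer <API_KEY>"},
#     json=payload,
#     timeout=600,
# )
# print(resp.json())
```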
4 Replies
nerdylive · 2w ago
Try checking the code to make sure it uses the GPU. I think it does, but to be sure, check the code, or launch a CPU instance with 20+ vCPUs and compare.
Yash · 2w ago
I am getting back "device": "cuda" in my output: https://github.com/runpod-workers/worker-faster_whisper/blob/main/src/predict.py#L120. Does that mean it's actually using the GPU?
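A quick sanity check on that output field can be sketched like this; the output shape is assumed from the "device" key reported in the thread, and `used_gpu` is a hypothetical helper:

```python
def used_gpu(output: dict) -> bool:
    # The worker reports which device faster-whisper ran on:
    # "cuda" means the GPU path was taken, "cpu" means CPU-only.
    return output.get("device") == "cuda"

# Example response shape (illustrative only):
sample_output = {"device": "cuda", "transcription": "..."}
print(used_gpu(sample_output))  # True
```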
nerdylive · 2w ago
Yeah, it should be.
Yash · 2w ago
Ok, I think you're right. I tried it on a 32-vCPU instance and got a bunch of "nvidia-smi: not found" logs, plus it took longer than 200 seconds. So I guess the graph is wrong then. Thank you for your help!