Is Runpod's Faster Whisper Set Up Correctly for CPU/GPU Use?
Hi, I'm currently using the Faster Whisper worker provided by RunPod.
https://github.com/runpod-workers/worker-faster_whisper
While reviewing the code, I found something confusing:
https://github.com/runpod-workers/worker-faster_whisper/blob/main/builder/fetch_models.py#L15
Is there a specific reason for using "cpu" instead of "gpu"?
Thanks!
I guess this is the relevant answer (it was a duplicate question from #general):
https://discord.com/channels/912829806415085598/948767517332107274/1302180007174737951
It's only used for downloading the model at build time: the Docker build environment has no GPU, so device="cpu" is just there to trigger the download and cache the weights. At runtime the handler loads the cached model on the GPU.
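A minimal sketch of that pattern, assuming the standard faster_whisper WhisperModel API (the model name and compute types here are illustrative, not the worker's exact values):

```python
from faster_whisper import WhisperModel

# Build step: no GPU is available inside the Docker build, so "cpu" is used
# purely to trigger the download and caching of the model weights.
def fetch_model(model_name: str = "base") -> None:
    WhisperModel(model_name, device="cpu", compute_type="int8")

# Runtime (serverless handler): the cached weights are loaded onto the GPU.
def load_model_for_inference(model_name: str = "base") -> WhisperModel:
    return WhisperModel(model_name, device="cuda", compute_type="float16")
```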