Running fine-tuned faster-whisper model
Hello. Is it possible to run a fine-tuned faster-whisper model using RunPod's faster-whisper endpoint?
Furthermore, does it scale to hundreds of concurrent users?
5 Replies
The worker is open source, so you can edit it to use your own model:
https://github.com/runpod-workers/worker-faster_whisper
Edit the worker, build the Docker image, push it to Docker Hub, and deploy your own endpoint.
Interesting! Thank you.
It's possible to fine-tune, but not through that endpoint, I think. You'd have to build a custom Docker image that fine-tunes from the inputs.
My model is already fine-tuned, so I'd just need to load it for inference; from what I see, I can edit the worker above to use the fine-tuned model instead of one of the default models.
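For reference, the swap described above might look something like the sketch below. This assumes the fine-tuned model has first been converted to CTranslate2 format, which faster-whisper requires, and that the worker's handler is edited to pass a local directory path to `WhisperModel()` instead of a model-size name. The base-image tag and all paths here are hypothetical, not taken from the worker repo.

```dockerfile
# Hypothetical sketch: bake a CTranslate2-converted fine-tuned model into a
# custom build of the open-source worker. Clone the worker repo first and
# change its handler so WhisperModel() receives /models/my-finetuned-whisper
# instead of a size name like "base".

# Base image name is an assumption -- in practice, build from the cloned
# worker's own Dockerfile.
FROM runpod/worker-faster_whisper:latest

# Model directory converted beforehand with CTranslate2's converter, e.g.:
#   ct2-transformers-converter --model ./my-finetuned-whisper \
#       --output_dir ./my-finetuned-whisper-ct2 --quantization float16
COPY my-finetuned-whisper-ct2/ /models/my-finetuned-whisper/
```

After that, build and push the image (e.g. `docker build` then `docker push` to Docker Hub) and point a new serverless endpoint at it, as described above.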