RunPod · 3d ago
Dobby

Fine-Tuned Whisper V3 Large Turbo Configuration

Hi, I have a fine-tuned version of Whisper V3 Large Turbo on Hugging Face. I’ve successfully tested it on Google Colab, and everything works as expected. However, I’ve run into trouble with deployment and am unsure of the easiest way to manage the process. I tried the Docker image for the Faster Whisper (faster-whisper) implementation, but hit issues loading my model in the CT2 (CTranslate2) format. At this point I’m fine with skipping the faster version; I just want a straightforward way to deploy the model, either as a Pod or a serverless endpoint. Does anyone have suggestions, or know of a clear step-by-step tutorial for this? I couldn’t find any resources that explain the process clearly. 🤔
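(Editor's note, not from the original poster: a common cause of CT2 loading failures with faster-whisper is pointing it at a raw Hugging Face checkpoint that has not yet been converted with CTranslate2's `ct2-transformers-converter` tool. Assuming that is what happened here, the sketch below distinguishes the two directory layouts; the file names it checks are the usual CTranslate2 output (`model.bin` + `config.json`) versus a raw transformers checkpoint (`model.safetensors` or `pytorch_model.bin`). The function name is invented for illustration.)

```python
import os

# Hypothetical helper: faster-whisper can only load a CTranslate2-converted
# directory (model.bin + config.json). A raw transformers checkpoint
# (model.safetensors / pytorch_model.bin) must be converted first with
# ct2-transformers-converter, otherwise loading fails.
def checkpoint_kind(model_dir: str) -> str:
    files = set(os.listdir(model_dir))
    if "model.bin" in files and "config.json" in files:
        return "ct2"            # already converted; loadable by faster-whisper
    if files & {"model.safetensors", "pytorch_model.bin"}:
        return "transformers"   # needs CT2 conversion before faster-whisper can use it
    return "unknown"            # neither layout recognized
```

If the directory turns out to be a raw checkpoint, the conversion is normally done with CTranslate2's CLI, roughly `ct2-transformers-converter --model <your-hf-repo> --output_dir <out> --quantization float16` (check the CTranslate2 docs for the exact flags your version supports).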
0 Replies
