Has anyone here tried using Mojo with local Whisper from OpenAI?

Hi! I'm trying to run OpenAI's Whisper locally, but Python in general is pretty slow. Has anyone tried running any variant of Whisper locally?
5 Replies
Martin Dudek · 5mo ago
You might want to try https://github.com/ggerganov/whisper.cpp; it runs very well on my Mac (it uses Metal). I thought about looking into porting it to Mojo, but without GPU support it won't lead to anything impressive, I'm afraid...
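For anyone finding this thread later, here is a minimal sketch of the usual whisper.cpp workflow. The commands follow the project's README at the time of writing; build targets, script paths, and binary names may differ between versions, so check the repo before copying:

```shell
# Build whisper.cpp (Metal is picked up automatically on Apple Silicon)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make

# Download a ggml-format model; base.en is a reasonable start on modest hardware
bash ./models/download-ggml-model.sh base.en

# Transcribe a 16 kHz mono WAV file
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

Input audio generally has to be 16 kHz WAV; `ffmpeg -i input.mp3 -ar 16000 -ac 1 out.wav` is a common conversion step.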
Martin Dudek · 5mo ago
Do you have an Nvidia GPU or Apple Silicon? If not, I'm afraid you'll need to use the smaller models to get acceptable run times.
White Frost (OP) · 5mo ago
I have a small Nvidia GPU (GeForce GTX 1650). When you say "without GPU support", do you mean Mojo doesn't support GPUs? (I'm a novice at understanding the hardware/compute relationships with machine learning models.)
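Side note for sizing: the GTX 1650 has 4 GB of VRAM, so the smaller checkpoints are the realistic options. A rough sketch of the trade-off, using the approximate VRAM figures published in the openai/whisper README (treat these as ballpark numbers, and `largest_model_for` as an illustrative helper, not part of any library):

```python
# Model name -> (parameter count, approx. VRAM needed in GB),
# approximate figures from the openai/whisper README.
WHISPER_MODELS = {
    "tiny":   ("39 M",   1.0),
    "base":   ("74 M",   1.0),
    "small":  ("244 M",  2.0),
    "medium": ("769 M",  5.0),
    "large":  ("1550 M", 10.0),
}

def largest_model_for(vram_gb: float) -> str:
    """Pick the biggest model whose approximate VRAM need fits the card."""
    fitting = [name for name, (_, need) in WHISPER_MODELS.items()
               if need <= vram_gb]
    return fitting[-1] if fitting else "tiny"  # fall back to the smallest

print(largest_model_for(4.0))   # a 4 GB card like the GTX 1650 -> "small"
```

So on that card, `small` (or `base`) is the sensible ceiling; `medium` and `large` would spill out of VRAM.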
White Frost (OP) · 5mo ago
@Martin Dudek it seems I should be using MAX instead of Mojo?
Martin Dudek · 5mo ago
Mojo right now does not compile to code that runs on the GPU; it runs on the CPU. MAX is about to get GPU support, so it would be the way to go. But I haven't looked into the MAX graph engine myself so far, so I can't help with that (I'm just about to start learning about it).