If you use a template with oobabooga installed, it can download the model for you.
Yeah, but I want to run it with Ollama via a Modelfile
You can run Ollama normally
run the installation script for Linux and then start the Ollama server
and it will work fine
use a PyTorch template as a starting point to launch a pod
and then just use the terminal
through the Jupyter notebook server
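For reference, a minimal sketch of that flow (the curl line is the standard installer from Ollama's docs; the model name is just an example):

curl -fsSL https://ollama.com/install.sh | sh   # install Ollama on Linux
ollama serve &                                  # start the Ollama server in the background
ollama run llama3 "hello"                       # sanity check: pulls the model and runs a prompt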
But what do I need to write in the Modelfile?
Normally it's FROM XXX, but here there are multiple files
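(For context, a single-file Modelfile is normally just this; the filename and parameter are placeholder examples:

FROM ./model.Q4_0.gguf
PARAMETER temperature 0.7

then ollama create mymodel -f Modelfile and ollama run mymodel)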
No clue, I don't use custom models, but I'm sure there are YouTube videos / docs on it
https://youtu.be/0ou51l-MLCo?si=OiedA2tChtvd5PDG
just giving a complete shot in the dark, I haven't watched this
(YouTube embed: Matt Williams, "Adding Custom Models to Ollama")
Here is the docker command I mentioned.
docker run --rm -v .:/model ollama/quantize -q q4_0 /model
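(If I remember right, -v .:/model mounts the current directory into the container, -q q4_0 selects 4-bit quantization, and the quantized GGUF gets written back into that same mounted directory.)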
but I see other similar videos
The issue is that every video shows only a single GGUF file, not multi-file models.
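One workaround I've seen for split GGUF shards (assuming they came from llama.cpp's gguf-split tool; the filenames here are placeholders) is to merge them into one file and point FROM at the result:

./llama-gguf-split --merge model-00001-of-00003.gguf merged.gguf   # merge all shards into merged.gguf

then in the Modelfile: FROM ./merged.gguf. Newer Ollama builds are also supposed to import a directory of safetensors directly with FROM pointing at the folder, but I haven't verified that on a pod.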