How to override ollama/ollama image to run a model at startup

Hi, I'm trying to run pods using the ollama template (ollama/ollama) and want to override the default template so that the pod serves the model I want at creation time. I tried setting `./bin/ollama serve && ollama run llama3.1:8b` as the "container start command", but it doesn't work. Is there any way to do this? Thanks!
1 Reply
justin · 3w ago
https://discord.com/channels/912829806415085598/1261031850651029524 I recommend looking here, because: 1) using Ollama seems to require some configuration to get it working with the GPU properly, and 2) you could maybe use a bash script that starts the server in the background on startup and then kicks off the model download, or change the path where Ollama looks for models to your network volume. I personally could never figure out how to bake an Ollama model into my template.
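The background-script idea above could be sketched roughly like this as a container start command / entrypoint. This is just a sketch, not a tested template: it assumes the `ollama` binary is on `PATH` inside the `ollama/ollama` image, and `MODEL` is a hypothetical env var for the tag to pull. Note that `serve && run` fails because `serve` never exits, so `&` plus a readiness wait is used instead:

```shell
#!/usr/bin/env bash
# Hypothetical startup script for the ollama/ollama image (untested sketch).
set -e

# MODEL is an assumed env var; defaults to the tag from the question.
MODEL="${MODEL:-llama3.1:8b}"

# Start the server in the background instead of chaining with &&,
# since `ollama serve` blocks and the && branch would never run.
ollama serve &

# Wait until the server's API responds before pulling the model.
until ollama list >/dev/null 2>&1; do
  sleep 1
done

# Download the model so it is ready to serve requests.
ollama pull "$MODEL"

# Keep the container alive by waiting on the background server process.
wait
```

You would bake this script into the image (or a network volume) and point the container start command at it, e.g. `bash /entrypoint.sh`.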