RunPod
Created by Yebs on 3/17/2025 in #⚡|serverless
Can you now run gemma 3 in the vllm container?
Ollama
25 replies
I used Ollama
RunPod
Created by ammar on 3/17/2025 in #⚡|serverless
Ollama serverless?
is it as good as the runpod vllm template? in terms of performance and concurrency stuff
7 replies
I used the preset vLLM; llama 3.2b worked but the new Gemma 3 didn't
I deleted it, but it seems it's because Gemma 3 is a new model, so the bundled transformers library is relatively outdated, afaik?
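For context, the thread's root issue (Gemma 3 failing on the preset vLLM worker while Llama worked) is typically a library-version problem: Gemma 3 support only landed in recent vllm/transformers releases. A hedged sketch of a custom worker image that pins newer releases — the base image tag and exact version floors here are assumptions, not confirmed in the thread; check the vLLM and transformers release notes before relying on them:

```dockerfile
# Sketch only: assumes the RunPod vLLM worker base image name/tag and that
# these vllm/transformers releases include Gemma 3 support — verify both.
FROM runpod/worker-v1-vllm:stable-cuda12.1.0

# Upgrade to versions believed to ship Gemma 3 architecture support.
RUN pip install --upgrade "vllm>=0.8.0" "transformers>=4.50"
```

Building and deploying this as a custom serverless image (instead of the preset template) would let the new architecture load without waiting for the preset to be updated.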
RunPod
Created by Yebs on 3/12/2025 in #⚡|serverless
roll out progress taking a while
8 replies
I did that but ya know
+ I have like 5 throttled workers on this serverless endpoint, like oh my god
idk, a few GB only probably; most of our models are on a network volume
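Keeping models on a network volume, as mentioned above, keeps the worker image small and speeds up rollouts. On RunPod serverless the attached network volume is mounted at `/runpod-volume`, so handlers usually resolve weights from there rather than baking them into the image. A minimal sketch — the `models/` subdirectory and the `org--model` flattening convention are hypothetical examples, not anything RunPod mandates:

```python
from pathlib import Path

# RunPod serverless mounts the attached network volume at this path.
VOLUME_ROOT = Path("/runpod-volume")

def resolve_model_dir(model_name: str) -> Path:
    """Return the expected weights directory for a model stored on the volume.

    A name like "org/model" is flattened to "org--model" — a common on-disk
    convention used here for illustration (an assumption, not RunPod-specific).
    """
    return VOLUME_ROOT / "models" / model_name.replace("/", "--")
```

A worker would then pass the resolved directory to its model loader instead of a Hugging Face repo id, so cold starts read from the mounted volume rather than re-downloading weights.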
RunPod
Created by Yebs on 2/28/2025 in #⚡|serverless
Getting executiontimeout exceeded
@PRB
6 replies
a