Workers configuration for Serverless vLLM endpoints: 1-hour lecture with 50 students

Hey there, I need to show 50 students how to do RAG with open-source LLMs (e.g., Llama 3). Which configuration do you suggest? I want to make sure they have a smooth experience. Thanks!
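For context, a minimal sketch of what such a demo might look like on a serverless vLLM endpoint, which exposes an OpenAI-compatible API. The `<ENDPOINT_ID>` placeholder, the base URL pattern, and the in-memory "retrieval" step are assumptions for illustration, not from the thread:

```python
# Minimal RAG sketch against a serverless vLLM endpoint.
# <ENDPOINT_ID> is a placeholder; the base_url pattern assumes Runpod's
# OpenAI-compatible route for serverless vLLM workers.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<ENDPOINT_ID>/openai/v1",
    api_key=os.environ["RUNPOD_API_KEY"],
)

# Toy "retrieval": a real demo would query a vector store instead.
docs = [
    "vLLM serves Llama 3 behind an OpenAI-compatible API.",
    "Serverless workers scale up with demand and down to zero when idle.",
]
question = "How is Llama 3 served here?"

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "Answer using this context:\n" + "\n".join(docs)},
        {"role": "user", "content": question},
    ],
)
print(resp.choices[0].message.content)
```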
digigoblin · 9mo ago
Depends on which Llama 3 model
Madiator2011 · 9mo ago
For 70B non-quantized you would need at least 2x 80GB
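The 2x 80GB figure follows from the weights alone: unquantized FP16/BF16 stores 2 bytes per parameter. A quick back-of-envelope check, as a sketch:

```python
# FP16/BF16 weights take 2 bytes per parameter.
params = 70e9          # Llama 3 70B
bytes_per_param = 2    # unquantized FP16/BF16
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~140 GB, before KV cache
# 140 GB > 80 GB, so one 80GB card can't hold it; 2x 80GB is the minimum.
```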
nerdylive · 9mo ago
Or 8x 24GB works. Why not use Pods, btw?
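For the 8x 24GB route, vLLM can shard the model with tensor parallelism. A sketch, assuming eight visible GPUs and Hugging Face access to the 70B weights:

```python
# Shard Llama 3 70B across 8 GPUs with vLLM tensor parallelism.
# 8 x 24GB = 192GB total: room for ~140GB of weights plus KV cache.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    tensor_parallel_size=8,  # split each layer's weights across the 8 GPUs
)
outputs = llm.generate(["What is RAG?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```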
digigoblin · 9mo ago
Pods are expensive
nerdylive · 9mo ago
I see
__den3b__ (OP) · 9mo ago
The 8B-parameter model could also suffice
nerdylive · 9mo ago
1x 24GB VRAM GPU works; 16GB might work as well
Solution
digigoblin · 9mo ago
16GB isn't enough, you need 24GB
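The 24GB floor for the unquantized 8B model follows the same arithmetic: 8B x 2 bytes ≈ 16GB of weights, leaving a 16GB card nothing for the KV cache. A single-GPU sketch; the `gpu_memory_utilization` and `max_model_len` values are illustrative choices:

```python
# Llama 3 8B on one 24GB GPU: ~16GB of FP16 weights + KV-cache headroom.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    gpu_memory_utilization=0.90,  # fraction of VRAM vLLM is allowed to claim
    max_model_len=8192,           # cap context length to bound the KV cache
)
outputs = llm.generate(["Explain RAG in one sentence."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```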
digigoblin · 9mo ago
Unless you use a quantized version
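With a 4-bit quantized checkpoint the weights shrink to roughly 8B x 0.5 bytes ≈ 4-5GB, which is why 16GB becomes workable. A sketch assuming a community AWQ build (the repo name is an assumption; any AWQ Llama 3 8B checkpoint would work the same way):

```python
# A 4-bit AWQ build of Llama 3 8B fits easily in 16GB of VRAM.
from vllm import LLM

llm = LLM(
    model="casperhansen/llama-3-8b-instruct-awq",  # assumed community AWQ repo
    quantization="awq",
)
```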
nerdylive · 9mo ago
Oh
