Worker configuration for Serverless vLLM endpoints: 1-hour lecture with 50 students

Hey there, I need to show 50 students how to do RAG with open-source LLMs (e.g., Llama 3). Which type of configuration do you suggest? I wanna make sure they have a smooth experience. Thanks!
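For context, here is a minimal sketch of what such a demo could run against a serverless vLLM endpoint, assuming the endpoint exposes the vLLM worker's OpenAI-compatible API. The endpoint ID, API key, model name, and the in-memory "retrieval" step are all placeholders, not a definitive setup:

```python
# Minimal naive-RAG sketch against a serverless vLLM endpoint.
# Assumptions: the endpoint exposes an OpenAI-compatible API;
# ENDPOINT_ID, RUNPOD_API_KEY, and the model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/ENDPOINT_ID/openai/v1",  # hypothetical endpoint ID
    api_key="RUNPOD_API_KEY",  # your RunPod API key
)

# Toy "retrieval": a real demo would use a vector-store lookup instead.
documents = [
    "Llama 3 8B fits on a single 24GB GPU in fp16.",
    "Serverless endpoints scale workers up and down with request load.",
]

def answer(question: str) -> str:
    # Stuff the retrieved context into the prompt (the core of naive RAG).
    context = "\n".join(documents)
    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model on the endpoint
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("What GPU do I need for Llama 3 8B?"))
```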
digigoblin · 7mo ago
Depends on which Llama 3 model.
Madiator2011 · 7mo ago
For the 70B non-quantized model you would need at least 2x 80GB.
nerdylive · 7mo ago
Or 8x 24GB works. Why don't you use Pods, btw?
digigoblin · 7mo ago
Pods are expensive.
nerdylive · 7mo ago
I see.
__den3b__ (OP) · 7mo ago
The 8B-parameter model can also suffice.
nerdylive · 7mo ago
1x 24GB VRAM GPU works; 16GB might work as well.
Solution
digigoblin · 7mo ago
16GB isn't enough; you need 24GB.
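The same weight-memory arithmetic explains why 16GB is too tight for the 8B model in fp16:

```python
# Why 16GB falls short for Llama 3 8B in fp16: the weights alone are ~15GB,
# leaving almost nothing for the KV cache vLLM needs to serve requests.
params = 8e9
weights_gb = params * 2 / 1024**3   # fp16 = 2 bytes/param
print(f"8B fp16 weights: ~{weights_gb:.1f} GB")  # ~14.9 GB -> a 24GB card gives headroom
```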
digigoblin · 7mo ago
Unless you use a quantized version.
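For reference, a hedged sketch of loading a 4-bit AWQ-quantized Llama 3 8B with vLLM, which shrinks the weights to roughly 5GB so a 16GB card can serve it. The checkpoint name is an example; any AWQ-quantized Llama 3 repo would work:

```python
# Sketch: serving a 4-bit AWQ quantized Llama 3 8B with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="casperhansen/llama-3-8b-instruct-awq",  # assumed AWQ checkpoint
    quantization="awq",                            # 4-bit weights, ~5GB
)
outputs = llm.generate(["What is RAG?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```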
nerdylive · 7mo ago
Oh.