Run Mixtral 8x22B Instruct on vLLM worker

md
md•2mo ago
Hello everybody, is it possible to run Mixtral 8x22B on the vLLM worker? I tried to run it on the default configuration with 48 GB GPUs (A6000, A40), but it's taking too long. What are the requirements for running Mixtral 8x22B successfully? This is the model I'm trying to run: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
81 Replies
md
md•2mo ago
Sorry, I'm new to using GPUs for LLM models.
nerdylive
nerdylive•2mo ago
Oh, it actually needs a lot of VRAM to run. You can try using half to run it with less VRAM. All good.
md
md•2mo ago
Thanks for the reply. What do you mean by half?
nerdylive
nerdylive•2mo ago
In the quantization setting, use any of the options to run it with less VRAM.
md
md•2mo ago
Let me check. Which GPU would be suitable to run this, btw?
nerdylive
nerdylive•2mo ago
Oh wait, I mean the DTYPE*
nerdylive
nerdylive•2mo ago
[image attachment]
nerdylive
nerdylive•2mo ago
Also this too:
[image attachment]
nerdylive
nerdylive•2mo ago
[image attachment]
nerdylive
nerdylive•2mo ago
Try experimenting with those in the env variables of your endpoint.
md
md•2mo ago
sure
nerdylive
nerdylive•2mo ago
Environment variables | RunPod Documentation
Environment variables configure your vLLM Worker by providing control over model selection, access credentials, and operational parameters necessary for optimal Worker performance.
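For reference, here is a minimal sketch of the kind of endpoint environment-variable overrides being discussed. The variable names are taken from the worker-vllm docs linked above, but double-check the exact names and accepted values against that page; the values here are only illustrative.

```python
# Hedged sketch: example env overrides for a RunPod vLLM worker endpoint.
# Names follow the worker-vllm documentation linked above; values are illustrative.
env_overrides = {
    "MODEL_NAME": "mistralai/Mixtral-8x22B-Instruct-v0.1",
    "DTYPE": "half",                   # fp16 weights, the "half" setting discussed above
    "QUANTIZATION": "awq",             # only if MODEL_NAME points at an AWQ-quantized repo
    "MAX_MODEL_LEN": "8192",           # smaller context window -> smaller KV cache
    "GPU_MEMORY_UTILIZATION": "0.95",  # fraction of each GPU vLLM is allowed to claim
}

# Print in the KEY=VALUE form you would paste into the endpoint's env settings.
for key, value in env_overrides.items():
    print(f"{key}={value}")
```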
md
md•2mo ago
I think this would also be a good option to set, right? Since it will divide the memory.
[image attachment]
nerdylive
nerdylive•2mo ago
Never tried the vLLM worker yet, tbh. Sure, if you want to try it, go ahead; yeah, it seems like a good option to try.
md
md•2mo ago
Ah, I see. Would there be any substantial decrease in quality if I ran the model with half the memory?
nerdylive
nerdylive•2mo ago
Maybe try reading around about quantization, etc.
md
md•2mo ago
Cool, thanks.
nerdylive
nerdylive•2mo ago
Because I don't know much about them, but yeah, I think it will if you use the lower dtype.
md
md•2mo ago
Yeah, makes sense @nerdylive. Looks like Mixtral 8x22B requires up to 300 GB of VRAM, and the highest available GPU has 80 GB of VRAM. If it used half the memory, which would be 150 GB, it should be possible to divide ~50 GB of VRAM between 3 workers. Idk if that's possible. Do you know somebody from the team who can help me out here? My company actually wants to deploy this model for our product.
nerdylive
nerdylive•2mo ago
Wew, where did you get that estimate from? Yeah, it's a huge model.
md
md•2mo ago
From the Mistral Discord.
nerdylive
nerdylive•2mo ago
Nah, it's not possible yet to divide them onto 3 workers, I think. I see. With Pods it's possible to use multiple GPUs. Or maybe explore Accelerate for this (not sure).
md
md•2mo ago
I see, let me search. Though could somebody from the RunPod team confirm this?
nerdylive
nerdylive•2mo ago
Maybe contact support for that, like from the website.
md
md•2mo ago
Ah ok, I misunderstood; this is a community server, sorry.
nerdylive
nerdylive•2mo ago
Yeah, there are some staff here, but it's easier for them to manage support requests via the website support. It's fine.
md
md•2mo ago
Yeah, I will contact them through official channels. Thanks for all the help, appreciate it. How do I mark this post as solved?
nerdylive
nerdylive•2mo ago
Up to you. Did you find a way to run that model yet, without the worker-splitting idea? I'd like to know your updates too haha. I think don't mark it yet.
md
md•2mo ago
The only way I have right now is to use a VM with 300 GB of VRAM, but it would be costly and I'm not sure I can find a VM like that. I opted for RunPod because it had cheap pricing and easy deployments. Sure, I will post updates here. One guy on the Mistral Discord also wanted to split memory to run the model across 4x GPUs; they suggested vLLM for this, which is what RunPod workers are using, I think.
nerdylive
nerdylive•2mo ago
vLLM supports that?
md
md•2mo ago
I haven't looked into it yet, but they suggested it and TGI.
nerdylive
nerdylive•2mo ago
Try asking which feature it is, or how.
md
md•2mo ago
yeah
nerdylive
nerdylive•2mo ago
Thanks
md
md•2mo ago
@nerdylive this is what I got:
[image attachment]
md
md•2mo ago
https://docs.mistral.ai/deployment/self-deployment/vllm/ In this guide they set the tensor parallel size to 4. I wonder if RunPod does it as well.
vLLM | Mistral AI Large Language Models
vLLM can be deployed using a docker image we provide, or directly from the python package.
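For context, the Mistral guide boils down to roughly the following when using vLLM's Python API directly on a single machine (e.g. a Pod) with four visible GPUs. This is a sketch of the upstream vLLM API, not of what the RunPod worker does internally.

```python
# Sketch of the tensor-parallel setup from the Mistral guide, via vLLM's Python API.
# Assumes a single node with 4 GPUs visible and enough combined VRAM for the weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    tensor_parallel_size=4,  # shard the model across the 4 GPUs on this node
    dtype="half",            # the "half" dtype discussed earlier in the thread
)

sampling = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain tensor parallelism in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```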
nerdylive
nerdylive•2mo ago
Oh, I think it's configurable. Check the vLLM docs on RunPod.
md
md•2mo ago
let me check
md
md•2mo ago
Oh, this was the option lol
[image attachment]
md
md•2mo ago
I set it to 3 but still ran out of memory. I used 2 GPUs per worker as well, actually 80 GB.
nerdylive
nerdylive•2mo ago
Wow it works?
md
md•2mo ago
No, I ran out of memory even with the above config.
Tobias Fuchs
Tobias Fuchs•2mo ago
I'm trying to do the same thing as you right now lol. Will update if I figure something out.
md
md•2mo ago
Thanks a lot.
Madiator2011
Madiator2011•2mo ago
@Alpay Ariyak maybe you could help here 🙂
Alpay Ariyak
Alpay Ariyak•2mo ago
Hi, you need at least 2x 80 GB GPUs afaik.
md
md•2mo ago
Hey, yes, I used 2x 80 GB GPUs per worker with 3 workers, but I got an error: torch.cuda ran out of memory while trying to allocate.
nerdylive
nerdylive•2mo ago
Wait, what, there is 2x 80 GB? I thought it was 48 GB only. How did you get that? Oof, it still needs more memory, huh. Try sending the full logs.
md
md•2mo ago
Yeah, I will try soon. I just selected the option for 2 GPUs per worker with the 80 GB H100.
Bryan
Bryan•2mo ago
Oh? I can only do 2 GPUs per worker with 48GB GPUs, not 80GB GPUs. Are you sure?
[image attachment]
Bryan
Bryan•2mo ago
Unless you're doing a Pod instead of serverless, in which case ignore me 🙂
Alpay Ariyak
Alpay Ariyak•2mo ago
My apologies, you actually need 4x 80 GB for 8x22B.
nerdylive
nerdylive•2mo ago
Is that actually possible in serverless?
Alpay Ariyak
Alpay Ariyak•2mo ago
Not with the current limits, no
nerdylive
nerdylive•2mo ago
Ah, that sucks. Alright. Btw, what are streams transported over? SSE? How do I retrieve them in Python and iterate over the responses async?
Alpay Ariyak
Alpay Ariyak•2mo ago
With OpenAI compatibility?
nerdylive
nerdylive•2mo ago
No, the default stream endpoint: {{URL}}/stream/:id
Alpay Ariyak
Alpay Ariyak•2mo ago
Not SSE, a regular GET request. It will return the outputs yielded by the worker since the last /stream call.
nerdylive
nerdylive•2mo ago
Wait, so I poll the stream endpoint?
Alpay Ariyak
Alpay Ariyak•2mo ago
What goal do you have in mind?
nerdylive
nerdylive•2mo ago
Like websockets, probably? I was hoping the stream endpoint would be like that.
Alpay Ariyak
Alpay Ariyak•2mo ago
OpenAI compatibility streaming is through SSE
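Putting the above together, here is a rough Python sketch of the polled /stream/:id pattern described in this exchange. The endpoint paths follow the usual RunPod serverless REST layout; the exact response field names ("stream", "output", "status") are assumptions to verify against the API docs.

```python
# Hedged sketch: submit a job, then poll /stream/:id for whatever the worker has
# yielded since the previous call (plain GET, no SSE), as described above.
import os
import time
import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]  # hypothetical: your serverless endpoint id
API_KEY = os.environ["RUNPOD_API_KEY"]
BASE = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Queue a job; the payload shape depends on your worker's handler.
job = requests.post(f"{BASE}/run", headers=HEADERS, json={"input": {"prompt": "Hello!"}}).json()
job_id = job["id"]

while True:
    chunk = requests.get(f"{BASE}/stream/{job_id}", headers=HEADERS).json()
    for item in chunk.get("stream", []):       # outputs yielded since the last poll
        print(item.get("output", ""), end="", flush=True)
    if chunk.get("status") in ("COMPLETED", "FAILED", "CANCELLED"):
        break
    time.sleep(0.5)                            # simple fixed polling interval
```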
richterscale9
richterscale9•2mo ago
Hey, sorry to hijack the thread, I'm also looking into deploying vLLM on RunPod serverless. The landing page indicates that it should be possible to bring your own container, not pay for any idle time, and have <250ms cold boot. Is this true? It sounds too good to be true.
nerdylive
nerdylive•2mo ago
Oh, what about the stream I'm talking about?
Alpay Ariyak
Alpay Ariyak•2mo ago
Yes, through flash boot. That one is strictly polled.
nerdylive
nerdylive•2mo ago
Oh alright
richterscale9
richterscale9•2mo ago
Does this 250ms cold boot time really include everything? Or does it only contain some things, such that the actual cold boot time might be 30 seconds or something? For example, the time to load LLM weights into memory typically takes more than 10 seconds.
Alpay Ariyak
Alpay Ariyak•2mo ago
Everything, due to not needing to reload weights
richterscale9
richterscale9•2mo ago
That's just insane if it really works
Alpay Ariyak
Alpay Ariyak•2mo ago
Haha try it out!
richterscale9
richterscale9•2mo ago
Yeah, reading the docs right now to figure out everything I need to do to try it... I currently have a Docker image that spins up a fork of the oobabooga web UI; I'm thinking about setting that up for the serverless experiment.
md
md•2mo ago
Yeah, you're actually right, I confused it with 80 GB, my bad guys. Even with using dtype half? We need 4x 80 GB?
Bryan
Bryan•2mo ago
8x22B = 176B parameters. At 16-bit (2 bytes per parameter), that's 352 GB just for the model parameters. At 8-bit (1 byte per parameter) it's still 176 GB. I could be mistaken about this, I'm not an expert on this for sure. But my understanding is that you can just fit 8x22B on 4x 80 GB with 8-bit quantization.
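A quick worked version of that arithmetic, weights only, using the parameter count quoted above; KV cache and activations come on top of this.

```python
# Weights-only memory estimate, following Bryan's reasoning above.
# 176e9 is the parameter count quoted in the message, not an official figure.
params = 176e9
for bits in (16, 8, 4):
    gib = params * bits / 8 / 1024**3
    print(f"{bits}-bit weights: ~{gib:,.0f} GiB")
# Prints roughly 328 GiB, 164 GiB, and 82 GiB (GiB, so a bit below the GB figures above).
```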
md
md•2mo ago
I see, yeah, that makes sense. I will revisit this in the future.
Alpay Ariyak
Alpay Ariyak•2mo ago
We're raising the serverless GPU count limits around next week, I believe, even up to 10x A40 per worker.
nerdylive
nerdylive•2mo ago
WOOOO, on other GPUs too?
Alpay Ariyak
Alpay Ariyak•2mo ago
Yes, 2x of everything at the very least iirc
nerdylive
nerdylive•2mo ago
yay~!
md
md•2mo ago
Nice, this will be useful. Thanks a lot.
nerdylive
nerdylive•2mo ago
Hey @Alpay Ariyak, just wondering, is it really normal for vLLM to load big models very slowly every time? Like every request is 100+ seconds, with this Mixtral or Llama 3 70B. Any solutions to make that loading faster?