RunPod • 8mo ago

Run Mixtral 8x22B Instruct on vLLM worker

Hello everybody, is it possible to run Mixtral 8x22B on the vLLM worker? I tried running it on the default configuration with 48 GB GPUs (A6000, A40), but it's taking too long. What are the requirements for running Mixtral 8x22B successfully? This is the model I'm trying to run: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
81 Replies
mdOP • 8mo ago
Sorry, I'm new to using GPUs for LLM models.
nerdylive • 8mo ago
Oh, it actually needs a bunch of VRAM to run. You can try using half precision to run it with less VRAM. All good.
mdOP • 8mo ago
Thanks for the reply. What do you mean by "half"?
nerdylive • 8mo ago
In the quantization setting, use any of the options to run it with less VRAM.
mdOP • 8mo ago
Let me check. Which GPU would be suitable to run this, btw?
nerdylive • 8mo ago
Oh wait, I mean the DTYPE setting*
nerdylive • 8mo ago
(screenshot attached)
nerdylive • 8mo ago
Also this too:
(screenshot attached)
nerdylive • 8mo ago
(screenshot attached)
nerdylive • 8mo ago
Try experimenting with those in the environment variables of your endpoint.
mdOP • 8mo ago
sure
nerdylive • 8mo ago
Environment variables | RunPod Documentation
Environment variables configure your vLLM Worker by providing control over model selection, access credentials, and operational parameters necessary for optimal Worker performance.
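
For reference, the settings being discussed would end up looking something like the sketch below on the endpoint. This is only a sketch; the key names are assumptions based on the linked documentation, so double-check the exact variables there.

```python
# Hypothetical environment variables for a vLLM worker endpoint.
# Key names are assumptions taken from the RunPod worker-vllm docs; verify them there.
endpoint_env = {
    "MODEL_NAME": "mistralai/Mixtral-8x22B-Instruct-v0.1",
    "DTYPE": "half",                   # 16-bit weights
    # "QUANTIZATION": "awq",           # only if MODEL_NAME points at a pre-quantized checkpoint
    "MAX_MODEL_LEN": "4096",           # smaller context window -> smaller KV cache
    "GPU_MEMORY_UTILIZATION": "0.95",  # fraction of VRAM vLLM is allowed to claim
}
```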
mdOP • 8mo ago
I think this would also be a good option to set, right? Since it will divide the memory.
(screenshot attached)
nerdylive • 8mo ago
Never tried the vLLM worker yet, tbh. Sure, if you want to try it go ahead; yeah, it seems like a good option to try.
mdOP • 8mo ago
Ah, I see. Would there be any substantial decrease in quality if I ran the model at half precision?
nerdylive • 8mo ago
Maybe try browsing around about quantization, etc.
mdOP • 8mo ago
cool thanks
nerdylive • 8mo ago
Because I don't know much about them, but yeah, I think it will if you use a lower dtype.
mdOP • 8mo ago
Yeah, makes sense. @nerdylive Looks like Mixtral 8x22B requires up to 300 GB of VRAM, and the highest available GPU has 80 GB of VRAM. If it used half the memory, which would be 150 GB, it should be possible to split that into about 50 GB of VRAM across 3 workers. Idk if that's possible. Do you know somebody from the team who can help me out here? My company actually wants to deploy this model for our product.
nerdylive • 8mo ago
Wew, where did you get that estimate from? Yeah, it's a huge model.
mdOP • 8mo ago
From the Mistral Discord.
nerdylive • 8mo ago
Nah, it's not possible yet to divide it across 3 workers, I think. I see. With Pods it is possible to use multiple GPUs. Or maybe explore Accelerate for this (not sure).
mdOP • 8mo ago
I see, let me search. Though, could somebody from the RunPod team confirm this?
nerdylive • 8mo ago
Maybe contact support for that, like from the website.
mdOP • 8mo ago
Ah ok, I misunderstood; this is a community server. Sorry.
nerdylive • 8mo ago
Yeah, there are some staff here, but it's easier for them to manage support requests via the website support. It's fine.
mdOP • 8mo ago
Yeah, I will contact them through official channels. Thanks for all the help, appreciate it. How do I mark this post as solved?
nerdylive • 8mo ago
Up to you. Did you find a way to run that model yet, without the worker-splitting idea? I'd like to know your updates too, haha. I think don't mark it yet.
mdOP • 8mo ago
The only way I have right now is to use a VM with 300 GB of VRAM, but it would be costly and I'm not sure I can even find a VM like that. I opted for RunPod because it had cheap pricing and easy deployments. Sure, I will post updates here. One guy on the Mistral Discord also wanted to split memory in order to run the model across 4x GPUs; they suggested vLLM for this, which is what the RunPod workers are using, I think.
nerdylive • 8mo ago
vLLM supports that?
mdOP • 8mo ago
I haven't looked into it yet, but they suggested it and TGI.
nerdylive • 8mo ago
Try asking which feature it is, or how it works.
mdOP • 8mo ago
yeah
nerdylive • 8mo ago
Thanks
mdOP • 8mo ago
@nerdylive This is what I got:
(screenshot attached)
mdOP • 8mo ago
https://docs.mistral.ai/deployment/self-deployment/vllm/ In this guide they set the tensor parallel size to 4. I wonder if RunPod does that as well.
vLLM | Mistral AI Large Language Models
vLLM can be deployed using a docker image we provide, or directly from the python package.
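
For context, tensor parallelism is a vLLM engine option rather than anything RunPod-specific. A minimal sketch of what the Mistral guide is doing, using vLLM's offline Python API on one machine with 4 GPUs (model name as above, everything else left at defaults):

```python
from vllm import LLM, SamplingParams

# Shard the model across 4 GPUs on a single node via tensor parallelism.
# tensor_parallel_size should match the number of GPUs visible to the process.
llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    tensor_parallel_size=4,
    dtype="half",  # fp16 weights
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain tensor parallelism in one sentence."], params)
print(outputs[0].outputs[0].text)
```

Note that this shards one model across the GPUs attached to a single pod or worker; it does not pool memory from separate serverless workers.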
nerdylive • 8mo ago
Oh, I think it's configurable. Check the vLLM worker docs on RunPod.
mdOP • 8mo ago
let me check
mdOP • 8mo ago
Oh, this was the option, lol:
(screenshot attached)
mdOP • 8mo ago
I set it to 3 but still ran out of memory. I used 2 GPUs per worker as well, actually 80 GB ones.
nerdylive • 8mo ago
Wow it works?
mdOP • 8mo ago
No, I ran out of memory even with the above config.
Tobias Fuchs • 8mo ago
I'm trying to do the same thing as you right now, lol. Will update if I figure something out.
mdOP • 8mo ago
Thanks a lot.
Madiator2011 • 8mo ago
@Alpay Ariyak maybe you could help here 🙂
Alpay Ariyak • 8mo ago
Hi, you need at least 2x 80 GB GPUs, afaik.
mdOP • 8mo ago
Hey, yes, I used 2x 80 GB GPUs per worker with 3 workers, but I got an error: torch.cuda ran out of memory while trying to allocate.
nerdylive • 8mo ago
Wait, what, there's 2x 80 GB? I thought that was for 48 GB GPUs only. How did you get that? Oof, it still needs more memory, huh. Try sending the full logs.
mdOP • 8mo ago
Yeah, I will try soon. I just selected the option for 2 GPUs per worker and the 80 GB H100.
Bryan • 8mo ago
Oh? I can only do 2 GPUs per worker with 48 GB GPUs, not 80 GB GPUs. Are you sure?
(screenshot attached)
Bryan • 8mo ago
Unless you're doing a Pod instead of serverless, in which case ignore me 🙂
Alpay Ariyak • 8mo ago
My apologies, you actually need 4x 80 GB for 8x22B.
nerdylive • 8mo ago
Is that actually possible in serverless?
Alpay Ariyak • 8mo ago
Not with the current limits, no
nerdylive • 8mo ago
Ah, that sucks. Alright. Btw, what are streams transported in? SSE? How do I retrieve it in Python and iterate over the responses asynchronously?
Alpay Ariyak • 8mo ago
With OpenAI compatibility?
nerdylive • 8mo ago
No, the default stream endpoint: {{URL}}/stream/:id
Alpay Ariyak • 8mo ago
Not SSE, just a regular GET request. It will return the outputs yielded by the worker since the last /stream call.
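
A rough sketch of that polling pattern against a serverless endpoint is below. The endpoint ID, job ID, and exact response fields are assumptions; check the RunPod API docs for the real shapes.

```python
import os
import time

import requests

API_KEY = os.environ["RUNPOD_API_KEY"]
ENDPOINT_ID = "your-endpoint-id"  # placeholder
JOB_ID = "your-job-id"            # returned by the initial /run request

url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/stream/{JOB_ID}"
headers = {"Authorization": f"Bearer {API_KEY}"}

while True:
    data = requests.get(url, headers=headers, timeout=30).json()
    # Each poll returns whatever the worker has yielded since the previous /stream call.
    for chunk in data.get("stream", []):
        print(chunk.get("output"), end="", flush=True)
    if data.get("status") in ("COMPLETED", "FAILED", "CANCELLED"):
        break
    time.sleep(0.5)
```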
nerdylive • 8mo ago
Wait, so I poll the stream endpoint?
Alpay Ariyak • 8mo ago
What goal do you have in mind?
nerdylive • 8mo ago
Like websockets, probably? I was hoping the stream endpoint would work like that.
Alpay Ariyak • 8mo ago
OpenAI-compatible streaming is through SSE.
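
A minimal sketch of the SSE path, using the openai Python client pointed at the worker's OpenAI-compatible route. The base URL pattern here is an assumption; check the worker-vllm docs for the exact path.

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["RUNPOD_API_KEY"],
    # Assumed route for the vLLM worker's OpenAI-compatible API; verify in the docs.
    base_url="https://api.runpod.ai/v2/<endpoint-id>/openai/v1",
)

stream = client.chat.completions.create(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,  # tokens arrive incrementally over SSE
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```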
richterscale9 • 8mo ago
Hey, sorry to hijack the thread, I'm also looking into deploying vLLM on RunPod serverless. The landing page indicates that it should be possible to bring your own container, not pay for any idle time, and have <250ms cold boot. Is this true? It sounds too good to be true.
nerdylive • 8mo ago
Oh, what about the stream I'm talking about?
Alpay Ariyak • 8mo ago
Yes, through flash boot. That one is strictly polled.
nerdylive • 8mo ago
Oh alright
richterscale9 • 8mo ago
Does this 250ms cold boot time really include everything? Or does it only contain some things, such that the actual cold boot time might be 30 seconds or something? For example, the time to load LLM weights into memory typically takes more than 10 seconds.
Alpay Ariyak • 8mo ago
Everything, due to not needing to reload weights
richterscale9 • 8mo ago
That's just insane if it really works
Alpay Ariyak • 8mo ago
Haha try it out!
richterscale9 • 8mo ago
Yeah, reading the docs right now to figure out everything I need to do to try it... I currently have a Docker image that spins up a fork of the oobabooga web UI; I'm thinking about setting that up for the serverless experiment.
mdOP • 8mo ago
Yeah, you're actually right, I confused it as 80 GB, my bad guys. Even with dtype half, we need 4x 80 GB?
Bryan • 8mo ago
8x22B = 176B parameters. At 16-bit, 2 bytes per parameter, that's 352 GB just for the model parameters. At 8-bit (1 byte per parameter) it's still 176 GB. I could be mistaken about this, I'm not an expert on this for sure. But my understanding is that you can just about fit 8x22B on 4x 80 GB with 8-bit quantization.
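
As a back-of-the-envelope check: Mistral lists about 141B total parameters for 8x22B (the experts share the attention layers, so it is less than the naive 8 x 22B), and serving also needs headroom for the KV cache and activations on top of the weights. A quick sketch of the arithmetic:

```python
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough VRAM needed just to hold the weights, in GB."""
    return params_billion * bytes_per_param  # billions of params * bytes each = GB

for label, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label:>9}: ~{weight_gb(141, bytes_per_param):.0f} GB for weights alone")

# fp16/bf16: ~282 GB, int8: ~141 GB, 4-bit: ~70 GB -- before the KV cache,
# which is why 4x 80 GB (320 GB total) is roughly the floor for 16-bit serving.
```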
mdOP • 8mo ago
I see, yeah, that makes sense. I will revisit this in the future.
Alpay Ariyak • 8mo ago
We're raising the serverless GPU count limits around next week, I believe, even up to 10x A40 per worker.
nerdylive • 8mo ago
WOOOO, on other GPUs too?
Alpay Ariyak • 8mo ago
Yes, 2x of everything at the very least iirc
nerdylive • 8mo ago
yay~!
mdOP • 8mo ago
Nice, this will be useful. Thanks a lot.
nerdylive • 8mo ago
Hey @Alpay Ariyak, just wondering, is it really normal for vLLM to load big models very slowly every time? Like every request takes 100+ seconds with this Mixtral or Llama 3 70B. Any solutions to make that loading faster?