Serverless deepseek-ai/DeepSeek-R1 setup?
How can I configure a serverless endpoint for deepseek-ai/DeepSeek-R1?
does vLLM support that model?
if not, you can build a custom worker that runs inference for that model
Basic config, 2 GPU count


Once it's running, I try the default hello world request and it just gets stuck IN_QUEUE for 8 minutes...
Can you check the logs? Maybe it's still downloading
or OOM
wait... how big is the model?
seems like R1 is a really huge model, isn't it?
yes, but I even tried just following along with the YouTube tutorial here and got the same IN_QUEUE problem: https://youtu.be/0XXKK82LwWk?si=ZDCu_YV39Eb5Fn8A
Any logs?
in your workers or endpoint?
Oh, wait!! I just ran the 1.5B model and got this response:

When I tried running the larger model, I got errors about not enough memory:
"Uncaught exception | <class 'torch.OutOfMemoryError'>; CUDA out of memory. Tried to allocate 3.50 GiB. GPU 0 has a total capacity of 44.45 GiB of which 1.42 GiB is free"
seems like you got an OOM, ya...
So how do I configure it?
R1 is such a huge model, seems like you need 1 TB+ of VRAM
don't know how to calculate exactly, but a rough estimate is something in the range of 700 GB+ of VRAM
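rough back-of-the-envelope sketch in Python, assuming R1's ~671B parameters stored as FP8 (about 1 byte per parameter); the overhead number is just a guess:

# rough VRAM estimate: parameter count x bytes per parameter, plus overhead
params_billion = 671        # DeepSeek-R1 total parameters (in billions)
bytes_per_param = 1         # FP8 weights ~ 1 byte/param (2 for FP16/BF16)
weights_gb = params_billion * bytes_per_param     # ~671 GB for the weights alone
overhead_gb = 100           # rough allowance for KV cache, activations, CUDA context
print(f"~{weights_gb + overhead_gb} GB of VRAM")  # lands in the 700 GB - 1 TB range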
wow
so it's not really an option to deploy?
not sure, depends on your use case hahah
I mean, DeepSeek offers their own API keys
I thought it could be more cost-effective to just run a serverless endpoint here, but...
only if you got enough volume, especially for bigger models imo
hmm.. I see
Thanks for your help
you're welcome bro
Hey @nerdylive, I can still deploy the 7B DeepSeek R1 model instead of the huge one, right?

I am facing this issue
I am not that good at resolving issues.
Did you find a solution ?
Not yet...
use trust_remote_code = true

where should I put this?
in the environment
as an env variable
like this
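for example, set these on the endpoint as environment variables (names as used by the worker-vllm template, worth double-checking against its README; the model name here is just the 7B distill you mentioned):

TRUST_REMOTE_CODE=1
MODEL_NAME=deepseek-ai/DeepSeek-R1-Distill-Qwen-7B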

Is the model you are trying to run a GGUF quant? You'll need a custom script for GGUF quants or if there are multiple models in a single repo
I don't understand. This morning I did a brief test with https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B on a 24 GB VRAM GPU, but now I get a CUDA out-of-memory error. Do you guys know how I can fix this issue?

try a 48 GB GPU, see if that helps.
Hello there, I increased the max token setting but I'm still only getting the beginning of the thinking. How can I fix that?

yep fixed thanks
set max tokens to more than 16
in your request, or use an OpenAI client SDK
Thanks! Will let you know if it works
Yep, increased to 3000 but still getting a short "thinking" answer 😦
How did you configure it
basically used this model, casperhansen/deepseek-r1-distill-qwen-32b-awq, with vLLM and RunPod serverless, except I lowered the model max length to 11000. I didn't modify any other settings
my input looks like this now:
{
  "input": {
    "messages": [
      {
        "role": "system",
        "content": "You are an AI assistant."
      },
      {
        "role": "user",
        "content": "Explain LLM models"
      }
    ],
    "max_tokens": 3000,
    "temperature": 0.7,
    "top_p": 0.95,
    "n": 1,
    "stream": false,
    "stop": [],
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "logit_bias": {},
    "user": "utilisateur_123",
    "best_of": 1,
    "echo": false
  }
}
Not the correct way
Ah ok, do you have an example of a correct input for this model?
was going to give an example after this
wait
like this, inside sampling_params
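roughly like this, e.g. sent from Python (a sketch: the sampling_params fields follow vLLM's SamplingParams, and the endpoint ID / API key are placeholders to fill in):

import requests

payload = {
    "input": {
        "messages": [
            {"role": "system", "content": "You are an AI assistant."},
            {"role": "user", "content": "Explain LLM models"},
        ],
        # generation settings go inside sampling_params, not at the top level
        "sampling_params": {
            "max_tokens": 3000,
            "temperature": 0.7,
            "top_p": 0.95,
        },
    }
}

resp = requests.post(
    "https://api.runpod.ai/v2/<YOUR ENDPOINT ID>/runsync",
    headers={"Authorization": "Bearer <RUNPOD_API_KEY>"},
    json=payload,
    timeout=600,
)
print(resp.json())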
if not, just use the OpenAI SDK, it's easier (the docs are easily accessible on OpenAI's site) hahah
hm, I'm not very familiar with the OpenAI SDK, is it something to configure during the creation of the serverless endpoint (with vLLM)?
https://api.runpod.ai/v2/<YOUR ENDPOINT ID>/openai/v1/chat/completions
no, for the client only
you can use packages from OpenAI (for the client) to connect using that URL
replace <YOUR ENDPOINT ID> with your endpoint ID
and use your RunPod API key as the auth in the OpenAI client
try reading the docs on the RunPod website, the vLLM worker part
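a minimal sketch with the openai Python package pointed at that URL (the endpoint ID and API key are placeholders, the model name is the one from this thread):

from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<YOUR ENDPOINT ID>/openai/v1",
    api_key="<RUNPOD_API_KEY>",  # your RunPod API key, not an OpenAI key
)

completion = client.chat.completions.create(
    model="casperhansen/deepseek-r1-distill-qwen-32b-awq",  # the model served by the endpoint
    messages=[
        {"role": "system", "content": "You are an AI assistant."},
        {"role": "user", "content": "Explain LLM models"},
    ],
    max_tokens=3000,
    temperature=0.7,
)
print(completion.choices[0].message.content)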
Nice, thank you for this info
Ya, that's if you use HTTP requests directly
Yep, I basically created a template from https://github.com/runpod-workers/worker-vllm, then modified the model etc. via env variables, and also modified a few lines of code to be able to call the OpenAI API
Can you check the Cloudflare proxy (not in serverless) for vLLM OpenAI-compatible servers? Batched requests keep getting aborted only on proxied connections (not on direct connections using TCP forwarding?).
Related GitHub issue: https://github.com/vllm-project/vllm/issues/2484
When the problem happens, the logs look something like this:
Is it a streaming request? How long is your request?
What's a batched request?
Can you open a ticket?
1. doesn't abort on streaming requests
2. about 16K tokens?
3. It's in LangChain's vLLM OpenAI-compatible API SDK (it just sends <batch size> requests to the API endpoint at the same time)
Also that SDK in LangChain doesn't support streaming requests in batch mode
Open a ticket from the contact button and give these details + your endpoint ID
I see
If it doesn't abort on streaming, that means there might be some timeout here that's limiting it
Can I do it tomorrow...?
So there's no response and it's aborted by the proxy or something
yeah I think so too
Or the LangChain client
on that GitHub issue,
Sure, but it's best to do it now, they might take a longer time to respond
people have problems with nginx or some kind of proxy in front of the server
unfortunately I removed the endpoint & pod with the issue
Yeah. You can check your audit logs maybe and tell them it's deleted
On the website
thx for the info!
You're welcome!
It was a Cloudflare problem that's covered in the blog here.
https://blog.runpod.io/when-to-use-or-not-use-the-proxy-on-runpod/
btw does serverless use Cloudflare proxies too?
If so, how do I run long-running requests on serverless without streaming?
I'm not sure, ask in the ticket, ya
you can stream on serverless without worrying about request times, look into the streaming section; also the serverless max timeout is 5 mins, the proxy's is about 90s
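for example, with the OpenAI client against the /openai/v1 route, a streaming sketch (same placeholders as before):

from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<YOUR ENDPOINT ID>/openai/v1",
    api_key="<RUNPOD_API_KEY>",  # RunPod API key, not an OpenAI key
)

# tokens arrive as they are generated, so no single response has to
# sit and wait out the proxy timeout
stream = client.chat.completions.create(
    model="casperhansen/deepseek-r1-distill-qwen-32b-awq",
    messages=[{"role": "user", "content": "Explain LLM models"}],
    max_tokens=3000,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)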
Is there any difference between using the fast deployment > vLLM or using the pre-built Docker image?
Quick deploy, right? You can configure it before deploying using the setup flow
Yep, exactly, but you can also pre-configure the pre-built Docker image from the env variables, right?
Yep
Or from your endpoint's env variables
Ok 🙂 about my issue with the DeepSeek distilled R1, it seems the prompt setup is weird and tricky to use. If anyone knows a good uncensored model to use with vLLM, let me know (I'm using Llama 3.3 but it's too censored)
Look for a fine-tuned model like the Dolphin one, I forgot the name
is it fine-tuned from a Llama model?
Ya
ok:)
Need a link?
Thanks, will try the cognitivecomputations/Dolphin3.0-Llama3.2-3B