Model Maximum Context Length Error
Hi there, I run an AI chat site (https://www.hammerai.com). I was previously using vLLM serverless, but switched over to using dedicated Pods with the vLLM template (Container Image: vllm/vllm-openai:latest). Here is my configuration:
I then call it with:
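Roughly along these lines, via the OpenAI-compatible API (the Pod URL and model name below are placeholders, not my real values):

```python
# Illustrative sketch only: the Pod URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<POD_ID>-8000.proxy.runpod.net/v1",  # placeholder Pod endpoint
    api_key="anything",  # vLLM ignores the key unless --api-key is set
)

response = client.chat.completions.create(
    model="<MODEL_NAME>",  # placeholder; matches the model the Pod was started with
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```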
But I am now running into a new error:
I didn't see this when using the serverless endpoints. So my question is:
- Is there something I can set on vLLM to automatically manage the context length for me, i.e. to delete tokens from the `prompt` or `messages` automatically? Or do I need to manage this myself?
Thanks!
`--max-model-len 4096 --max-seq-len-to-capture 4096`
I guess it's those two arguments (your vLLM start arguments).
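You can also double-check what the running server actually ended up with. I think newer vLLM builds report max_model_len in the /v1/models response (if yours doesn't, the value is printed in the startup logs). A rough sketch, with the Pod URL as a placeholder:

```python
# Rough sketch: ask the Pod's OpenAI-compatible server which models it serves
# and what context length it ended up with. <POD_URL> is a placeholder.
import requests

resp = requests.get("https://<POD_URL>/v1/models", timeout=10)
resp.raise_for_status()

for model in resp.json()["data"]:
    # Newer vLLM versions include max_model_len in each model card;
    # older ones may not, in which case check the startup logs instead.
    print(model["id"], model.get("max_model_len"))
```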
Yep, but won't it just default to something else even if I don't set those? And then we'll run into the same issue at whatever number of tokens that is?
Yes, you've got to set it higher.
Yep, set it to a number that will be your max length. Just estimate it, or add some headroom beyond your estimate.
A bigger context length requires more VRAM, btw.
Yes, but when I do that, specifically setting it to 8192, I get a separate error saying that I have exceeded the maximum context length. And in general, even if I manage to set it a little higher, won't I eventually run into the same problem?
oh..
Then it's the model that you used; it has a limit on its context length.
Can I see the error too?
Maybe copy and paste the log.
Unfortunately I didn't save it, and RunPod logs don't go back that far. But I guess it doesn't really matter anyway, since as long as we have to set a max limit, a chat application will eventually go past it.
Yeah, use another model that can handle a longer context length.
I think it's set by the model.
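You can usually see the model's native limit in its Hugging Face config. A quick sketch (the model id here is just an example, not the one from this thread):

```python
# Sketch: read a model's native context window from its Hugging Face config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # example model id

# Most architectures store the limit as max_position_embeddings;
# a few use other keys (e.g. n_positions or seq_length).
print(getattr(config, "max_position_embeddings", None))
```

vLLM won't let you set --max-model-len above the model's native limit, which is probably where your 8192 error came from.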
Got it - so vLLM doesn't help with truncating things? I just asked because I'm coming from Ollama, which will automatically truncate your prompt so that it keeps working even past the max context length.
Hmm, not sure. I think not.
Got it. So do you know how other AI chat sites handle this? Does everyone just write custom code if they're using RunPod vLLM?
Hmm, there might be some libraries or frameworks out there that help with this, but I think other people just don't use models with a shorter context length unless that isn't really important for their use case.
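If you do end up rolling it yourself, the usual approach is to count tokens with the model's tokenizer and drop the oldest non-system messages until the prompt plus the reply budget fits under max_model_len. A rough sketch of that idea (the model id and limits are placeholders):

```python
# Rough sketch: trim old chat messages so the prompt fits the model's context window.
# The model id, context limit, and reply budget are placeholders.
from transformers import AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-7B-Instruct"
MAX_MODEL_LEN = 4096   # should match --max-model-len on the Pod
REPLY_BUDGET = 512     # tokens reserved for the model's answer (max_tokens)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def count_prompt_tokens(messages):
    # Apply the model's chat template so the count matches what vLLM will see.
    return len(tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True))

def truncate_messages(messages):
    """Drop the oldest non-system messages until the prompt fits."""
    messages = list(messages)
    while len(messages) > 1 and count_prompt_tokens(messages) + REPLY_BUDGET > MAX_MODEL_LEN:
        # Keep the system prompt (index 0) if there is one; drop the oldest turn after it.
        messages.pop(1 if messages[0]["role"] == "system" else 0)
    return messages
```

Then you'd pass truncate_messages(history) into the chat completions call instead of the raw history.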