OSError in vLLM worker endpoints after the new update was released

I was using vLLM worker 1.7.0 and everything was working fine until yesterday. Today I am facing issues in all of my endpoints where Hugging Face models are deployed using the vLLM worker. The RunPod logs show an OSError and the model can't be identified. I then deployed a new endpoint with the latest vLLM worker configuration (1.9) and everything worked the way it used to. @Justin Merrell RunPod should at least let us know about its changes, so they do not affect endpoints in production.
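For anyone hitting the same thing: the OSError here is the usual Hugging Face "model can't be found" failure. A minimal sketch of how to check whether a model id resolves at all outside the worker (the model id below is illustrative, not the one from my endpoints):

```python
# Minimal sketch: verify that a Hugging Face model id resolves.
# The model id below is illustrative only.
from transformers import AutoConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

try:
    config = AutoConfig.from_pretrained(model_id)
    print(f"Model resolves, model type: {config.model_type}")
except OSError as err:
    # Same class of error the worker logs: the id is neither a valid
    # repo on the Hub nor a path in the local cache.
    print(f"OSError: {err}")
```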
2 Replies
Poddy · 4w ago
@Ashique A B
Escalated To Zendesk
The thread has been escalated to Zendesk!
nerdylive · 4w ago
Where do you store the model?
