Jacob
RunPod
•Created by Gabriel Bianconi on 5/14/2024 in #⚡|serverless
"Error saving template. Please contact support or try again later." when using vLLM Quick Deploy
Hey Gabriel! I believe I found the issue. We used the model name as a prefix for the template name, but this caused problems when the model name is too long (for example, llama-3-8b-instruct).
A fix will be rolling out in the next few days. Sorry for the inconvenience!
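For anyone curious about the failure mode: the template name is generated by prefixing it with the model name, so long model names can push the result past a length limit. A minimal sketch of one possible fix is below; the 64-character limit, the `make_template_name` helper, and the `-vllm-template` suffix are all illustrative assumptions, not RunPod's actual naming scheme.

```python
# Assumed maximum template-name length (illustrative, not RunPod's real limit).
MAX_TEMPLATE_NAME_LEN = 64

def make_template_name(model_name: str, suffix: str = "-vllm-template") -> str:
    # Truncate the model-name prefix so prefix + suffix fits the limit,
    # instead of failing with "Error saving template".
    limit = MAX_TEMPLATE_NAME_LEN - len(suffix)
    return model_name[:limit] + suffix

# A short model name passes through unchanged (aside from the suffix).
print(make_template_name("llama-3-8b-instruct"))

# A very long model name is truncated rather than overflowing the limit.
print(len(make_template_name("x" * 200)))
```

The key design choice is to truncate deterministically on the server side rather than reject the request, so quick-deploy users never see a save error.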