RunPod4mo ago
star0129

LoRA path in vLLM serverless template

I want to attach a custom LoRA adapter to the Llama-3.1-70B model. Usually when running vLLM directly, alongside --enable-lora we also pass --lora-modules name=lora_adapter_path. In the template, however, there is only an option to enable LoRA; where do I add the path to the LoRA adapter?
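For reference, this is roughly what it looks like with the standalone vLLM OpenAI-compatible server; the adapter name and path below are placeholders, and whether the serverless template exposes an equivalent setting is exactly what's being asked here:

```bash
# Plain vLLM (outside the RunPod template): register a LoRA adapter at startup.
# "my-adapter" and /path/to/lora_adapter are placeholders for your own adapter.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora_adapter
# Requests can then select the adapter by passing "my-adapter" as the model name.
```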
3 Replies
Blake
Blake4mo ago
Also wondering - any luck @star0129
Blake
Blake4mo ago
i get "2024-12-02T18:22:06.702245094Z NotImplementedError: LoRA is currently not currently supported with encoder/decoder models."
nerdylive
nerdylive3mo ago
Maybe it is not supported yet? I think I've posted a GitHub issue link about this before, either in #⛅|pods or in #⚡|serverless
