LoRA path in vLLM serverless template
I want to attach a custom LoRA adapter to the Llama-3.1-70B model. Usually when running vLLM, alongside `--enable-lora` we also pass `--lora-modules name=lora_adapter_path`, something like that. But the template only gives an option to enable LoRA, so where do I add the path to the LoRA adapter?
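For reference, this is roughly the plain-vLLM invocation I mean (the adapter name `my_adapter` and the paths here are just placeholders):
```sh
# Standard vLLM serve command with a LoRA adapter attached.
# my_adapter and /path/to/lora_adapter are placeholders.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --enable-lora \
  --lora-modules my_adapter=/path/to/lora_adapter
```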
Also wondering - any luck, @star0129?
I get:
"2024-12-02T18:22:06.702245094Z NotImplementedError: LoRA is currently not currently supported with encoder/decoder models."

Maybe it is not supported yet?
I have posted a GitHub issue link about this in another thread in #⛅|pods
Or maybe it was in #⚡|serverless