Created by star0129 on 11/20/2024 in #⚡|serverless
LoRA path in vLLM serverless template
I want to attach a custom LoRA adapter to the Llama-3.1-70B model. Normally when running vLLM, alongside --enable-lora we also pass --lora-modules name=lora_adapter_path. The template, however, only gives an option to enable LoRA. Where do I add the path to the LoRA adapter?
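For reference, this is a sketch of the usual vLLM server invocation the question describes; the adapter name `my_adapter` and its path are placeholders, not values from the template:

```shell
# Launch vLLM's OpenAI-compatible server with LoRA support enabled and
# one named LoRA adapter registered at startup.
vllm serve meta-llama/Llama-3.1-70B \
    --enable-lora \
    --lora-modules my_adapter=/path/to/lora_adapter
```

With this flag, requests can select the adapter by passing `my_adapter` as the model name.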
1 reply