Created by Nova2k21 on 10/21/2024 in #⚡|serverless
Deploying a model quantised with bitsandbytes (model config).
I have fine-tuned a 7B model on my custom dataset by quantising it with bitsandbytes on my local machine, which has 12 GB of VRAM. When I went to deploy the model on RunPod with vLLM for faster inference, I found only three quantisation formats supported there: GPTQ, AWQ, and SqueezeLLM. Am I interpreting something wrong, or does RunPod not have the feature to deploy a model quantised this way? If not, is there any workaround I can use to deploy my model for now?
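For reference, this is roughly how I loaded the base model for fine-tuning (a minimal sketch; the base model name and exact settings here are placeholders, not my real config):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantisation so the 7B model fits in 12 GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder; my actual base model differs
    quantization_config=bnb_config,
    device_map="auto",
)
```

One workaround I'm considering is merging my fine-tuned weights back into a full-precision checkpoint and then re-quantising to AWQ or GPTQ, since vLLM does serve those formats. Would that be the recommended path, or is there a more direct option?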