Created by naluobliterator443 on 10/8/2024 in #⚡|serverless
Terrible performance - vLLM serverless for Mistral 7B
Hello,
When I serve an AWQ-quantized Mistral-7B checkpoint such as "TheBloke/Mistral-7B-v0.1-AWQ" on RunPod's vLLM serverless, I get terrible output quality (accuracy) compared to running Mistral 7B on my CPU with ollama (which uses GGUF Q4_0 quantization). Could this be a misconfiguration of the parameters on my side, even though I kept the defaults, or is AWQ quantization known to degrade quality that much?
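For context, here is roughly how I understand the model would be loaded under the hood (a sketch; the quantization and sampling values are my assumptions for illustration, not RunPod's verified serverless defaults):

```python
# Sketch: loading the same AWQ checkpoint directly in vLLM, with the
# quantization scheme and sampling parameters spelled out. The sampling
# values below are assumptions, not RunPod's actual serverless defaults.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Mistral-7B-v0.1-AWQ",
    quantization="awq",   # make the quantization scheme explicit
    dtype="float16",      # AWQ kernels run with fp16 activations
)

# vLLM's SamplingParams default to temperature=1.0, while ollama defaults
# to 0.8, so the two setups don't sample identically out of the box.
params = SamplingParams(temperature=0.8, top_p=0.9, max_tokens=256)

outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

If the output is still much worse than ollama's even with matched sampling settings, that would point at the quantization rather than the parameters.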
Thank you