Hello, I am trying to upload `adapter_model.safetensors` to a created finetune and got an error, see thread.
[attachment: adapter config]
[attachment: error log]
I used Hugging Face TRL and PEFT to fine-tune Mistral Instruct v0.2, and made sure the rank is set to 8 and the model is non-quantized. Why am I still getting an error uploading the safetensors file? (41 MB)
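For context, here is a minimal sketch of the kind of TRL + PEFT setup described above (rank-8 LoRA on a non-quantized base model). The dataset, output directory, and `target_modules` list are placeholders and assumptions for illustration, not taken from the actual run or from any platform documentation:

```python
# Sketch of a rank-8, non-quantized LoRA fine-tune of Mistral Instruct v0.2,
# assuming recent versions of transformers, peft, and trl.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"

# Load the base model in full (bf16) precision -- no quantization.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Rank-8 adapter; target_modules here are an example choice, not a whitelist.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

# Placeholder dataset used in the TRL docs; substitute your own.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="mistral-instruct-v0.2-lora"),
)
trainer.train()

# Writes adapter_config.json and adapter_model.safetensors
# (on the order of tens of MB at rank 8).
trainer.save_model("mistral-instruct-v0.2-lora")
```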
Looks like target modules can only be [...] for now; it would be great to add that to the documentation. I had to scrape Discord to get a maybe-answer.
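Until the supported layers are documented, here is a rough pre-upload sanity check one could run against the exported adapter. The `ASSUMED_SUPPORTED` whitelist and the `mistral-instruct-v0.2-lora` directory name are only assumptions for illustration, not official limits:

```python
# Sanity-check an exported PEFT adapter before uploading: print its rank and
# target_modules, and flag anything outside an assumed set of attention
# projections. The whitelist is a guess based on this thread, not docs.
import json
from pathlib import Path

ASSUMED_SUPPORTED = {"q_proj", "k_proj", "v_proj", "o_proj"}  # assumption

def check_adapter(adapter_dir: str) -> None:
    config = json.loads(Path(adapter_dir, "adapter_config.json").read_text())
    targets = set(config.get("target_modules", []))
    print(f"rank (r): {config.get('r')}")
    print(f"target_modules: {sorted(targets)}")
    unsupported = targets - ASSUMED_SUPPORTED
    if unsupported:
        print(f"possibly unsupported modules: {sorted(unsupported)}")

check_adapter("mistral-instruct-v0.2-lora")
```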
Wondering about the timeline to support the rest of the layers