RunPod
Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
I got the answer: it's almost certainly because you activated Active Workers. They're 40% cheaper, but they run all the time
9 replies
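The trade-off described above (active workers are cheaper per second but billed around the clock) can be sketched with some arithmetic. The rates below are made-up placeholders, not actual RunPod pricing:

```python
# Hypothetical per-second rates -- placeholders, not real RunPod pricing
FLEX_RATE = 0.00040             # $/s, billed only while a flex worker runs
ACTIVE_RATE = FLEX_RATE * 0.60  # active workers are ~40% cheaper, per the thread

SECONDS_PER_DAY = 24 * 3600

def daily_cost(busy_seconds: float) -> tuple[float, float]:
    """Cost per day of one always-on active worker vs. one flex worker."""
    active = ACTIVE_RATE * SECONDS_PER_DAY  # billed around the clock
    flex = FLEX_RATE * busy_seconds         # billed only while handling requests
    return active, flex

active, flex = daily_cost(busy_seconds=2 * 3600)  # suppose 2 h of traffic/day
print(f"active: ${active:.2f}/day  flex: ${flex:.2f}/day")
```

With a 40% discount, an active worker only pays off once the endpoint is busy more than about 60% of the day; below that, a flex worker with a short idle timeout is cheaper.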
RunPod
Created by Volko on 4/17/2024 in #⚡|serverless
Why is my endpoint running? I don't have any requests and the idle timeout is set to 1 sec
Okay, so the reason is that I enabled Active Workers
3 replies
RunPod
Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
With serverless vLLM (I set the idle timeout to 1 sec)
9 replies
RunPod
Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
Got the same issue
9 replies
RunPod
Created by Volko on 4/17/2024 in #⛅|pods
Is AWQ faster than GGUF?
Okay thanks
10 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
Yeah it seems
23 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
And if I remember correctly, I think I saw 99% utilization on both GPUs in the RunPod dashboard
23 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
Oh, and the Ada ones have 20 GB VRAM, 50 GB RAM, and 9 vCPUs each, while the non-Ada ones have 16 GB VRAM, 23 GB RAM, and 6 vCPUs each. But the training runs almost exclusively on the GPU, right? And it was a small model, so no issues with VRAM
23 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
It seems
23 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
The only difference is that I couldn't rent 2x A4000, so I rented 2x A4000 Ada (~7% better performance)
23 replies
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
? Strange, because yesterday I got no answer, so I tried it myself on RunPod, and 2x A4000 run 2 times faster than 1 A4000 for the training process. I trained an OpenLLaMA 3B, and it takes 10 h on 1 A4000 and 5 h on 2 A4000
23 replies
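The 10 h vs. 5 h figures reported above amount to near-perfect data-parallel scaling. A minimal sketch of the efficiency calculation, using the numbers from the thread:

```python
def scaling_efficiency(time_1gpu_h: float, time_ngpu_h: float, n_gpus: int) -> float:
    """Parallel efficiency: 1.0 means the N-GPU run is exactly N times faster."""
    speedup = time_1gpu_h / time_ngpu_h
    return speedup / n_gpus

# Numbers reported in the thread: OpenLLaMA 3B, 10 h on 1x A4000, 5 h on 2x A4000
eff = scaling_efficiency(time_1gpu_h=10.0, time_ngpu_h=5.0, n_gpus=2)
print(f"speedup: {10.0 / 5.0:.1f}x, efficiency: {eff:.0%}")
```

Perfect 2x scaling is the best case; in practice gradient synchronization usually costs a few percent, which is consistent with the ~99% GPU utilization mentioned above.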
RunPod
Created by Volko on 4/11/2024 in #⛅|pods
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
Yes it does
23 replies
RunPod
Created by Volko on 4/10/2024 in #⛅|pods
Download Mixtral from HuggingFace
The issue is that every video shows only a single GGUF model, not multi-file models.
15 replies
RunPod
Created by Volko on 4/10/2024 in #⛅|pods
Download Mixtral from HuggingFace
But what do I need to write in the Modelfile? Normally it's FROM XXX, but here there are multiple files
15 replies
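For the multi-file question above: an Ollama Modelfile's FROM line expects a single GGUF file, so one common approach is to merge the downloaded shards first with llama.cpp's gguf-split tool and point the Modelfile at the result. This is a sketch, not a verified recipe — the filenames are hypothetical, and the tool's name and flags vary between llama.cpp versions, so check its --help output:

```
# Merge split GGUF shards into one file (llama.cpp's gguf-split tool);
# pass the first shard and it locates the rest of the series
./llama-gguf-split --merge mixtral-00001-of-00003.gguf mixtral-merged.gguf

# The Modelfile then references the single merged file
FROM ./mixtral-merged.gguf
```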
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
Strange
43 replies
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
?
43 replies
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
Close your page, click "connect to web terminal" again, and send a screenshot
43 replies
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
Try su -
43 replies
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
No description
43 replies
RunPod
Created by Ansatsu on 4/10/2024 in #⛅|pods
Operation not permitted - Sudo access missing
Try connecting as root
43 replies