RunPod
•Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
I got the answer: it's certainly because you activated Active Workers.
It's 40% cheaper, but it always runs
9 replies
RunPod
•Created by Volko on 4/17/2024 in #⚡|serverless
Why is my endpoint running? I don't have any requests and the idle timeout is set to 1 sec
Okay, so the reason is that I enabled Active Workers
3 replies
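The cost trade-off behind the answers above can be sketched with a quick calculation. The ~40% discount for active workers comes from the thread; the per-second rate and the daily traffic pattern below are purely hypothetical assumptions:

```python
# Hypothetical comparison: an always-on active worker vs. a flex worker
# that scales to zero after the idle timeout.
on_demand_rate = 0.00040              # assumed $/s for a flex worker (hypothetical)
active_rate = on_demand_rate * 0.60   # active workers are ~40% cheaper (per thread)

seconds_per_day = 24 * 3600
busy_seconds = 2 * 3600               # assume 2 h of actual request traffic per day

active_cost = active_rate * seconds_per_day   # active workers are billed 24/7
flex_cost = on_demand_rate * busy_seconds     # flex workers are billed only while busy

print(f"active worker: ${active_cost:.2f}/day")  # $20.74/day
print(f"flex worker:   ${flex_cost:.2f}/day")    # $2.88/day
```

Under these assumed numbers, low-traffic endpoints come out cheaper on flex workers despite the higher per-second rate; the discount only pays off once the endpoint is busy most of the day.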
RunPod
•Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
With serverless vLLM (I set the idle timeout to 1 sec)
9 replies
RunPod
•Created by Morris on 4/17/2024 in #⚡|serverless
idle time duration
Got the same issue
9 replies
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
Oh, and the Ada ones have 20 GB VRAM, 50 GB RAM, and 9 vCPUs each
And the non-Ada ones have 16 GB VRAM, 23 GB RAM, and 6 vCPUs each
But the training is almost exclusively on the GPU, right? And it was a small model, so no issues with VRAM
23 replies
Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
?
Strange, because yesterday I got no answer, so I tried it on my own on RunPod, and 2x A4000 go 2 times faster than 1 A4000 for the training process.
I trained an OpenLLaMA 3B and it took 10 h on 1 A4000 and 5 h on 2 A4000
23 replies
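The timings reported in the thread (10 h on one A4000, 5 h on two) correspond to near-linear data-parallel scaling, which is plausible when the model fits comfortably on a single GPU and each GPU keeps the same per-device batch. A quick check of the arithmetic:

```python
# Scaling check for the thread's reported timings.
# Ideal data parallelism: wall time ~ t_single / n_gpus.
t_1gpu = 10.0   # hours on 1x A4000, from the thread
t_2gpu = 5.0    # hours on 2x A4000, from the thread
n_gpus = 2

speedup = t_1gpu / t_2gpu          # observed speedup
efficiency = speedup / n_gpus      # fraction of ideal linear scaling

print(speedup, efficiency)  # 2.0 1.0 -> perfectly linear in this run
```

Scaling this clean isn't guaranteed in general: gradient synchronization overhead and input-pipeline bottlenecks usually push efficiency somewhat below 1.0 as GPU count grows.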