Will 2 GPUs fine-tune 2 times faster than 1 GPU on axolotl?
Hmm, does it support multi-GPU fine-tuning?
Software like accelerate does.
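For context, axolotl training is typically launched through accelerate, which handles spreading the run across GPUs. A minimal sketch of a 2-GPU launch follows; the config path is illustrative, and exact flags may differ across axolotl/accelerate versions:

```shell
# Sketch: run an axolotl fine-tune across 2 GPUs via accelerate.
# --num_processes 2 starts one training process per GPU;
# the YAML path is an example config from the axolotl repo.
accelerate launch --num_processes 2 -m axolotl.cli.train examples/openllama-3b/lora.yml
```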
GitHub
axolotl/FAQS.md at main · OpenAccess-AI-Collective/axolotl
Yes it does
?
Strange, because yesterday I got no answer, so I tried it myself on RunPod, and 2× A4000 trained 2 times faster than 1 A4000.
I trained an OpenLLaMA 3B and it took 10 h on 1 A4000 and 5 h on 2 A4000.
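Those reported times (10 h on one GPU vs 5 h on two) correspond to essentially perfect linear scaling. A quick sanity check of the arithmetic:

```python
# Scaling check for the reported OpenLLaMA 3B run times.
single_gpu_hours = 10.0  # 1x A4000
dual_gpu_hours = 5.0     # 2x A4000

speedup = single_gpu_hours / dual_gpu_hours
efficiency = speedup / 2  # fraction of the ideal 2x speedup

print(speedup)     # 2.0
print(efficiency)  # 1.0, i.e. ~100% scaling efficiency
```

In practice data-parallel training rarely scales this cleanly; the ~7% faster Ada cards mentioned below likely account for part of the apparent 2× figure.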
Oh really? All the same inputs and specs, everything the same except the 2 GPUs?
I'm unable to explain that then haha
FAQ is outdated
"Can you train StableLM with this? Yes, but only with a single GPU atm. Multi GPU support is coming soon! Just waiting on this PR" but the PR was already merged a year ago.
there you go hahah
thanks
So now it supports multi-GPU?
According to the FAQ it should once the PR is merged, and the PR is merged, so apparently yes, but I honestly don't know. It seems to, going by what @Volko has observed.
ah ye
The only difference is that I couldn't rent 2× A4000, so I rented 2× A4000 Ada instead (~7% better performance).
Solution
It seems
Oh, and the Ada ones have 20 GB VRAM, 50 GB RAM and 9 vCPUs each
And the non-Ada ones have 16 GB VRAM, 23 GB RAM and 6 vCPUs each
But the training is almost exclusively on the GPU, right? And it was a small model, so no VRAM issues.
And if I remember correctly, I saw 99% utilization on both GPUs in the RunPod dashboard.
ah
Yeah, then multi-GPU is working
Yeah it seems