Run multiple finetunings on the same GPU POD

I am using:
- image: runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
- GPU: 1 x A40

While running QLoRA finetuning with 4-bit quantization, the GPU uses approx. 12 GB of GPU memory out of 48 GB. How can I run multiple finetunings simultaneously (in parallel) on the same POD GPU?
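One common approach (a sketch, not RunPod-specific): launch each finetuning as its own OS process; separate processes can share a single GPU's memory as long as their combined usage fits. The script name `train_qlora.py` and its flags below are hypothetical placeholders for your own training script.

```python
# Sketch: launch several QLoRA finetuning runs as independent processes
# on the same GPU. train_qlora.py and its flags are assumed placeholders.
import subprocess

def build_commands(runs):
    """Build one command line per (dataset, output_dir) pair."""
    return [
        ["python", "train_qlora.py",
         "--dataset", dataset,
         "--output_dir", output_dir]
        for dataset, output_dir in runs
    ]

runs = [("data_a.json", "out_a"), ("data_b.json", "out_b")]
commands = build_commands(runs)

# Each Popen starts a separate process; all of them see the same GPU,
# so their memory footprints add up on that one card.
# procs = [subprocess.Popen(cmd) for cmd in commands]
# for p in procs:
#     p.wait()
```

Uncommenting the `Popen` lines would actually start the runs; both processes then allocate from the same 48 GB of VRAM.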
5 Replies
digigoblin · 7mo ago
Depends on the application you're using.
Asad Cognify (OP) · 7mo ago
Okay, but how? I am using Python to run the finetunings.
nerdylive · 7mo ago
Hmm... Okay, how do you connect to the GPU then? What framework do you use? Also, try searching for it on Google.
Asad Cognify (OP) · 7mo ago
I have a script that has the path to the model, tokens, output directory, and dataset. Let's say I manually run it once and the finetuning starts. Then I change the values of the output dir and dataset to perform another finetuning. Will the POD GPU be able to handle it properly? I'm using huggingface, torch, and transformers.
nerdylive · 7mo ago
Sure, why not? It depends on the available resources. If it has enough, it will run smoothly. Check on Google how to use multiple GPUs with torch.
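A quick way to sanity-check "enough resources" before launching: divide the card's memory by the per-run footprint, keeping some headroom for allocation spikes. The helper below is a hypothetical back-of-the-envelope estimator, not a torch API; the actual free memory can be queried at runtime with `torch.cuda.mem_get_info()`.

```python
# Sketch: estimate how many ~12 GB QLoRA runs fit on a 48 GB card.
# max_parallel_runs is a hypothetical helper, not part of any library.
def max_parallel_runs(total_gb, per_run_gb, headroom_gb=4):
    """Integer count of runs that fit after reserving headroom for spikes."""
    return max(0, int((total_gb - headroom_gb) // per_run_gb))

# The A40 from the question: 48 GB total, ~12 GB per run.
print(max_parallel_runs(48, 12))  # -> 3

# At runtime (requires torch with CUDA available):
# import torch
# free_bytes, total_bytes = torch.cuda.mem_get_info()
```

Note that memory usage during training can spike above the steady-state 12 GB (optimizer steps, longer batches), which is why the headroom matters.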