Run multiple finetunings on the same GPU Pod

I am using:
- image: runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
- GPU: 1 x A40

While running QLoRA finetuning with 4-bit quantization, the GPU uses approx. 12 GB of memory out of 48 GB. How can I run multiple finetunings simultaneously (in parallel) on the same Pod GPU?
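For context, a minimal sketch of a 4-bit QLoRA-style model load with transformers and bitsandbytes, matching the low-memory setup described above; the model ID is a placeholder, not the one from the question.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization config (QLoRA-style base model load).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
```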
5 Replies
digigoblin 4w ago
Depends on the application you're using.
Asad Jamal Cognify
Okay, but how? I am using Python to run the finetunings.
nerdylive 4w ago
Hmm... Okay, how do you connect to the GPU then? What framework do you use? And search it on Google.
Asad Jamal Cognify
I have a script that has the model path, tokens, output directory, and dataset. Let's say I manually run it once and the finetuning starts. Then I change the values of the output dir and dataset to perform another finetuning. Will the Pod GPU be able to handle it properly? I'm using huggingface, torch, and transformers.
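One way to do this, sketched below, is to launch each finetuning as its own process instead of editing and re-running the script by hand. The train.py name and the --dataset/--output_dir flags are hypothetical stand-ins for your own script's arguments.

```python
import subprocess

# Each entry describes one independent finetuning run.
runs = [
    {"dataset": "data/run_a.jsonl", "output_dir": "outputs/run_a"},
    {"dataset": "data/run_b.jsonl", "output_dir": "outputs/run_b"},
]

procs = []
for run in runs:
    cmd = [
        "python", "train.py",
        "--dataset", run["dataset"],
        "--output_dir", run["output_dir"],
    ]
    # Popen starts the run without blocking, so both processes train in parallel.
    procs.append(subprocess.Popen(cmd))

# Wait for all finetunings to finish.
for p in procs:
    p.wait()
```

Both processes see the same GPU (cuda:0), so their memory use adds up; at roughly 12 GB per run, a few runs should fit within the A40's 48 GB.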
nerdylive 4w ago
Sure, why not? It depends on the available resources. If it has enough, then it will run smoothly. Check torch and how to use multiple GPUs on Google.
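To make the "enough resources" check concrete, here is a small sketch that reads free GPU memory with torch.cuda.mem_get_info before deciding whether to start another run; the 14 GB threshold is just an illustrative margin above the ~12 GB each run was observed to use.

```python
import torch

# Free and total memory on GPU 0, in bytes.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
free_gb = free_bytes / 1024**3
print(f"Free GPU memory: {free_gb:.1f} GB of {total_bytes / 1024**3:.1f} GB")

# Rough check before launching another ~12 GB QLoRA run.
if free_gb > 14:
    print("Enough headroom for another finetuning run.")
else:
    print("Not enough headroom; wait for a running job to finish.")
```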