Training AI with a RunPod GPU
Hi, I'm pretty new to all this AI and cloud GPU stuff, and I'm currently trying to create an AI. I'm trying to train a yolov8xl model with a dataset of about 100k images and 31 classes, and because it's a big project, my GPU can't handle it, or it would be really slow. So I wanted to use an NVIDIA A6000 to train my model, but I really don't understand how it works. I even asked ChatGPT, which told me that I needed to import my dataset into RunPod, but I don't see anything to import, and I thought that normally I would somehow connect the A6000 to my computer, but I don't know how. Also, I don't know what template I should use. Could somebody help me?
3 Replies
You basically need training code for that model
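For the training code part, a minimal Ultralytics sketch looks roughly like this (the checkpoint name, data.yaml path and hyperparameters are just placeholder assumptions, not your actual setup):

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint (yolov8x.pt is only an example choice)
model = YOLO("yolov8x.pt")

# data.yaml points at your train/val image folders and lists the 31 class names;
# epochs/imgsz/batch are placeholder values you'd tune for the A6000's VRAM
model.train(data="data.yaml", epochs=100, imgsz=640, batch=16, device=0)
```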
Then upload your dataset into the RunPod pod's storage
You can't import a cloud GPU into your computer (technically it's possible to connect one remotely, but not for this use case)
Usually you control the cloud GPU server (like on RunPod) from your computer, and then run the training app/process there
I'd suggest learning the basic concepts that will be used here before you do the real training, or else it'll lead to unexpected problems
I already have my code and already did some training on my own GPU, so I have training code on an Ubuntu machine with CUDA, PyTorch, Ultralytics, etc. It's just that I don't know how to import my dataset onto the pods?
Maybe zip the code together with the dataset, upload it to Google Drive or some other cloud storage, and then download it on the RunPod side?
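If you go the Google Drive route, something like this could work inside the pod once the zip is shared (the file ID and folder names below are placeholders):

```python
import zipfile
import gdown  # pip install gdown

# "FILE_ID" is a placeholder for the Google Drive share ID of your zipped dataset
gdown.download(id="FILE_ID", output="dataset.zip", quiet=False)

# Unpack it next to the training code; adjust the target folder to whatever data.yaml expects
with zipfile.ZipFile("dataset.zip") as zf:
    zf.extractall("datasets")
```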
Or use S3 / Backblaze etc. to transfer the files
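For the S3 / Backblaze B2 route, a rough boto3 sketch (bucket name, object key and endpoint are all placeholder assumptions; credentials would come from env vars or ~/.aws/credentials):

```python
import boto3

# Backblaze B2 and other S3-compatible stores just need a custom endpoint_url;
# for plain AWS S3 you can leave it out
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder endpoint
)

# Pull the zipped dataset into the pod's storage
s3.download_file("my-bucket", "dataset.zip", "dataset.zip")
```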
And then install Ultralytics and run the training code in JupyterLab