Sandeep
RunPod
Created by Sandeep on 1/22/2025 in #⚡|serverless
what is the best way to access more gpus a100 and h100
FLUX is about 25 GB. If I download the model to a network volume, I can only use GPUs in that volume's region, and every time I check, A100 and H100 availability shows as LOW in all regions. If I instead bake the model into the container while building the Docker image, every new pod has to pull a ~25 GB image. Could anyone please help me with this?
6 replies
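One common workaround (a sketch of my own, not RunPod's official guidance) is to keep the image small and point the Hugging Face cache at the network volume, so the ~25 GB download happens once on the first cold start instead of on every image pull. The `/runpod-volume` mount path and the cache-directory name below are assumptions.

```python
import os

# Assumption: the RunPod network volume is mounted at /runpod-volume.
# Pointing the Hugging Face cache there means the FLUX weights are
# downloaded once, on the first cold start, rather than baked into
# the Docker image that every new pod would otherwise have to pull.
VOLUME_PATH = os.environ.get("VOLUME_PATH", "/runpod-volume")
CACHE_DIR = os.path.join(VOLUME_PATH, "hf-cache")

# Must be set before any transformers/diffusers import reads it.
os.environ["HF_HOME"] = CACHE_DIR

def model_is_cached(model_dir: str) -> bool:
    """Cheap check so the handler can skip the download on warm starts."""
    path = os.path.join(CACHE_DIR, "hub", model_dir)
    return os.path.isdir(path) and any(os.scandir(path))

# Hypothetical usage inside a serverless handler (directory name follows
# the HF cache layout, but verify it for your model):
# if not model_is_cached("models--black-forest-labs--FLUX.1-dev"):
#     ...trigger the one-time download here...
```

This doesn't solve the region lock itself, but it removes the pressure to bake the model into the image, so the endpoint can at least fail over between the datacenters attached to the volume's region.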
Created by Sandeep on 1/16/2025 in #⚡|serverless
how to run flux+lora on 24 GB Gpu through code
Hi there, could anyone help me with how to run inference with FLUX + LoRA on 24 GB GPUs? Thanks
6 replies
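As a rough back-of-the-envelope (my own estimate, not from the thread): the FLUX.1 transformer alone is around 12B parameters, so its bf16 weights already occupy roughly 24 GB before activations, the text encoders, and the LoRA are counted. That is why 24 GB cards generally need CPU offload or 8-bit/4-bit quantization of the transformer.

```python
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Raw weight memory in GB (1 GB = 1e9 bytes), ignoring activations."""
    return n_params * bytes_per_param / 1e9

# Assumption: ~12B parameters in the FLUX.1 transformer.
FLUX_TRANSFORMER_PARAMS = 12e9

print(f"bf16: {weight_gb(FLUX_TRANSFORMER_PARAMS, 2):.0f} GB")    # 24 GB
print(f"fp8 : {weight_gb(FLUX_TRANSFORMER_PARAMS, 1):.0f} GB")    # 12 GB
print(f"nf4 : {weight_gb(FLUX_TRANSFORMER_PARAMS, 0.5):.0f} GB")  # 6 GB
```

The arithmetic suggests that quantizing the transformer to 8-bit (or offloading parts of the pipeline to CPU between stages) leaves enough headroom on a 24 GB card for the LoRA weights and inference activations.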