Running 2x H100 80GB. Does this mean my cap is now 160GB of VRAM?
I'm doing some VFX work on 8K footage. Right now, with the 2x H100s, I can really only get things working at 2500x2500. When I feed a 4K image in, I get an error saying my VRAM cap is 80GB. So I'm assuming running two H100s doesn't mean they combine?
OutOfMemoryError: CUDA out of memory. Tried to allocate 151.88 GiB (GPU 0; 79.11 GiB total capacity; 55.44 GiB already allocated; 22.45 GiB free; 55.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
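For context on where a 151.88 GiB allocation can come from: self-attention memory in diffusion models grows with the square of the number of latent tokens, so doubling resolution roughly quadruples token count and ~16x-es the attention matrix. A rough back-of-envelope sketch (the 8x VAE downscale and fp16 score matrix are illustrative assumptions, not measurements from any particular build):

```python
# Rough estimate of why one full self-attention score matrix can blow past
# 80 GiB at high resolution. Numbers are illustrative assumptions, not
# measurements from a specific Stable Diffusion version.

BYTES_FP16 = 2
LATENT_DOWNSCALE = 8  # assumption: the VAE compresses each spatial dim ~8x

def attention_matrix_bytes(width: int, height: int) -> int:
    """Memory for a single full self-attention score matrix over latent tokens."""
    tokens = (width // LATENT_DOWNSCALE) * (height // LATENT_DOWNSCALE)
    return tokens * tokens * BYTES_FP16

for side in (2500, 4096, 8192):
    gib = attention_matrix_bytes(side, side) / 2**30
    print(f"{side}x{side}: ~{gib:,.1f} GiB for one attention map")
# At 4096x4096 this lands around 128 GiB for a single matrix -- the same
# order of magnitude as the 151.88 GiB allocation in the error above.
```

Memory-efficient attention (xFormers, PyTorch scaled-dot-product attention) avoids materializing this matrix, which is why it helps so much at high resolutions.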
On one GPU it's still 80GB; on 2 GPUs it's 160GB total.
My pod has two GPUs. Is there a reason the program isn't utilizing both?
That's up to the program. You can't use 2 GPUs as one; the software has to be written to split the work across them. It's not as simple as adding more RAM.
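"Software has to split the work" usually means the application tiles the job so each piece fits on one card. A minimal sketch of the tiling half (the tile size, overlap, and two-device round-robin are assumptions for illustration; real dispatch would move each tile's tensors to `cuda:0` / `cuda:1` in PyTorch):

```python
# Sketch: split a large frame into overlapping tiles so each tile fits in
# one GPU's VRAM, then assign tiles round-robin across two devices.
# The device count and tile/overlap sizes are placeholder assumptions.

def make_tiles(width, height, tile, overlap=64):
    """Return (x0, y0, x1, y1) boxes covering a width x height frame."""
    step = tile - overlap
    boxes = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes

tiles = make_tiles(7680, 4320, tile=2048)  # 8K UHD frame
per_gpu = {0: [], 1: []}                   # round-robin over 2 GPUs
for i, box in enumerate(tiles):
    per_gpu[i % 2].append(box)
print(len(tiles), "tiles;", len(per_gpu[0]), "on cuda:0,", len(per_gpu[1]), "on cuda:1")
```

Each tile only ever needs one card's 80GB, so both GPUs stay busy without any VRAM pooling. The overlap exists so tile seams can be blended away when the results are stitched back together.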
Okay, thank you for explaining. Sorry, I'm pretty new at Stable Diffusion. Is my only option really a lower-res output?
Yeah, I think so. I haven't seen a Stable Diffusion build that can use more than 1 GPU to pool VRAM.
Also, use Stable Diffusion XL.
Does that have inpainting?
Actually, that was probably a stupid question, since I don't even know what that is, haha.