Running 2x H100 80GB. Does this mean my cap is now 160GB of VRAM?

I'm doing some VFX work on 8K footage. Right now, with the 2x H100s, I can really only get things to work at 2500x2500. When I feed in a 4K image, I get an error saying my VRAM is 80GB. So I'm assuming that running two H100s means they won't combine?
7 Replies
DeluxeWalrus (OP) · 11mo ago
```
OutOfMemoryError: CUDA out of memory. Tried to allocate 151.88 GiB (GPU 0; 79.11 GiB total capacity; 55.44 GiB already allocated; 22.45 GiB free; 55.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
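The error's own suggestion, setting `PYTORCH_CUDA_ALLOC_CONF`, is worth knowing about, though it only helps with fragmentation; here the request itself (151.88 GiB) is larger than one GPU's entire 79 GiB capacity, so no allocator setting can make it fit. A minimal sketch of applying it anyway, assuming the variable is set before PyTorch initializes CUDA (the 512 MiB split size is a placeholder, not a tuned value):

```python
import os

# Must be set before torch initializes CUDA, e.g. at the very top of the
# script, or in the shell: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported after the variable is set so the allocator picks it up
```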
flash-singh · 11mo ago
On one GPU it's still 80GB; on 2 GPUs it's 160GB.
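For reference, a quick way to confirm what PyTorch actually sees, assuming PyTorch is installed in the pod:

```python
import torch

# Each GPU is a separate device with its own memory pool; two 80GB
# cards do not merge into one 160GB allocation space.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
```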
DeluxeWalrus (OP) · 11mo ago
My pod has two GPUs. Is there a reason the program isn't utilizing both?
flash-singh · 11mo ago
That's up to the program. You can't use 2 GPUs as one; pooling them requires software support. It's not as simple as adding more RAM.
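To illustrate the point, here is a toy sketch of what that software support looks like in plain PyTorch: the program has to place each piece of the model on a specific GPU and move tensors between them explicitly. This is a hypothetical two-layer model, not Stable Diffusion code:

```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Toy model split by hand across two GPUs (naive model parallelism)."""

    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(1024, 1024).to("cuda:0")  # lives on GPU 0
        self.part2 = nn.Linear(1024, 1024).to("cuda:1")  # lives on GPU 1

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))  # explicit hop between GPUs

model = TwoGPUModel()
print(model(torch.randn(8, 1024)).device)  # cuda:1
```

Without this kind of explicit placement (or a framework that does it for you), a program simply allocates everything on one device and hits that device's 80GB ceiling.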
DeluxeWalrus (OP) · 11mo ago
Okay, thank you for explaining. Sorry, I'm pretty new to Stable Diffusion. Is my only option really a lower-res output?
flash-singh · 11mo ago
Yeah, I think so. I haven't seen a Stable Diffusion setup that can use more than 1 GPU to pool VRAM. Also, use Stable Diffusion XL.
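As a concrete starting point, a minimal SDXL sketch using the diffusers library, with half precision and VAE tiling to keep decode memory down (the checkpoint ID is the public Stability AI release; the prompt and resolution are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # halves weight memory vs fp32
).to("cuda")
pipe.enable_vae_tiling()         # decode the image in tiles to cap VRAM use
pipe.enable_attention_slicing()  # trade some speed for lower attention memory

image = pipe("test prompt", height=1024, width=1024).images[0]
image.save("out.png")
```

Even with these options, a single pass still has to fit on one GPU, so very large outputs are typically reached by upscaling rather than generating at 4K-8K directly.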
DeluxeWalrus (OP) · 11mo ago
Does that have inpainting? Actually, that was probably a stupid question, since I don't even know what that is, haha.