Clarify RAM available
Hey there! I was thinking of using 4090 pods, and I saw that the deployment only had 60 GB of RAM - which seems really low for a machine with 8 4090s.
However, in the filter section, it actually states that this is per GPU. It would be great to clarify that in the deployment section as well 🙂 - I'm sure others have been confused as I was.
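To illustrate the confusion: a minimal sketch, assuming (as the filter section states) that the displayed 60 GB figure is per GPU rather than per pod. The variable names and values here are just for illustration.

```python
# Hypothetical illustration: if the "60 GB RAM" shown in the deployment
# view is per GPU (as the filter section says), the pod total scales
# with GPU count rather than staying at 60 GB.
ram_per_gpu_gb = 60   # figure shown in the deployment section (assumed per GPU)
gpu_count = 8         # e.g. an 8x 4090 pod

total_ram_gb = ram_per_gpu_gb * gpu_count
print(total_ram_gb)   # 480 GB for the whole pod, not 60 GB
```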
Solution:
This should already have been fixed. Are you still having issues? It seems to work fine on my end (https://karalite.kaj.rocks/chrome_vCVXLLL3aN.mp4)
@Zeke
Yeah, I think it's a little bit confusing; it might be the total RAM in one pod
I think that would make more sense
Yeah, let's wait for Runpod staff to check on this too later
I encountered this too. Use the "Filter by VRAM" option to select an 8x 4090 configuration and the displayed memory will be larger
VRAM and RAM are different things
I mean, if you simply increase the GPU count, it keeps showing a single-GPU machine's RAM for the multi-GPU machine. But if I want 4x 4090 and filter for 96 GB of VRAM, then it displays more RAM (around 300 GB)
That's per GPU. It does look confusing; we'll see how we can make it cleaner