RunPod•10mo ago
chandhooguy

Clarify RAM available

Hey there! Was thinking of using 4090 pods, and I saw that the deployment only had 60GB of RAM - which seems really low for a machine with 8 4090's. However, in the filter section, it actually states that this is per GPU. It would be great to clarify that in the deployment section as well 🙂 - I'm sure others have been confused as I was.
8 Replies
Jason
Jason•10mo ago
@Zeke Yeah, I think it's a little bit confusing — it might be the total RAM in one pod
chandhooguy
chandhooguyOP•10mo ago
I think that would make more sense
Jason
Jason•10mo ago
Yeah, let's wait for Runpod staff to check on this too later
Aurora
Aurora•10mo ago
I encountered this too. Use the VRAM filter to select an 8x 4090 configuration and the displayed memory will be larger
Madiator2011 (Work)
Madiator2011 (Work)•10mo ago
VRAM and RAM are different things
Aurora
Aurora•10mo ago
I mean if you simply increase the GPU count, it still shows a single-GPU machine's RAM for the multi-GPU machine. But if I request 4x 4090 and filter for 96 GB of VRAM, then it displays more RAM (around 300GB)
flash-singh
flash-singh•10mo ago
that's per GPU — it does look confusing, we'll see how we can make it cleaner
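To summarize the per-GPU behavior described above: the figure shown in the deployment UI is per GPU, so totals scale with the GPU count. A minimal sketch of that arithmetic, using the numbers mentioned in this thread (60 GB RAM per 4090 is the UI figure discussed here, not an official spec; 24 GB VRAM is the RTX 4090's hardware spec):

```python
# Per-GPU figures — the 60 GB RAM value is the one shown in the
# deployment UI in this thread (assumed, not an official spec).
PER_GPU_RAM_GB = 60
PER_GPU_VRAM_GB = 24  # RTX 4090 hardware spec

def pod_totals(gpu_count: int) -> dict:
    """Total RAM and VRAM for a pod with `gpu_count` GPUs,
    assuming resources are allocated per GPU."""
    return {
        "ram_gb": gpu_count * PER_GPU_RAM_GB,
        "vram_gb": gpu_count * PER_GPU_VRAM_GB,
    }

print(pod_totals(8))  # {'ram_gb': 480, 'vram_gb': 192}
```

So an 8x 4090 pod showing "60GB" per GPU would actually have 480 GB of system RAM in total, which matches the larger figure Aurora saw when filtering by VRAM.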
Solution
kaj
kaj•10mo ago
this should have been already fixed. Are you still having issues? It seems to work fine on my end (https://karalite.kaj.rocks/chrome_vCVXLLL3aN.mp4)
