RunPod•4w ago
Harish

Clarify RAM available

Hey there! Was thinking of using 4090 pods, and I saw that the deployment only had 60GB of RAM - which seems really low for a machine with 8 4090s. However, in the filter section, it actually states that this is per GPU. It would be great to clarify that in the deployment section as well 🙂 - I'm sure others have been confused like I was.
nerdylive•4w ago
@Zeke Yeah I think it's a little bit confusing, might be total RAM in one pod
Harish•4w ago
I think that would make more sense
nerdylive•4w ago
Yeah, let's wait for RunPod staff to check on this later too
Aurora•3w ago
I encountered this too. Use the VRAM filter to select an 8x 4090 machine and the memory shown will be larger
Madiator2011 (Work)•3w ago
VRAM and RAM are different things
Aurora•3w ago
I mean if you simply change the GPU count, it shows the 1-GPU machine's RAM for the multi-GPU machine. But if I request 4x 4090 and filter for 96 GB of video RAM, then it displays more RAM (around 300 GB)
flash-singh•3w ago
that's per GPU, it does look confusing, will see how we can make it cleaner
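To make the per-GPU reading concrete, here is a minimal sketch of the arithmetic. The 60 GB and 8x 4090 figures are the ones mentioned in this thread; the variable names are illustrative and actual values depend on the host machine.

```python
# Sketch of the per-GPU RAM interpretation confirmed above:
# the RAM figure shown is per GPU, so total pod RAM scales
# with the number of GPUs attached.
ram_per_gpu_gb = 60   # RAM shown in the deployment view (per GPU)
gpu_count = 8         # e.g. an 8x 4090 pod

total_ram_gb = ram_per_gpu_gb * gpu_count
print(f"Total pod RAM: {total_ram_gb} GB")  # -> Total pod RAM: 480 GB
```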
Solution
kaj•3w ago
This should have already been fixed. Are you still having issues? It seems to work fine on my end (https://karalite.kaj.rocks/chrome_vCVXLLL3aN.mp4)