RunPod•6mo ago
Harish

Clarify RAM available

Hey there! I was thinking of using 4090 pods, and I saw that the deployment only had 60GB of RAM, which seems really low for a machine with eight 4090s. However, the filter section states that this figure is per GPU. It would be great to clarify that in the deployment section as well 🙂 - I'm sure others have been as confused as I was.
nerdylive
nerdylive•6mo ago
@Zeke Yeah, I think it's a little bit confusing - it might be the total RAM in one pod
Harish
HarishOP•6mo ago
I think that would make more sense
nerdylive
nerdylive•6mo ago
Yeah, let's wait for RunPod staff to check on this later too
Aurora
Aurora•6mo ago
I encountered this too. Use the "Use VRAM to filter" option to select 8x 4090 and the memory shown will be larger
Madiator2011 (Work)
Madiator2011 (Work)•6mo ago
VRAM and RAM are different things
Aurora
Aurora•6mo ago
I mean, if you simply change the GPU count, it still shows a single-GPU machine's RAM for the multi-GPU machine. But if I want 4x 4090 and filter for 96 GB of video RAM, then it displays more RAM (around 300GB)
flash-singh
flash-singh•6mo ago
that's per GPU - it does look confusing, will see how we can make it cleaner
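For reference, a minimal sketch of the arithmetic the thread settles on: the RAM figure in the deployment view is per GPU, so the pod's total RAM is that figure multiplied by the GPU count. The 60GB value comes from the original post; the variable names are illustrative, not RunPod's API.

```python
# Minimal sketch of the per-GPU RAM arithmetic discussed above.
# The 60 GB figure comes from the thread; names are illustrative.

ram_per_gpu_gb = 60   # RAM shown in the deployment view, per GPU
gpu_count = 8         # e.g. an 8x 4090 pod

total_ram_gb = ram_per_gpu_gb * gpu_count
print(f"Total pod RAM: {total_ram_gb} GB")  # -> Total pod RAM: 480 GB
```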
Solution
kaj
kaj•6mo ago
This should already have been fixed. Are you still having issues? It seems to work fine on my end (https://karalite.kaj.rocks/chrome_vCVXLLL3aN.mp4)