Railway15mo ago
fedev

Big memory usage

I have a Node.js app running some tasks, and the RAM usage shown in the Railway charts is incredibly high. I tested it locally and did not get this behaviour. I also added logs of heap usage every 2 seconds (using process.memoryUsage().heapUsed) and they stayed consistently between 0.7 and 1.1 GB, while the Railway charts rapidly grew to 4 GB of usage (and even reached 22 GB in another deployment) before I stopped it.
[two attached screenshots]
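For reference, a minimal sketch of that kind of periodic logging (an assumed setup, not the OP's actual code). The useful comparison is heapUsed versus rss, since a container-level memory chart like Railway's tracks the process's resident memory rather than just the V8 heap:

```js
// Minimal sketch: log Node memory stats every 2 seconds.
// heapUsed only covers the V8 heap; rss (resident set size) is much closer
// to what a container-level memory chart reports, since it also includes
// Buffers, native addons, and code.
const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  console.log(
    `rss=${toMB(rss)}MB heapUsed=${toMB(heapUsed)}MB ` +
      `heapTotal=${toMB(heapTotal)}MB external=${toMB(external)}MB`
  );
}, 2000);
```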
16 Replies
Percy
Percy15mo ago
Project ID: 4900bb8f-ad4a-4e22-bbe9-115063046480
fedev
fedevOP15mo ago
4900bb8f-ad4a-4e22-bbe9-115063046480
Faolain
Faolain15mo ago
I'm not part of the Railway team, but I had a similar issue on a Python app that the Railway team helped diagnose. Do you happen to be reading the maximum number of CPUs anywhere in your code to spawn workers? In my case that was happening, and even on the hobby plan it made an app that normally takes 300 MB use up 3 GB of RAM. I realized I needed to be explicit about how many workers to spawn. The deploy logs could help narrow down what could be causing it.
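A hedged sketch of what that pattern can look like in Node (Faolain's case was Python): inside a container, os.cpus().length typically reports the host machine's CPU count rather than your plan's allotment, so sizing a worker pool from it can multiply memory usage far beyond what you expect. WEB_CONCURRENCY below is just an illustrative name for whatever setting your app reads.

```js
// Risky vs. explicit worker sizing in a Node cluster.
const cluster = require('node:cluster');
const os = require('node:os');

// Risky default: one worker per CPU the host reports inside the container.
const detected = os.cpus().length;

// Safer: an explicit, configurable worker count with a small fallback.
// WEB_CONCURRENCY is a hypothetical env var name used for illustration.
const workers = Number(process.env.WEB_CONCURRENCY) || 2;

if (cluster.isPrimary) {
  console.log(`host reports ${detected} CPUs; spawning ${workers} workers`);
  for (let i = 0; i < workers; i += 1) cluster.fork();
} else {
  // worker code goes here
}
```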
fedev
fedevOP15mo ago
ok thanks, how do I do that?
Brody
Brody15mo ago
who from the railway team helped you diagnose that? because neither me nor coffee works for railway lol

fedev, with the wonderful information Faolain has provided, please try to do some research around the topic yourself too
Faolain
Faolain15mo ago
I assumed you worked at railway?
Brody
Brody15mo ago
nope
Faolain
Faolain15mo ago
You can tap the deploy logs tab on the UI when you click into the service that's using all the memory
Brody
Brody15mo ago
[attached screenshot]
Faolain
Faolain15mo ago
So same thing hehe. Maybe it would be good, @Brody, if this was in the Railway documentation somewhere, since it seems like a lot of people come across it?
Brody
Brody15mo ago
yes it would be good, where would it go though
Faolain
Faolain15mo ago
maybe under Troubleshoot in a new section, or under Fixing Common Errors (https://docs.railway.app/troubleshoot/fixing-common-errors): a section on big memory usage, showing the steps from above to triage what could be causing it (checking the deploy logs, checking whether your code reads the default max worker count)
Brody
Brody15mo ago
well ideally, it just wouldn't be a problem lol
Faolain
Faolain15mo ago
agreed. I think you mentioned you escalated this to the Railway dev team, right?
Brody
Brody15mo ago
yep, they know. it looks like this can be solved by passing --cpuset-cpus="0-7" to the docker run command, though of course this isn't something a user can do