Railway • 8mo ago
sebastiantf

Backend API response is slow even though not using max limits

Railway metrics say we have a max of 8 vCPUs. The most we're using is only 2.6 vCPUs, but we're still seeing degraded API performance.
20 Replies
Percy
Percy • 8mo ago
Project ID: 3b58a837-a508-4969-a318-7c37050d177e
sebastiantf
sebastiantf (OP) • 8mo ago
3b58a837-a508-4969-a318-7c37050d177e
ThallesComH
ThallesComH • 8mo ago
Is your app a Node.js app?
sebastiantf
sebastiantf (OP) • 8mo ago
yes
ThallesComH
ThallesComH • 8mo ago
Then there's your answer: Node.js is single-threaded, so most of the time it can't reach max CPU utilization. You should take a look at replicas.
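A quick way to see that ceiling: run a CPU-bound loop in a single Node process and watch the CPU graph; it will sit near 1 vCPU no matter how many cores the service has. A contrived sketch:
```js
// busy.js - a deliberately CPU-bound loop. One Node process runs JS on a
// single thread, so this pegs roughly one core and no more.
const start = Date.now();
let acc = 0;
while (Date.now() - start < 10_000) {
  acc += Math.sqrt(Math.random()); // pure CPU work, no I/O to yield on
}
console.log('done', acc);
```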
sebastiantf
sebastiantf (OP) • 8mo ago
There's an endpoint that makes a lot of external network calls. I tried disabling that, and it seems to have fixed it for now.
sebastiantf
sebastiantf (OP) • 8mo ago
I'm assuming the degradation was because several requests to that endpoint were still pending, so the newer ones were delayed in getting a response. I tried using throng (https://www.npmjs.com/package/throng) with a worker count of 4, but it didn't help.
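A minimal sketch of that kind of throng setup, assuming throng v5's { worker, count } options (older versions take (count, startFn) instead); the HTTP server below is just a stand-in for the real app:
```js
// cluster.js - fork 4 worker processes, each with its own event loop.
const throng = require('throng');
const http = require('http');

function startWorker(id) {
  // Each worker is a separate process, so 4 workers can use up to ~4 vCPUs
  // between them. The underlying cluster module shares the listening port.
  http
    .createServer((req, res) => res.end(`ok from worker ${id}\n`))
    .listen(process.env.PORT || 3000);
}

throng({ count: 4, worker: startWorker });
```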
ThallesComH
ThallesComH • 8mo ago
Yeah, if you want to handle those requests you'll need to scale, because Node.js won't magically work just by throwing more resources at it. My guess is that your worker count is too low and you're doing a lot of resource-intensive tasks.
ThallesComH
ThallesComH • 8mo ago
If you need something more advanced, see https://github.com/taskforcesh/bullmq
ThallesComH
ThallesComH • 8mo ago
Then you can have multiple instances of your Node.js app processing a queue.
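A minimal sketch of that pattern with BullMQ; the queue name, Redis connection details, and the doExternalCalls helper are placeholders:
```js
// The API process enqueues the slow external-call work instead of doing it
// inside the request handler.
const { Queue, Worker } = require('bullmq');

const connection = { host: process.env.REDIS_HOST || '127.0.0.1', port: 6379 };

const queue = new Queue('external-calls', { connection });
async function enqueue(payload) {
  await queue.add('fetch', payload);
}

// One or more worker processes (or replicas) pull jobs off the same queue.
const worker = new Worker(
  'external-calls',
  async (job) => doExternalCalls(job.data), // hypothetical helper doing the real network calls
  { connection, concurrency: 5 }
);
worker.on('completed', (job) => console.log(`job ${job.id} done`));
```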
sebastiantf
sebastiantf (OP) • 8mo ago
Yeah, we use BullMQ for something else. We just had huge traffic over the last few hours that wasn't anticipated. Will try out replicas. How does it affect billing?
ThallesComH
ThallesComHβ€’8mo ago
well get your current service usage times your amount of replicas replicas is just another instance of your application running
sebastiantf
sebastiantf (OP) • 8mo ago
How would multiple throng workers and replicas differ? Do all the replicas share the same 8 vCPUs, or do they each get 8 vCPUs?
ThallesComH
ThallesComH • 8mo ago
Replicas are the way to go if you're planning on scaling past 8 vCPUs (or buying the Pro plan for 32 vCPUs). Throng will only take you to the max of a single service's resources (8 or 32 vCPUs); replicas get that limit each.
sebastiantf
sebastiantf (OP) • 8mo ago
Got it, thanks! I suppose using more throng workers might help use the max of the resources currently available.
ThallesComH
ThallesComH • 8mo ago
Yeah, for now that might fix it, but you'll want to take a look at replicas sooner or later.
sebastiantf
sebastiantf (OP) • 8mo ago
Gonna try that out first, since the max being used is just 2.5 vCPUs without throng and around 5 vCPUs with 4 throng workers. And yup, will do.
ThallesComH
ThallesComH • 8mo ago
Yeah, launch 7 throng workers and see how it goes.
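Rather than hard-coding 7, you can size the count from the CPUs Node reports and leave one core of headroom (same throng v5 assumption as above). Note that inside a container os.cpus() can report the host's cores rather than the plan's limit, so an env override is handy:
```js
const os = require('os');
const throng = require('throng');

// WEB_CONCURRENCY is just a conventional override, not anything Railway-specific.
const count =
  Number(process.env.WEB_CONCURRENCY) || Math.max(1, os.cpus().length - 1);

throng({
  count,
  worker: (id) => {
    console.log(`worker ${id} of ${count} started`);
    // start the real HTTP server here
  },
});
```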
sebastiantf
sebastiantf (OP) • 8mo ago
Got it. Thank you so much for the support! 🙏