Gunicorn worker timeouts

I have a Django service that is run with gunicorn workers. For some reason, my more expensive DB requests are timing out and crashing. The log shows:
[2024-10-27 15:11:32 +0000] [30] [CRITICAL] WORKER TIMEOUT (pid:30356)
[2024-10-27 15:11:33 +0000] [30] [ERROR] Worker (pid:30356) was sent SIGKILL! Perhaps out of memory?
[2024-10-27 15:11:34 +0000] [30408] [INFO] Booting worker with pid: 30408
I also attached the resource usage. It shouldn't be memory or vCPU, since there is plenty of both left. Unless for some reason gunicorn doesn't recognize that it can tap into more memory for a given worker? Is that possible? I am not 100% sure what is going on. Then again, it might just be a timeout and not memory related. I still find it strange, though, that the memory consumption for Django + workers is so steady, with no change in levels.
Project ID: 1b326884-0c17-43ed-9b52-f443662e8f50
Percy 4w ago
Project ID: 1b326884-0c17-43ed-9b52-f443662e8f50
Joshie (OP) 4w ago
Hmmmm, actually, it might just be timing out and I need to raise the timeout limit. That is fine. But I do still wonder about the memory being so steady. And while the query is expensive, it shouldn't be that expensive. Strange for sure. :hmmmm: Oh well, I shall figure it out.
Yea, the default timeout is 30s. I don't know why, but I thought it was 60s. I will try to bump it up and see if that fixes my issue. Likely it will. But it is upsetting that this query is taking that long, because when I look at the Postgres logs, it has zero complaints. So it must really be a timeout and not memory related.
Lol, this is such a sad day. Yea, the query is just too slow. :cry_gil: Ok, no issues. Thanks.
Time to add a loading indicator on the page. smh. Clearly none of the users actually use this query, since not a single complaint has come in .. but :shrug:
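For context, the limit in question is gunicorn's worker timeout. A minimal sketch of raising it via a config file, assuming a plain sync-worker setup; the bind address, worker count, and 120-second value below are placeholders, not this project's real settings:

# gunicorn.conf.py (sketch); load it with `gunicorn -c gunicorn.conf.py myproject.wsgi`
# where myproject.wsgi is a placeholder module path.
bind = "0.0.0.0:8000"   # placeholder bind address
workers = 2             # placeholder worker count
timeout = 120           # seconds a sync worker may spend on one request
                        # before the arbiter kills it (the default is 30)

The same value can also be passed directly on the command line with --timeout 120.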
Brody 4w ago
database on railway and you're connecting to it via the private network, right?
Joshie (OP) 4w ago
Yea. Wait ... maybe it isn't ???? Hold up, I gotta go check something. :scared:
Alright, yea, it was calling PGHOST and not PGPRIVATEHOST. Switching it now and deploying. Let's see if that makes things better. I guess, yea, I have not touched this project in quite a bit of time .... oops
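For anyone following along, the change being described is roughly the following. A minimal sketch, assuming the credentials arrive as the Postgres-style variables named in this thread (PGDATABASE, PGUSER, PGPASSWORD, PGPRIVATEHOST, PGPORT); the exact names may be project-specific rather than defaults:

# settings.py (sketch): point Django at the private-network host.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["PGDATABASE"],
        "USER": os.environ["PGUSER"],
        "PASSWORD": os.environ["PGPASSWORD"],
        "HOST": os.environ["PGPRIVATEHOST"],  # was PGHOST (the public host)
        "PORT": os.environ["PGPORT"],  # note: the private-network port can differ, see below
    }
}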
Brody 4w ago
these things happen
Joshie (OP) 4w ago
:cry_gil: time to debug
Joshie (OP) 4w ago
(Also, the new old builder is much faster lol, thanks.) Ah, I think it is because the private port is different from the public one. Any reason for this?
Brody 4w ago
The reason for the private port being different from the public port?
Joshie (OP) 4w ago
No, actually, I can intuit the reason. But there is no variable reference for it? Is that accurate? I can't seem to find one.
Brody 4w ago
exposing 5432 for everyone's database isn't scalable haha
Joshie (OP) 4w ago
No RAILWAY_PRIVATE_TCP_PROXY_PORT
Brody 4w ago
RAILWAY_TCP_APPLICATION_PORT
Joshie (OP) 4w ago
Ah yep, just found that
Brody 4w ago
as long as you have a TCP proxy on the database, that is what you want to use for the internal port
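Putting the two pieces together: over the private network the host is the private one, and the port is the port the database itself listens on, not the public proxy port. A minimal sketch under those assumptions, using only the variable names mentioned in this thread; how they get exposed to the Django service (for example via variable references) is project-specific:

# Sketch: assemble the private-network DSN from the variables named in this thread.
import os

private_dsn = "postgresql://{user}:{pw}@{host}:{port}/{db}".format(
    user=os.environ["PGUSER"],
    pw=os.environ["PGPASSWORD"],
    host=os.environ["PGPRIVATEHOST"],                  # private host
    port=os.environ["RAILWAY_TCP_APPLICATION_PORT"],   # internal port, not the proxy port
    db=os.environ["PGDATABASE"],
)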
Joshie (OP) 4w ago
Yea. And it does make sense as a name once I know about it. I was just looking for the mirror of the other var name with PRIVATE in it.
Brody 4w ago
that's fair
Solution
Joshie 4w ago
Eh. Honestly changing it to be through the private proxy didn't really change things. But either way, the original problem was a skill issue and so this thread can be marked solved.
Brody 4w ago
sounds good to me