Closed connections
We have started seeing an increased number of
Error: write CONNECTION_CLOSED <id>.hyperdrive.local:5432
which look like peer timeouts from our DB side in the last couple of weeks (maybe more recently?)
Hi!
* Would you mind sharing the Hyperdrive ID?
* Can I get a sense of the scale for "increased number"? 0.1%/1%/10% etc?
* Do you already have application-level retries? If so, do they help?
We don't have them for this error specifically, but I can add them (although I wondered if they would actually help 🙂). Yeah, the full error is Error: write CONNECTION_CLOSED b81529fcf5eb8ab530b1148bdf531a6a.hyperdrive.local:5432
probably between 1-10%? We don't do a ton of traffic right now
seeing a couple a day
at least for the last 4 or 5 days
Most recent was at 2025-02-17 19:08:34 UTC connecting from a worker
* Got it. 1-10% is too high, I want to take a look at that. We usually see this a couple orders of magnitude lower.
* Are you running direct, or over a CF Tunnel?
* We have a track of work scheduled soon to run down some of these. TLDR is that when a connection gets severed while sitting in the pool unused, it may throw this error in some circumstances. We have some mitigations in mind but they'll take a while to land.
* Retries with a clean client connection (i.e. sql.connect()) should always help (there's a rough retry sketch a bit further down).
* Hyperdrive ID, either here or in DM, would be needed for me to dig into your config specifically.
I think the ID is in the error above; running direct, and this is during an active write.
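To illustrate the retry-with-a-clean-connection idea mentioned above, here's a minimal sketch. It assumes node-postgres (pg) and a Hyperdrive binding named HYPERDRIVE; the names and retry count are illustrative rather than anything from this thread:
```ts
// Sketch: retry a query with a fresh connection per attempt, so a connection
// that was severed while idle in the pool isn't reused.
// Assumes node-postgres ("pg") and a Hyperdrive binding named HYPERDRIVE.
import { Client } from "pg";

async function queryWithRetry(
  env: { HYPERDRIVE: Hyperdrive },
  text: string,
  values: unknown[] = [],
  attempts = 3
) {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    // New Client per attempt, so each retry starts from a clean connection.
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    try {
      await client.connect();
      return await client.query(text, values);
    } catch (err) {
      lastError = err; // e.g. write CONNECTION_CLOSED; retry on a fresh client
    } finally {
      await client.end().catch(() => {});
    }
  }
  throw lastError;
}
```
In practice you'd probably only retry connection-level failures (like CONNECTION_CLOSED) rather than every error.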
That ID is not your Hyperdrive ID, no. It's a generated hostname for use within that worker, doesn't really have meaning outside that instance.
Understood, thanks.
oh sorry
heres the ID: f93a1731a3884ec88580ebd65aaaad42
Thank you. I'll give this a look when I have some spare cycles.
appreciate it!
I'll look into retries in the meantime
@iano Ok, I finally had some time to dig into this, apologies for the delay here.
So looking at this, it looks like you're hosting on Supabase, and probably going through Supavisor?
It looks like most of the errors I'm seeing for you have to do with prepared statements getting mishandled, and Supavisor doesn't support prepared statements.
You have two options to fix this.
1. Bypass Supavisor and use direct connections to Supabase.
2. Switch to a driver that doesn't make such heavy use of prepared statements in its message pattern. Node-postgres doesn't, by and large, so that's probably the one I'd try first (rough sketch below).
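To make option 2 concrete, here's a minimal sketch of querying through Hyperdrive with node-postgres from a Worker. It assumes a Hyperdrive binding named HYPERDRIVE, the pg package installed, and (for pg on Workers) the nodejs_compat compatibility flag; the query is just a placeholder:
```ts
// Sketch: querying through the Hyperdrive binding with node-postgres from a Worker.
// Binding name (HYPERDRIVE) and the query are illustrative.
import { Client } from "pg";

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();
    try {
      const result = await client.query("SELECT now()");
      return Response.json(result.rows);
    } finally {
      // Close this client; Hyperdrive handles pooling between requests.
      await client.end();
    }
  },
} satisfies ExportedHandler<Env>;
```
For option 1, the change would instead be pointing the Hyperdrive configuration's connection string at Supabase's direct (non-pooler) endpoint rather than the Supavisor one.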