Closed connections

We have started seeing an increased number of `Error: write CONNECTION_CLOSED <id>.hyperdrive.local:5432` errors, which look like peer timeouts from our DB side, over the last couple of weeks (maybe more recently?).
AJR
AJR•3w ago
Hi!
* Would you mind sharing the Hyperdrive ID?
* Can I get a sense of the scale for "increased number"? 0.1% / 1% / 10%, etc.?
* Do you already have application-level retries? If so, do they help?
iano
ianoOP•3w ago
We do not for this error specifically, but I can add them (although I wondered if they would help also 🙂). Yeah, the full error is `Error: write CONNECTION_CLOSED b81529fcf5eb8ab530b1148bdf531a6a.hyperdrive.local:5432`. Probably between 1-10%? We don't do a ton of traffic right now; we've been seeing at least a couple a day for the last 4 or 5 days. The most recent was at 2025-02-17 19:08:34 UTC, connecting from a Worker.
AJR
AJR•3w ago
* Got it. 1-10% is too high; I want to take a look at that. We usually see this a couple orders of magnitude lower.
* Are you running direct, or over a CF Tunnel?
* We have a track of work scheduled soon to run down some of these. TL;DR: when a connection gets severed while sitting unused in the pool, it may throw this error in some circumstances. We have some mitigations in mind, but they'll take a while to land.
* Retries with a clean client connection (i.e. `sql.connect()`) should always help.
* Your Hyperdrive ID, either here or in DM, would be needed for me to dig into your config specifically.
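The retry advice above can be sketched roughly as follows. This is a hypothetical helper, not an official Hyperdrive or postgres.js API: `makeClient` and `queryFn` are placeholders, and with postgres.js the idea would be to build a fresh `postgres(...)` client on each attempt rather than reusing the one that just threw.

```javascript
// Sketch of application-level retries for transient CONNECTION_CLOSED errors.
// Assumption: makeClient() returns a fresh client each call; queryFn(client)
// runs the actual query. Only connection-closed errors are retried.
async function withRetries(makeClient, queryFn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    const client = makeClient(); // clean connection per attempt
    try {
      return await queryFn(client);
    } catch (err) {
      lastError = err;
      // Rethrow anything that isn't the transient error we're working around.
      if (!/CONNECTION_CLOSED/.test(String(err))) throw err;
      // In a real driver you would also close/dispose the failed client here.
    }
  }
  throw lastError;
}
```

A real implementation would likely add a small backoff between attempts and make sure the retried statement is safe to re-run (e.g. idempotent writes).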
iano
ianoOP•3w ago
I think the ID is in the error above. We're running direct, and this happens during an active write.
AJR
AJR•3w ago
That ID is not your Hyperdrive ID, no. It's a generated hostname for use within that Worker and doesn't really have meaning outside that instance. Understood, thanks.
iano
ianoOP•3w ago
oh sorry, here's the ID: f93a1731a3884ec88580ebd65aaaad42
AJR
AJR•3w ago
Thank you. I'll give this a look when I have some spare cycles.
iano
ianoOP•3w ago
appreciate it! I'll look into retries in the meantime
AJR
AJR•2w ago
@iano Ok, I finally had some time to dig into this; apologies for the delay. Looking at this, it appears you're hosting on Supabase, and probably going through Supavisor? Most of the errors I'm seeing for you have to do with prepared statements getting mishandled, and Supavisor doesn't support prepared statements. You have two options for a fix:
1. Bypass Supavisor and use direct connections to Supabase.
2. Switch to a driver that doesn't make such heavy use of prepared statements in its message pattern. node-postgres doesn't, by and large, so that's probably the one I'd try first.
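For context on option 2: node-postgres (`pg`) only creates a named server-side prepared statement when you explicitly pass a `name` on the query config; a plain `client.query(text, values)` call does not, which is why it tends to play better with poolers like Supavisor. This tiny helper is purely illustrative (it is not part of the `pg` API) and just flags the query shape that would trigger a named prepared statement:

```javascript
// Hypothetical helper: returns true when a node-postgres-style query config
// would create a named server-side prepared statement (i.e. it carries a
// `name` property), the pattern that transaction-mode poolers mishandle.
function usesNamedPreparedStatement(queryConfig) {
  return (
    typeof queryConfig === "object" &&
    queryConfig !== null &&
    "name" in queryConfig
  );
}
```

So with `pg`, preferring `client.query("SELECT * FROM items WHERE id = $1", [id])` over `client.query({ name: "fetch-item", text: ..., values: ... })` avoids the named-statement pattern entirely.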