Closed connections
We have started seeing an increased number of
Error: write CONNECTION_CLOSED <id>.hyperdrive.local:5432
which look like peer timeouts from our DB side over the last couple of weeks (maybe more recently?)
Hi!
* Would you mind sharing the Hyperdrive ID?
* Can I get a sense of the scale for "increased number"? 0.1%/1%/10% etc?
* Do you already have application-level retries? If so, do they help?
We do not for this error specifically, but I can add them (although I wondered if they would help also 🙂 ). Yeah, the full error is Error: write CONNECTION_CLOSED b81529fcf5eb8ab530b1148bdf531a6a.hyperdrive.local:5432
probably between 1-10%? We don't do a ton of traffic right now
seeing a couple a day
at least for the last 4 or 5 days
Most recent was at 2025-02-17 19:08:34 UTC connecting from a worker
* Got it. 1-10% is too high, I want to take a look at that. We usually see this a couple orders of magnitude lower
* Are you running direct, or over a CF Tunnel?
* We have a track of work scheduled soon to run down some of these. TLDR is that when a connection gets severed while sitting in the pool unused, it may throw this error in some circumstances. We have some mitigations in mind but they'll take a while to land.
* Retries with a clean client connection (i.e. sql.connect()) should always help.
* Hyperdrive ID, either here or in DM, would be needed for me to dig into your config specifically.
I think the ID is in the error above, running direct, and this is during an active write
That ID is not your Hyperdrive ID, no. It's a generated hostname for use within that worker, doesn't really have meaning outside that instance.
Understood, thanks.
oh sorry
heres the ID: f93a1731a3884ec88580ebd65aaaad42
Thank you. I'll give this a look when I have some spare cycles.
appreciate it!
I'll look into retries in the meantime
@iano Ok, I finally had some time to dig into this, apologies for the delay here.
So looking at this, it looks like you're hosting on supabase, and probably going through supavisor?
It looks like most of the errors I'm seeing for you have to do with prepared statements getting mishandled, and Supavisor doesn't support prepared statements.
You have two options for fixes for this.
1. Bypass supavisor, use direct connections to supabase.
2. Switch to a driver that doesn't make such heavy use of prepared statements in its message pattern. Node-postgres doesn't, by and large, so that's probably the one I'd try first.
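A rough sketch of what option 2 could look like in a Worker, assuming a Hyperdrive binding named HYPERDRIVE and an illustrative query (neither is from the thread):

// Hypothetical Worker using node-postgres (pg) through a Hyperdrive binding.
// The binding name HYPERDRIVE and the query are illustrative.
import { Client } from "pg";

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Hyperdrive exposes a connection string pointing at its pooled endpoint.
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();
    try {
      const result = await client.query("SELECT now()");
      return Response.json(result.rows);
    } finally {
      // Return the connection to the pool once the request is done.
      await client.end();
    }
  },
};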
I think we are using direct connections, but also prepared statements are disabled in our postgres.js config I believe
Or at least we pass prepare: false when initializing
we are using supabase though
prepare: false
Ah, this is probably not helping you. I would remove this from your connection settings if possible. postgres.js doesn't actually avoid prepared statements with this setting, it just uses the "unnamed" prepared statement, but in ways that are a bit questionable.
ohhhhhhhh
wowwww
Do yall generally suggest node-postgres with hyperdrive? I kind of thought postgres.js was the suggested library
It is, but there are settings on it that definitely aren't always appropriate for use on Workers.
fetch_types is another one that doesn't work great, and it's defaulted to true. We explicitly mention that one in our docs, at least. The situation with Supavisor is a bit unfortunate because it really knocks postgres.js out of consideration. If you're stuck working with it, though, node-postgres is a really good backup option for a driver.
ok, I'll audit and make sure we are connecting directly
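For reference, a hedged sketch of what the postgres.js settings being discussed might look like; the binding name and max value are illustrative, and prepare is simply left at its default rather than set to false:

// Hypothetical postgres.js setup reflecting the advice above.
import postgres from "postgres";

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

function makeSql(env: Env) {
  return postgres(env.HYPERDRIVE.connectionString, {
    // No prepare: false here; when connecting directly (no Supavisor),
    // the default prepared-statement behavior is fine.
    fetch_types: false, // avoid the extra type-fetch round trip on Workers
    max: 5,             // keep the per-isolate connection count small (illustrative)
  });
}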
yeah they are all connecting directly, I will stop setting prepare: false. We already have types: false. Anything else we can do? Still seeing these errors unfortunately
specifically we are using port 5432 https://supabase.com/docs/guides/database/connecting-to-postgres
Unfortunately, these look like occasional disconnects from your origin's side of the connections. There's some additional retry behavior we're looking to add, such that if it's safe to retry a query and a connection was severed while waiting in the pool, we can just re-send it instead of returning an error. That's still on our roadmap though, due to the risk of accidentally sending the wrong queries multiple times.
In the meantime, I'd say see if another hosting provider offers you more stability, or add in retry logic to your application. Ultimately we won't ever be able to safely retry INSERT/UPDATE queries anyway, so it's worth doing as a general principle.
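As an illustration of the application-level retry idea (not code from the thread): retry only when the failure looks like a severed connection, use a fresh client per attempt, and never wrap INSERT/UPDATE in it. All names are illustrative.

import postgres from "postgres";

// Hypothetical retry wrapper for read-only queries: retries only on errors that
// look like a severed connection, up to a small number of attempts.
async function retryRead<T>(run: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await run();
    } catch (err) {
      lastError = err;
      const closed = err instanceof Error && err.message.includes("CONNECTION_CLOSED");
      if (!closed) throw err; // don't retry unrelated failures
    }
  }
  throw lastError;
}

// Usage sketch: open a clean postgres.js client on each attempt, per the
// "clean client connection" advice earlier in the thread.
async function listWidgets(connectionString: string) {
  return retryRead(async () => {
    const sql = postgres(connectionString, { fetch_types: false });
    try {
      return await sql`SELECT * FROM widgets`;
    } finally {
      await sql.end();
    }
  });
}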
Any chance you can see anything I can bring to supabase to better describe the problem?
Hmmmm. I do see you're in a very, very busy data center. Let me move you a bit, and see if that helps stability too.
Hang on, I can get you exact UTC timestamps. That'll help with their debugging. Ok, I've moved you to another datacenter to see if that helps general stability. Routing to origin can sometimes be improved that way, please let me know if it helps. To debug on the supabase side, here are a handful of the exact timestamps your connections were severed:
thank you!
I'll ask them, appreciate all of the help