I put together a very simple example repo showing a Worker + Durable Object + Hyperdrive (Postgres) integration
https://github.com/shadrach-tayo/cloudflare-worker-example
I just recently migrated our Automerge CRDT sync server architecture to use PartyKit + Cloudflare Workers Durable Objects + Hyperdrive (Postgres), with a tunnel deployed in our Kubernetes cluster.
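For anyone who just wants the shape of the setup, the bindings in wrangler.toml look roughly like this (a minimal sketch; the binding names, class name, and Hyperdrive ID are placeholders, not copied from the repo):
```toml
# Minimal sketch of wrangler.toml for a Worker + Durable Object + Hyperdrive setup.
# Binding names, the class name, and the Hyperdrive ID are placeholders.
name = "worker-hyperdrive-example"
main = "src/index.ts"
compatibility_date = "2024-11-01"

# Hyperdrive binding pointing at a config created in the dashboard or via `wrangler hyperdrive create`
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"

# Durable Object binding for the sync server class exported from the Worker
[[durable_objects.bindings]]
name = "SYNC_SERVER"
class_name = "SyncServer"

# Durable Object classes must be declared in a migration the first time they ship
[[migrations]]
tag = "v1"
new_classes = ["SyncServer"]
```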
Happy hacking weekend guys 🎉
Hey @AJR I'm facing the same issue.
Let me pass that along to the team. Thanks for letting me know.
Team checked and things seem to be working right now. We'll have someone reach out to you for details to try to reproduce, if you don't mind.
Hi folks. If you're encountering issues when trying to develop locally with Wrangler, please downgrade to 3.85.0. This is a known issue and a fix will be released soon.
https://www.cloudflarestatus.com/incidents/r6ltkr1t0vkr
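In case it's useful, pinning the older release is just:
```sh
# pin the known-good Wrangler release until the fix ships
npm install --save-dev wrangler@3.85.0
```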
This fix has been released in Wrangler version 3.92.0. Please make sure to upgrade if you run into errors with connectivity.
This new version brings back localConnectionString for local development with Hyperdrive, for databases that do not require SSL.
Support for remote databases (which typically require SSL connectivity), and for SSL-requiring databases more broadly, is something we're continuing to work on bringing to the Hyperdrive CLI.
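For reference, the local-dev override goes on the Hyperdrive binding in wrangler.toml, roughly like this (the connection string is a placeholder for a local, non-SSL Postgres):
```toml
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
# used only by `wrangler dev`; points at a local Postgres that doesn't require SSL
localConnectionString = "postgresql://user:password@localhost:5432/mydb"
```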
Hey team, what's the best way of getting more context about these errors?
Is there an API endpoint I can use to retrieve the error status and messages?
Do I have to manually log the error on my worker?
Is there any way to use hyperdrive without SSL?
I tried to enable SSL on my Postgres database, and for some reason it caused our infrastructure of janky old Windows POS machines to fall over.
Hey team, I'm seeing an elevated amount of
Error: Connection terminated unexpectedly
in our setup (Hyperdrive enabled). We recently shipped some changes that increased the number of calls made inside a waitUntil function. Could that code be the reason we're seeing more of these errors?
It's going to depend mostly on where you host your database and where your users are, plus whether you cache or not. In general, Hyperdrive itself will add very minimal overhead, single-digit milliseconds generally. Latency for us is about network travel more than anything else.
Except for cache hits, those are stupid fast (and going to get way, way faster :soonsoontm:). All that said, I think many folks see less than 80ms latencies, so I'd encourage you to try it out for yourself. Many users point Hyperdrive at Supabase with good success, if you do end up going with them.
Yes. The supabase-js client works well in a serverless environment like Workers, but it can't take advantage of CF internals the way Hyperdrive can. So folks often find that postgres.js over Hyperdrive to Supabase ends up being a good fit for them.
Not that it makes a bunch of difference for your use case, but Hyperdrive is actually all Unix sockets and RPC under the hood, at least until the queries egress back off our network. Much less overhead than HTTP, yeah.
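For anyone wanting the concrete shape of the postgres.js-over-Hyperdrive setup mentioned above, a minimal sketch looks something like this (the binding name, table, and query are illustrative):
```ts
import postgres from "postgres";

// Illustrative binding name; must match the [[hyperdrive]] binding in wrangler.toml.
interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Hyperdrive exposes a connection string that routes the driver through its pooler/cache
    const sql = postgres(env.HYPERDRIVE.connectionString, { max: 5 });

    // placeholder table/query, just to show the tagged-template API
    const rows = await sql`SELECT id, name FROM users LIMIT 10`;

    // close the connection after the response is sent, without blocking it
    ctx.waitUntil(sql.end());

    return Response.json(rows);
  },
};
```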
Also the issue with Atlas wasn't HTTP but the fact that MongoDB Atlas actually ran on AWS Lambda, and their cold-start handling was suboptimal so you'd constantly get Lambda cold-starts :NotLikeThis:
HTTP works fine for DB queries but yeah a connection-oriented protocol is going to be better unless you have a reason to use HTTP
I'd also add that normally HTTP/1.1 and HTTP/2 have TCP keep-alive, which doesn't really exist in Workers because a Worker might be triggered on a different machine at any time, so it can't keep a connection alive the way you could from e.g. a Docker container
So you pay for the TCP (+ maybe TLS) handshake every time for HTTP too
@PatrickJ it's also worth trying out Hyperdrive + Supabase's Postgres DB; some customers have done so with good success. You would need to use Hyperdrive as your pooler instead of Supabase's pooler, but that ensures you have an edge-aware pooler with caching included
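If you go that route, the Hyperdrive config would be created against the database's direct connection string rather than Supabase's pooler URL, roughly like this (placeholder credentials and hostname):
```sh
# point Hyperdrive at the direct Postgres connection string, not the Supabase pooler URL
npx wrangler hyperdrive create my-hyperdrive \
  --connection-string="postgres://user:password@db.example.supabase.co:5432/postgres"
```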