Reusing DB instance with process.env
Hi everyone, now that accessing process.env is possible, when I create a db file with a connection to the db:
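(The original snippet didn't survive the transcript; a minimal sketch of the pattern that triggers this, assuming drizzle-orm with the postgres-js driver and a `DATABASE_URL` variable, would be:)

```ts
// db.ts — module-level connection, created once when the module first loads
// (illustrative reconstruction; driver and env var name are assumptions)
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

const client = postgres(process.env.DATABASE_URL!);
export const db = drizzle(client);
```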
I get this error:
Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request’s handler. This is a limitation of Cloudflare Workers which allows us to improve overall performance.
What's the explanation here? I thought that since every request is different, the module would be initialized per request.
You should be building the DB client on request, rather than exporting it
So like,
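(The code block is missing from the transcript; the suggestion was presumably along these lines — the function name and driver are illustrative:)

```ts
// db.ts — export a factory instead of a shared instance
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

export function getDb(connectionString: string) {
  // Build a fresh client per call, so no I/O object outlives its request
  return drizzle(postgres(connectionString));
}
```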
But does that mean I have to pass it around, or is calling the function enough?
Thanks
Yeah, when you need a DB connection, call the function, don't pass the connection around
Thanks a ton. I'm wondering, do you do something under the hood with the connection? Or should I memoize it?
You should be creating it anew on every request. You can also use https://developers.cloudflare.com/hyperdrive/, which handles connection-pooling and caching for you
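(For reference, a minimal sketch of the Hyperdrive route — it assumes a Hyperdrive binding named `HYPERDRIVE` in the Wrangler config and the `Hyperdrive` type from @cloudflare/workers-types:)

```ts
// worker.ts — build the client per request from the Hyperdrive binding
import { drizzle } from "drizzle-orm/postgres-js";
import { sql } from "drizzle-orm";
import postgres from "postgres";

interface Env {
  HYPERDRIVE: Hyperdrive; // binding name is an assumption
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Hyperdrive pools connections upstream; we still build per request here
    const db = drizzle(postgres(env.HYPERDRIVE.connectionString));
    const result = await db.execute(sql`select 1 as ok`);
    return Response.json(result);
  },
};
```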
Mmm yes, I'm using it. Maybe I didn't explain myself well. I want to create the connection once and then import the db anywhere, as cleanly as possible, without having to pass the created instance all over the place. I'm not sure this solves that without caching the output of the function.
You probably want AsyncLocalStorage then. You would build the database connection when you start processing, and then the DB connection is automatically passed down on import
Cloudflare Docs
AsyncLocalStorage · Cloudflare Workers docs
Cloudflare Workers provides an implementation of a subset of the Node.js AsyncLocalStorage API for creating in-memory stores that remain coherent through asynchronous operations.
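(One possible shape of that, as a sketch — it requires the nodejs_compat flag, and the function names here are made up:)

```ts
// db-context.ts — per-request db via AsyncLocalStorage
import { AsyncLocalStorage } from "node:async_hooks";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

type Db = ReturnType<typeof drizzle>;
const dbStorage = new AsyncLocalStorage<Db>();

// Wrap the request handling in this once, at the top of the request
export function runWithDb<T>(connectionString: string, fn: () => T): T {
  const db = drizzle(postgres(connectionString));
  return dbStorage.run(db, fn);
}

// Import and call this anywhere downstream; no instance passing needed
export function getDb(): Db {
  const db = dbStorage.getStore();
  if (!db) throw new Error("getDb() called outside runWithDb()");
  return db;
}
```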
That's better, thank you
Do you mind explaining to me why this doesn't work?
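(The snippet is missing from the transcript; judging from the discussion below, it presumably looked roughly like this:)

```ts
// db.ts — a module-level variable set once by middleware
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

export let db: ReturnType<typeof drizzle>;

export function initDb(connectionString: string) {
  db = drizzle(postgres(connectionString));
}
```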
Where `initDb` is called in a middleware, before anything imports the db. Why does the console yield:
Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request’s handler. This is a limitation of Cloudflare Workers which allows us to improve overall performance.
From my POV, I'm building the db client once in the middleware and then importing it where it's used. This would make working with Workers much better. I mean, it works, but it sometimes breaks when two requests hit the server almost at the same time. I thought a Worker was a traditional serverless instance, where the files are initialized per request. So the first request runs the middleware and initializes the db for that specific Worker, then different files import it, and the next request does the same thing, like isolated code: req -> middleware creates db connection -> file imports it -> res | req -> middleware creates db connection -> file imports it -> res | same.
Because the `db` variable is shared across requests
It should be inside the function
But that means memory is shared across requests?
No, it means the Drizzle Instance and DB Connection are rebuilt for each request, and not reused
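(In other words, something like this sketch of the fix, where nothing is kept at module level — each call builds and returns a fresh instance:)

```ts
// db.ts — per-request version: build and return, store nothing
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

export function initDb(connectionString: string) {
  return drizzle(postgres(connectionString));
}
```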
Cloudflare Workers are designed to be super fast and lightweight. So when possible, they reuse the same isolate (kind of like a lightweight V8 sandbox) between requests. That means top-level variables can persist between invocations, but you should not rely on them for request-specific logic or state, especially for I/O objects.

🧠 What's Really Going On in Cloudflare Workers

When you do something like:

```ts
let db;
export function initDb() {
  db = drizzle(...);
}
```

You're creating a shared global variable. And while it may seem isolated per request in theory, Cloudflare does reuse the execution context between requests when it can.
Okay, if this is what you mean, I get it now 😄