Hi all,

Do we need to create a new SQL connection for every request with Hyperdrive and close it after fetching the data, or should we create a single connection instance and reuse it? The latter approach seems more reasonable to me, but it doesn't appear to work on Workers.
knickish · 6d ago
In general, Hyperdrive expects that you will create a new connection every time. Hyperdrive already maintains a warm pool of connections to your origin database that it uses to service the requests you make from your Worker.
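A minimal sketch of that per-request pattern, assuming the postgres.js driver and a Hyperdrive binding named HYPERDRIVE (the table and column names here are placeholders):

```typescript
import postgres from "postgres";

export interface Env {
  HYPERDRIVE: Hyperdrive; // Hyperdrive binding configured in wrangler config
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Create a fresh connection per request; Hyperdrive's warm pool to the
    // origin makes this much cheaper than a cold TCP + TLS + auth handshake.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      const rows = await sql`SELECT id, name FROM users LIMIT 10`;
      return Response.json(rows);
    } finally {
      // Close the connection after the response is sent, without
      // blocking the client on teardown.
      ctx.waitUntil(sql.end());
    }
  },
};
```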
Ramtin (OP) · 5d ago
Thank you for your response. Would you recommend closing the SQL connections manually after each request as well? Also, could you please advise whether the following configuration for establishing SQL connections to Hyperdrive is appropriate to use across the Worker? I've seen some examples where max is set to 1 instead of 5.
export function getSql(env: Env) {
  return postgres(env.HYPERDRIVE.connectionString, {
    max: 5,                // Use a small pool to reduce overhead
    idle_timeout: 5,       // Quickly close idle connections
    connect_timeout: 5,    // Fail fast if connection can't be established
    max_lifetime: 60,      // Limit connection lifetime to 1 minute
    onnotice: () => {},    // Suppress server notices
    onparameter: () => {}, // Suppress parameter change logs
    debug: (connection, query, parameters) => {
      console.log(`[SQL] ${query}`, parameters)
    }
  })
}
knickish · 5d ago
I think all of the options you've mentioned should be fine. I typically close the SQL connections out of habit, but they will be severed once your Worker completes regardless. Maybe somebody else knows more about this, but I'm not aware of any issues with having either 1 or >1 max connections.
Ramtin (OP) · 5d ago
Sometimes my SQL queries fail randomly and I get the following error:
PostgresError: Protocol Error: First response to flushed describe was not a ParseComplete.
I thought it might be due to not closing the Hyperdrive connection correctly, or to its configuration, but I now suspect it might be a problem with Hyperdrive itself.
AJR · 4d ago
That's so interesting. Can you share the actual queries you're using? In DM is fine, if you prefer. I was just seeing those errors come in over the weekend and haven't seen them before. I'm also going to try running our test with those settings, to see if one of them is behaving in a way I wouldn't expect.

If you can get away without it, please also consider fetch_types: false; it'll save you a lot of overhead.

I just ran through a large test suite with those settings and they worked correctly, so I think the next step would be to share your Hyperdrive ID and example queries, if you would.
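The fetch_types option mentioned above slots into the configuration from earlier in the thread; a sketch, assuming the same postgres.js setup:

```typescript
import postgres from "postgres";

export function getSql(env: Env) {
  return postgres(env.HYPERDRIVE.connectionString, {
    max: 5,
    idle_timeout: 5,
    connect_timeout: 5,
    max_lifetime: 60,
    // Skip the extra query postgres.js runs on connect to look up custom
    // type OIDs; safe when your queries only use built-in Postgres types.
    fetch_types: false,
  });
}
```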
Ramtin (OP) · 3d ago
I DMed you.