How not to exceed the max connections limit in the DB in a Serverless env like AWS lambda?

What is the recommended way to use Drizzle in a serverless environment like AWS Lambda, which can scale out almost without bound, so that we don't exceed the database's max connection limit and throttle or crash the DB?

Context: AWS Lambda spins up a new runtime instance for every concurrent request when there is no warm instance free to serve it. Because Lambdas can scale out quickly, each spawned instance ends up opening its own connection to the DB when Drizzle is initialized outside the handler function, as the docs recommend for connection reuse, shown below.
const databaseConnection = ...;
const db = drizzle(databaseConnection);
const prepared = db.select().from(...).prepare();

// AWS handler
export const handler = async (event: APIGatewayProxyEvent) => {
  return prepared.execute();
};
Question: If we move the connection creation and Drizzle initialization steps inside the handler and close the connection after each use, would that significantly increase the response time of each request, since the connection and prepared statements are no longer cached?
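Roughly what I mean, as a sketch (using the node-postgres driver; the users table and DATABASE_URL env var below are just placeholders):

import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, text } from "drizzle-orm/pg-core";
import { Client } from "pg";
import type { APIGatewayProxyEvent } from "aws-lambda";

// placeholder table, only for illustration
const users = pgTable("users", {
  id: serial("id").primaryKey(),
  name: text("name"),
});

// connect, query, and disconnect on every invocation
export const handler = async (event: APIGatewayProxyEvent) => {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const db = drizzle(client);
    return await db.select().from(users);
  } finally {
    await client.end(); // release the connection immediately
  }
};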
Would using AWS RDS Proxy help in any way? Are there other strategies to avoid this problem in a serverless environment?
2 Replies
pandareaper · 9mo ago
If we move the connection creation and Drizzle initialization steps inside the handler and close the connection after each use
This won't help: each concurrent instance still needs its own connection while it runs, so you don't reduce the peak connection count, and you lose the benefit of reuse, since Lambda keeps your function warm between invocations. Don't do this.
Would using AWS RDS Proxy help in any way?
RDS Proxy will most definitely help here. Its greatest strength is multiplexing the many connections your app wants to open onto the smaller pool of connections the DB can actually provide. It comes at a cost, of course.
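On the application side nothing about the Drizzle setup changes, you just point the driver at the proxy endpoint instead of the DB host. A sketch, where the endpoint, database name, and user are placeholders:

import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";

// point pg at the RDS Proxy endpoint instead of the DB instance
const pool = new Pool({
  host: "my-app-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com", // placeholder proxy endpoint
  database: "app",   // placeholder
  user: "app_user",  // placeholder
  password: process.env.DB_PASSWORD,
  ssl: { rejectUnauthorized: true }, // RDS Proxy is typically configured to require TLS
  max: 1, // one connection per warm Lambda instance; the proxy does the real pooling
});

export const db = drizzle(pool);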
Are there other strategies
If cost is a concern and you just want to keep your DB from falling over, reserved concurrency on your function can be an easy mitigation; the downside is that it caps how far you can scale.
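For example, with the CDK it's a single property on the function definition (a sketch; the construct names, asset path, and the cap of 20 are placeholders you'd size against your DB's max_connections):

import { App, Stack } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new App();
const stack = new Stack(app, "ApiStack");

// cap concurrent instances, which in turn caps concurrent DB connections
new lambda.Function(stack, "ApiHandler", {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("dist"), // placeholder bundle path
  reservedConcurrentExecutions: 20,    // placeholder cap
});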
Joseph Justus (OP) · 9mo ago
Cost is not a concern, but I am not convinced RDS Proxy will help here, especially because of connection pinning: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-managing.html#rds-proxy-pinning When a session gets pinned, each warm Lambda still holds on to its underlying DB connection, so once the Lambdas scale out and there are more active instances than connections in the proxy's pool, the app would still be throttled. Thoughts?