Prisma · 2mo ago
Shrinjay

Prisma Accelerate using >100 connections when configured to use 10

I have an application configured to use separate prisma accelerate and PostgreSQL clients. The PostgreSQL client connects to a Supavisor connection pooler. This is resulting in me running out of connections. Using select * from pg_stat_activity I see that there are >100 connections from Prisma Accelerate in state "idle" with wait_event "ClientRead". I have the connection limit set to 10 yet it is using 100 - not quite sure how that happens.
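For reference, here's roughly how I pull that breakdown (a sketch through the direct, non-Accelerate client; the grouping just uses standard pg_stat_activity columns):

import { PrismaClient } from "@prisma/client";

// The direct PostgreSQL client, not the Accelerate one
const prisma = new PrismaClient();

async function main() {
  // Group pg_stat_activity to see where connections come from
  // and what state they are in
  const rows = await prisma.$queryRaw<
    {
      application_name: string;
      state: string | null;
      wait_event: string | null;
      count: bigint;
    }[]
  >`
    SELECT application_name, state, wait_event, count(*) AS count
    FROM pg_stat_activity
    GROUP BY application_name, state, wait_event
    ORDER BY count DESC
  `;
  for (const r of rows) {
    console.log(r.application_name, r.state, r.wait_event, Number(r.count));
  }
}

main().finally(() => prisma.$disconnect());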
24 Replies
Prisma AI Help · 2mo ago
You're in no rush, so we'll let a dev step in. Enjoy your coffee, or drop into #ask-ai if you get antsy for a second opinion!
Nurul · 2mo ago
Hey @Shrinjay, I responded to your ticket. We can continue the conversation here if you like. Is this happening in the environment named "fundly-prd"?
Shrinjay (OP) · 2mo ago
Yes it is - sorry, forgot to use reply - yes, this absolutely is the environment.
Shrinjay (OP) · 2mo ago
In case it helps, here's the pg_stat_activity output
Shrinjay (OP) · 2mo ago
and it's still climbing, up to 80 now
Nurul · 2mo ago
Thanks for sharing the output, I am taking a look. I do see that there are 10 Query Engines associated with your environment; each holding 10 connections would lead you to have 100 connections. I am checking the database connection limit now.
Shrinjay (OP) · 2mo ago
how would we have 10 query engines?
Nurul · 2mo ago
I can confirm that the database connection limit is set to 10.
Shrinjay (OP) · 2mo ago
We have two Heroku processes, each of which declares PrismaClient once.
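Roughly like this in each process (a sketch; ACCELERATE_URL is just our env var name for the prisma:// connection string):

import { PrismaClient } from "@prisma/client";
import { withAccelerate } from "@prisma/extension-accelerate";

// Created once per process at module load.
// ACCELERATE_URL holds the prisma://accelerate.prisma.io/?api_key=... string.
export const prisma = new PrismaClient({
  datasourceUrl: process.env.ACCELERATE_URL,
}).$extends(withAccelerate());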
Nurul · 2mo ago
No, the Query Engine is an internal component responsible for handling incoming queries. Each Query Engine can hold a maximum of 10 connections.
Shrinjay (OP) · 2mo ago
Right, so when does a new one get instantiated?
Nurul · 2mo ago
As you have set the connection limit to 10, you should have only 1 Query Engine. If you had set the connection limit to 100, then you would have 10 Query Engines. QEs are spawned dynamically based on incoming traffic. Let me quickly check with our Accelerate team on why your environment has 10 engines.
Shrinjay (OP) · 2mo ago
Interesting - so even if I have one prisma client instance, it could instantiate multiple prisma query engines in accelerate? I'm currently up to 126 connections from prisma accelerate
Nurul · 2mo ago
Interesting - so even if I have one prisma client instance, it could instantiate multiple prisma query engines in accelerate?
Yes, correct. I have raised it with our Accelerate team. I'll get back to you shortly
Shrinjay (OP) · 2mo ago
so is the connection limit I set in accelerate on a per-query-engine basis? if I have a connection limit of 20, will prisma spawn 10 query engines depending on load, and then open 200 connections? Or do you just open 1 query engine per 10-connection increment, so if I specify 20 connections, I get 2 query engines opened by accelerate?
Nurul · 2mo ago
Or do you just open 1 query engine per 10-connection increment, so if I specify 20 connections, I get 2 query engines opened by accelerate?
This is correct.
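So as a rule of thumb (illustrative arithmetic, not an actual Prisma API):

// One Query Engine per 10-connection increment;
// each Query Engine holds up to 10 database connections.
function queryEngines(connectionLimit: number): number {
  return Math.ceil(connectionLimit / 10);
}

queryEngines(10);  // 1 QE   -> up to 10 connections
queryEngines(20);  // 2 QEs  -> up to 20 connections
queryEngines(100); // 10 QEs -> up to 100 connections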
Shrinjay (OP) · 2mo ago
right, so the 10 query engines are being opened by Prisma due to load, not by my source code?
Nurul · 2mo ago
Can you try redeploying your app by making a small change to your schema.prisma file? For example, you can just add a comment like this:
// This is a post
model Post {
  id        Int     @id @default(autoincrement())
  title     String
  content   String?
  published Boolean @default(false)
  author    User?   @relation(fields: [authorId], references: [id])
  authorId  Int?
}
This would cause the schema hash to change and would force QEs to reset. This should ideally fix the issue.
Shrinjay (OP) · 2mo ago
ok - I don't think that sounds like a long-term fix though. Why is Accelerate not abiding by the connection limit we set for it? Is this something we need to do every time I change pool settings? I deployed the change and am observing for the same behaviour again.
Nurul · 2mo ago
I'll respond to your queries shortly. The number of connections should gradually come down. I am observing this on our end.
Shrinjay (OP) · 2mo ago
it's holding at 4 connections now, but that's because I restarted our db; it was not coming down otherwise. The behaviour before was that I would restart the db and then the number of connections would start climbing again. But it seems to be holding at 4 now, which is good; previously it would rapidly climb to 20 and then 30 and then 80.
Nurul · 2mo ago
It should not increase now, hopefully. I see that there is only 1 Query Engine now, as expected. I am gathering details from our team on what happened, how this could have been prevented, and what steps we are going to take to prevent it.
Shrinjay (OP) · 2mo ago
Ok - thank you - please keep me posted
Nurul · 2mo ago
Okay, so firstly, why there were 100 connections: internally, Prisma Accelerate can spin up multiple Query Engines (QEs) as traffic increases, and each QE can hold up to 10 connections. At some point, you had your limit set to 100, which caused Accelerate to scale up to 10 QEs. Even though you lowered the limit afterward, Accelerate didn't immediately terminate the active QEs, in order to avoid any disruption to your ongoing production traffic. As a result, you ended up with multiple QEs that each kept 10 connections open, leading to over 100 idle connections.

Why did a schema change reset the QEs: when you push a new schema or upgrade the Prisma version, Accelerate treats this as a "safe point" to provision a fresh connection pool. The older QEs are then gradually retired once they finish any ongoing work, which re-aligns your total connections with the new limit. That's why the small schema change triggered a reset that forced Accelerate to apply the correct pool size limit.

Steps we are taking to prevent this: based on this incident, we decided that we should simply reset the scaling history whenever the connection limit is changed. It won't immediately free the excess connections, but it will reconcile to the new limit, preventing issues like this. We have created an internal issue to tackle this, and I'll inform you as soon as the fix is shipped.

I want to sincerely thank you for your assistance in helping us debug this issue, and we apologise for the trouble 🙏
