Complexity of scaling D1 in production with current limits? (10GB/db)

I see that D1 has a maximum size of 10GB per database? That seems like it would get pretty complicated to use in production. Sure, you could use one database per table of data, assuming you're only ever going to have 10GB of posts, for example. But what if you need more? I guess if you could see in advance that each user might need up to 10GB, you could create one database per user. The current limit is 50,000 databases on the paid plan, but what if you have more than 50,000 users? I guess you can request a limit increase, but they can't guarantee that increase... It just seems quite complicated. I'd like to use Cloudflare if there were a clear solution, but this uncertainty worries me.

https://developers.cloudflare.com/d1/platform/limits/

Thanks all
James
James2mo ago
You’re completely right. D1 advocates for horizontal building as you’ve noted, where each user or app gets their own DB. While this is super neat in theory, it’s not how most apps are built in practice, and Cloudflare doesn’t provide tools to make this design easy. Dynamic bindings don’t really exist outside of the Workers HTTP API, which means manually managing your Worker and its bindings with your own layer. And once you do hit the inevitable limits, there’s not really a solution other than sharding your data even further.

All of this means people need to think about their database design incredibly early, and about exactly how they’re going to build for Cloudflare instead of just building, which is super unfortunate in my opinion. I shared a comment recently that lays out some more D1 limitations too, if this is of interest: https://www.reddit.com/r/CloudFlare/s/I2ssI8ldvW

If you want my honest recommendation, go with something more traditional like a Postgres DB hosted with a major cloud like AWS or a provider like Neon. Connecting to it from Workers is easy nowadays, and you can even benefit from smart query caching with Hyperdrive, avoiding all of D1’s limitations entirely.
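For reference, here's roughly what that Worker-to-Postgres connection looks like through Hyperdrive, using the postgres.js driver. This is just a sketch: the HYPERDRIVE binding name and the posts table are placeholders, not anything from your setup.
```ts
// Sketch only: a Worker querying an external Postgres DB through a Hyperdrive binding.
// Assumes a binding named HYPERDRIVE in wrangler.toml and the "postgres" (postgres.js) package installed.
import postgres from "postgres";

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Hyperdrive gives the Worker a connection string pointing at its pooled/cached endpoint,
    // instead of connecting to the origin database directly.
    const sql = postgres(env.HYPERDRIVE.connectionString);

    // Placeholder query against a hypothetical "posts" table.
    const rows = await sql`SELECT id, title FROM posts LIMIT 10`;

    // Close the connection after the response has been sent.
    ctx.waitUntil(sql.end());

    return Response.json(rows);
  },
};
```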
OP2mo ago
Wonderful response @James, thank you. I like the idea of using a PostgreSQL database but still using Cloudflare's R2 and KV storage. That would be pretty easy and convenient! I do really hope they get the issues with D1 ironed out. I like SQLite a lot, and it's pretty cool that it's being used like this.
James
James2mo ago
Yep that’s very similar to what I do! KV and R2 are great for their respective use-cases, paired with a traditional PG database for everything else 😀
OP2mo ago
Awesome :) appreciate your help again
1984 Ford Laser
1984 Ford Laser2mo ago
I would have thought the basic suggestion here would be Durable Objects with SQLite backend? Which is essentially D1 without the batteries
James
James2mo ago
SQLite inside Durable Objects does remove some of the limitations, especially around horizontal scaling with dynamic bindings, but the total data limits still don't change. And it's a very unique way of building out applications. If folks want to build for Cloudflare, with these limitations and specific requirements in mind, DO SQLite is a great choice, but in my experience, if you just want to build, a more traditional DB will work out better.
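To make that concrete, here's a rough sketch of the per-tenant pattern with the SQLite storage API inside a Durable Object. The class name and schema are made up for illustration, and it assumes the SQLite backend is enabled for the class via a migration.
```ts
// Sketch only: one SQLite-backed Durable Object per user/tenant.
import { DurableObject } from "cloudflare:workers";

export class UserStore extends DurableObject {
  // Each DO instance gets its own private SQLite database,
  // subject to the per-object storage limit.
  async addPost(title: string) {
    this.ctx.storage.sql.exec(
      "CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)"
    );
    this.ctx.storage.sql.exec("INSERT INTO posts (title) VALUES (?)", title);
  }

  async listPosts() {
    // Hypothetical query against the table created above.
    return this.ctx.storage.sql
      .exec("SELECT id, title FROM posts ORDER BY id")
      .toArray();
  }
}
```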
Chaika
Chaika2mo ago
Durable Object SQLite is still in beta and currently has a limit of 1 GB per DO as well
Dean
Dean2mo ago
We had similar issues. We ended up using D1 by default, and if a database starts to get close to 10GB it gets copied over to Turso, which allows for larger limits.
Silvan
Silvan2mo ago
Even though I love Turso and PlanetScale, it's hard to recommend a non-Postgres DB on CF at the moment due to the lack of Hyperdrive support. But I'm sure we're getting there 🙏🏻
Jonathan
Jonathan2mo ago
SQLite in Durable Objects Beta
1984 Ford Laser
1984 Ford Laser2mo ago
Is there an echo in here?
