Thank you!!! ddbb159c-feb2-4ff0-a86c-022d0d3b336a
Thanks so much. It's the production DB for the product I'm building, and I've had a few hundred reports in the past few minutes. This escalation means a lot to me!
Thanks, I've escalated. I don't have an ETA sadly, but hopefully soon.
I'm not sure if you'd be able to answer this, but is there a way I can duplicate a D1 DB? That way I could at least point to a responsive instance.
You could try exporting and importing, but I imagine if your DB is throwing errors, exporting will too
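If export fails too, another rough fallback, assuming the broken DB is still at least partially readable, is to copy rows between two bindings from inside a Worker. Purely a sketch: `OLD_DB`/`NEW_DB` and the `users` table below are placeholder names, not anything from your setup.

```ts
// Sketch: copy one table from a struggling D1 database into a fresh one.
// OLD_DB and NEW_DB are hypothetical bindings declared in wrangler.toml,
// and the new database must already have the same schema applied.
// (Types like D1Database come from @cloudflare/workers-types.)
export interface Env {
  OLD_DB: D1Database;
  NEW_DB: D1Database;
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // Read everything from the source table; this may also fail if the DB is unhealthy.
    const { results } = await env.OLD_DB
      .prepare("SELECT id, email FROM users")
      .all<{ id: number; email: string }>();

    // Re-insert the rows into the destination database as a single batch.
    const inserts = results.map((row) =>
      env.NEW_DB
        .prepare("INSERT OR REPLACE INTO users (id, email) VALUES (?, ?)")
        .bind(row.id, row.email)
    );
    if (inserts.length > 0) {
      await env.NEW_DB.batch(inserts);
    }

    return new Response(`Copied ${results.length} rows`);
  },
};
```

You'd still need to create the schema on the new database first, e.g. by running your schema file against it with `wrangler d1 execute`.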
Any updates on this? I've noticed others reporting the same in the forums, but there's nothing on the incident status page.
It was ack'd by the team, but no updates yet, sorry. If there are others you can point me to who are seeing this, I can raise it higher.
Honestly, if I were you, I'd look to deploy elsewhere if you can. I can't in good faith recommend D1 for production use today. It's quite likely this won't be addressed until Monday, and then only with some manual intervention from the team.
Really, that's interesting to note. Thanks! I assumed it would be better than any other provider given the time-travel feature, but you reckon stability is an issue?
Because it's not noted as an "unstable" product afaik
Do you see some time in the future when you would recommend it?
It's not noted as unstable, nope, but you won't have to search far in this channel to find similar reports on a regular basis. Not to mention recurring errors: https://discord.com/channels/595317990191398933/992060581832032316/1300948096964100097
I’m hopeful the team can spend some time improving it to really be production ready, but I don’t know what that timeline will look like sadly.
Sure, I appreciate that. I wish this sentiment were communicated more transparently by the team.
It seems like so many features are announced as production-ready when they haven't been thoroughly tested.
Similar to how the new node_compat_v2 completely broke people's projects when it was released and announced for use
I do too, and for what it’s worth I’ve raised a bit of a stink about this in some private channels since we see lots of issues like this every single week.
Appreciate it 🙏 keep preaching!
I rely on it so much because everything is dirt cheap and scales infinitely.
But I guess sometimes it can be too good to be true
Hi @Moishi,
`D1 is overloaded` errors are separate from errors caused by D1 being unavailable or unreachable. A single D1 database is not expected to scale infinitely; overloaded errors indicate that your throughput exceeds the limit of a single database (we see 500-1000 RPS depending on workload). D1 is designed for a horizontal scale-out model with multiple databases, as opposed to vertical scaling.
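For what it's worth, a rough sketch of what that scale-out pattern can look like in a Worker; the `DB_SHARD_*` bindings and the `orders` table are made-up names for illustration, not anything from this thread:

```ts
// Sketch of horizontal scale-out: route each user to one of several D1
// databases instead of sending all traffic to a single database.
// DB_SHARD_0..DB_SHARD_3 are hypothetical bindings declared in wrangler.toml.
export interface Env {
  DB_SHARD_0: D1Database;
  DB_SHARD_1: D1Database;
  DB_SHARD_2: D1Database;
  DB_SHARD_3: D1Database;
}

// Stable, cheap hash so the same user always lands on the same shard.
function pickShard(env: Env, userId: string): D1Database {
  const shards = [env.DB_SHARD_0, env.DB_SHARD_1, env.DB_SHARD_2, env.DB_SHARD_3];
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return shards[hash % shards.length];
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const userId = new URL(req.url).searchParams.get("user") ?? "anonymous";
    const db = pickShard(env, userId);
    const { results } = await db
      .prepare("SELECT * FROM orders WHERE user_id = ?")
      .bind(userId)
      .all();
    return Response.json(results);
  },
};
```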
I don't think this is the case here, Vy. It's been locked up and inaccessible for over 24 hours now.
This kind of response is honestly pretty disappointing and an ongoing issue with D1. Telling a user "this is how the product works" while their DB is down and inaccessible seriously lacks empathy. I suspect Moishi's DB needs to be kicked to another metal due to noisy neighbours. This isn't even the first, second, or third time we've seen this recently.
I'm trying to confirm whether the DB is unreachable. I'm checking with the eng team, but overloaded errors should not be returned if the database is unavailable.
Maybe they shouldn’t be, but they have for many users now.
Thanks for looking into it.
CUSTESC-46607, if you need it
Hey James. I'm not able to see what you are describing on our side. The DB linked in the CUSTESC seems to have exhibited a few errors today (morning UTC), but nothing that looks persistent. Our logs are heavily sampled, so there is a possibility that we are missing something.
I unfortunately don't know when @Moishi will be around next, but I suspect they would have mentioned if it was working again. Can you see successful requests, or test against it? If it's still in this broken state and your monitoring isn't able to show that, it sounds like some serious improvements are needed there.
I am seeing successful requests from his account for the majority of the day (note log scale).
Thanks. I guess we’ll have to wait for @Moishi to report back. Those numbers seem a bit high though - are you sure that’s filtering correctly? 🤔
(It’s also usually not recommended to post full internal grafana screenshots like that publicly fyi)