How to handle API rate limits across Workers on the edge?

Hey everyone. We have a tricky case, but I'm sure other people out there have figured out their own solutions to this. We build a flow builder that runs on Cloudflare Workers. The flows are really time-critical and get triggered client-side from frontends around the world.

Within a flow, a customer can make calls to a variety of API endpoints, and each endpoint has its own rate limit. Some limits are very strict: for example, only 5 API calls per second, and if you exceed the limit you get a 30-second timeout. Other APIs kill your access for days if you exceed the limit too many times. So we need to track each rate limit globally, across all Worker locations.

The only solution I can imagine at the moment is to use DO/D1 or another DB: check the current rate-limit status, make the call, update the status. This is f*cking expensive both in time and $$$. A read from PlanetScale costs us 30-200 ms, a write about 80-250 ms (depending on where the Worker runs; D1 is even slower). We can't lose 200-400 ms per call just to keep track of the rate limit :/

How did/would you handle this?
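For what it's worth, here is a minimal sketch of the token-bucket logic that a single coordination point (e.g. one Durable Object per upstream API host) could run in memory, so each check is a cheap in-process operation rather than a DB round trip. All names here (`TokenBucket`, `tryAcquire`) are illustrative, not a Cloudflare API, and the clock is passed in explicitly just to keep the sketch deterministic:

```typescript
// Sketch of per-API rate-limit state a Durable Object could keep in memory.
// Hypothetical names; not an actual Cloudflare Workers API.
class TokenBucket {
  private tokens: number;
  private lastRefillMs: number;

  constructor(
    private readonly capacity: number,    // e.g. 5 for a 5 req/s limit
    private readonly refillPerMs: number, // tokens added per millisecond
    nowMs: number,
  ) {
    this.tokens = capacity;
    this.lastRefillMs = nowMs;
  }

  // Returns true if a call may proceed now; otherwise the caller should
  // wait or queue instead of hitting the upstream API and risking a ban.
  tryAcquire(nowMs: number): boolean {
    const elapsed = nowMs - this.lastRefillMs;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Example: a "5 calls per second" limit, all checked at the same instant.
const bucket = new TokenBucket(5, 5 / 1000, 0);
const results: boolean[] = [];
for (let i = 0; i < 6; i++) results.push(bucket.tryAcquire(0));
// First 5 calls are allowed, the 6th is rejected until tokens refill.
```

The idea would be to route every outbound call for a given endpoint through the same Durable Object instance (e.g. keyed by API hostname), so the single-threaded object serializes the check without any external read/write; latency is then one DO hop instead of a PlanetScale/D1 round trip. Whether that hop is cheap enough depends on where the DO lands relative to your Workers.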