Perhaps I'm not being clear enough — when we were seeing these errors, we were doing perhaps 5 requests in a 5-minute period, which should be safe enough given the 1200-requests-per-5-minutes limit?
(Creating a thread to make the discussion easier to track across other messages).
That's helpful. Can you share your namespace ID?
here is when we used the REST API: every time a timestamp appears twice, it's because the request errored and we subsequently logged the error
sure thing!
namespace id isn't a secret, right?
Send it via DM
I don't think so; I'll verify, but for now DM should work (i.e. no one can access your namespace even if they have the ID)
sent!
as mentioned before, my hunch is that we got rate limited because of an unlucky IP assigned to us from Heroku after an app restart
that would explain why it disappeared after another restart, and also why this has been running in production, with much heavier load than what we had during this period, for several months without issue
As in an IP address that is being reused by other Heroku dynos/droplets?
I guess? ¯\_(ツ)_/¯
I think with the dynos we have we're sharing IPs
Perhaps, can you share the full log of an error?
Is it error code 429?
429
timestamp in my screenshot is not UTC btw, apologies
first error was at 10:51:31 UTC yesterday
Yes, confirming this is rate limiting that is applied at the Cloudflare REST API level (this is not specific to KV). The shared IP hypothesis would make sense. Have you considered spinning up a simple Worker that provides the same REST API as what you would need?
You could bake in some type of secret to ensure only you have access.
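For example, something like this (a minimal sketch — the binding name `MY_KV`, the header name, and the secret are placeholders, not anything specific to your setup):
```ts
// Sketch of a Worker that only serves requests carrying a shared secret,
// then reads straight from the KV binding (no REST API involved).
// "MY_KV" and "AUTH_SECRET" are illustrative names.
export interface Env {
  MY_KV: KVNamespace;
  AUTH_SECRET: string; // set via `wrangler secret put AUTH_SECRET`
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Reject anyone who doesn't present the shared secret.
    if (request.headers.get("X-Auth-Secret") !== env.AUTH_SECRET) {
      return new Response("forbidden", { status: 403 });
    }
    const key = new URL(request.url).searchParams.get("key");
    if (!key) return new Response("missing key", { status: 400 });
    const value = await env.MY_KV.get(key);
    return value === null
      ? new Response("not found", { status: 404 })
      : new Response(value);
  },
};
```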
thank you for confirming!
are you saying that a Worker does not have this rate limiting?
That's obviously not a great solution, and we're working to move all products off this general shared REST API and its rate limits, but it would work as an immediate fix
oh ofc not, just using it directly
haven't used the bulk api from a worker though
ah, the binding doesn't support bulk writes, if I understand correctly
I think this is why I ended up using the API in the first place https://developers.cloudflare.com/kv/api/write-key-value-pairs/#write-data-in-bulk
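For reference, the bulk call we make from Heroku looks roughly like this (a sketch; the env var names are illustrative, and the endpoint is the one from the docs linked above):
```ts
// Sketch of the bulk write against the REST API (the call that was being
// rate-limited). Runs on Node 18+ (global fetch).
const { CF_ACCOUNT_ID, CF_NAMESPACE_ID, CF_API_TOKEN } = process.env;

async function bulkWrite(pairs: { key: string; value: string }[]) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}` +
      `/storage/kv/namespaces/${CF_NAMESPACE_ID}/bulk`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${CF_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(pairs),
    }
  );
  if (!res.ok) throw new Error(`bulk write failed: ${res.status}`); // e.g. 429
}
```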
so what you're suggesting is just doing a single write per key, perhaps throttling so I don't go above the maximum concurrent subrequest limit?
If you can provide more context around what you're trying to do, there might be a more straightforward approach. But I was thinking of creating a Worker that accepts a list of keys (up to 1000), and does 1000 parallel individual .get(), and returns the result
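Roughly along these lines (just a sketch — assumes a JSON body with a `keys` array and a KV binding named `MY_KV`):
```ts
// Sketch of the bulk-read endpoint: accept up to 1000 keys, fan out
// individual .get() calls in parallel, return key -> value as JSON.
interface Env {
  MY_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { keys } = (await request.json()) as { keys: string[] };
    if (!Array.isArray(keys) || keys.length === 0 || keys.length > 1000) {
      return new Response("expected 1-1000 keys", { status: 400 });
    }
    const values = await Promise.all(keys.map((k) => env.MY_KV.get(k)));
    const result = Object.fromEntries(keys.map((k, i) => [k, values[i]]));
    return Response.json(result);
  },
};
```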
I'm bulk writing 1-20 keys
From Heroku (where you presumably have some way of knowing what the content of those keys) and then reading from a Worker?
I don't think it's necessarily important, but: I'm writing keys to KV for all image variants that should be allowed. Then I have a Worker that handles image requests; if a variant is not in R2, it checks KV to see whether the variant is allowed, and if it is, it uses Image Resizing to create it
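the read path looks something like this, simplified (binding names, the URL scheme, and the shape of the stored value are made up for the sketch):
```ts
// Simplified sketch of the image Worker's read path: serve from R2 if the
// variant already exists, otherwise consult the KV allowlist and resize on
// demand. "IMAGES_R2" and "ALLOWED_VARIANTS" are illustrative names.
interface Env {
  IMAGES_R2: R2Bucket;
  ALLOWED_VARIANTS: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const variantKey = new URL(request.url).pathname.slice(1);

    // 1. Already rendered? Serve straight from R2.
    const stored = await env.IMAGES_R2.get(variantKey);
    if (stored) return new Response(stored.body);

    // 2. Not in R2 — is this variant on the KV allowlist?
    const allowed = await env.ALLOWED_VARIANTS.get(variantKey);
    if (allowed === null) {
      return new Response("unknown variant", { status: 404 });
    }

    // 3. Allowed: create it with Image Resizing (cf.image options on fetch).
    const { origin, width } = JSON.parse(allowed); // assumed stored shape
    return fetch(origin, { cf: { image: { width: Number(width) } } });
  },
};
```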
I'll create another endpoint in the Worker that handles adding the values to KV as well — at least that way we're pretty much guaranteed not to be rate-limited no matter what load we'll see in the future
I don't really need the bulk writing API since it's just a max of 20 keys
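i.e. something like this on the write side (sketch — same placeholder binding as above, and the secret check from your earlier suggestion would wrap it):
```ts
// Sketch of the write endpoint: accept up to 20 key/value pairs and do
// individual .put() calls in parallel — no bulk REST API, so no shared-IP
// rate limit. Names are illustrative.
interface Env {
  ALLOWED_VARIANTS: KVNamespace;
}

async function handleWrite(request: Request, env: Env): Promise<Response> {
  const pairs = (await request.json()) as { key: string; value: string }[];
  if (!Array.isArray(pairs) || pairs.length === 0 || pairs.length > 20) {
    return new Response("expected 1-20 pairs", { status: 400 });
  }
  await Promise.all(
    pairs.map((p) => env.ALLOWED_VARIANTS.put(p.key, p.value))
  );
  return new Response("ok");
}
```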
I think that's a good plan
And like I said, long term, we're looking to provide per product instance limits that wouldn't be affected by IP address reusing on Heroku
roger!
I think this was a blessing in disguise either way, as it affected only a few users, and now you've made me aware of something that might've been a bigger issue down the road. It's not unimaginable that we could be hitting the 1200/5min limit in just a few months 🙂
thank you for your time and help, hope you have a great weekend, Thomas!
Excellent, happy to hear that, you too!