Too many subrequests with Cloudflare Pages
To my knowledge, with the new Standard usage model, there is no longer a limit on the number of subrequests. So why am I getting a "Too many subrequests" error for some of my Pages Functions?
there's still a limit, you're looking at it right there..?
Ok, my bad; I misread the documentation. For Standard, there is a limit of 1,000 subrequests per request on the paid plan
Yes, indeed, I was going to say that
All Standard did is merge Bundled and Unbound into one tier with a maximum of 30 s of billable CPU time (instead of the 50 ms Bundled got, and instead of Unbound billing on duration). Most of the other limits were inherited from Unbound
This thread can be marked as solved 👍 (I can't find how to do it myself)
ah ok, so are you saying you don't think you're hitting it then? It may be worth mentioning that there is also a limit of 6 concurrent subrequests per request
I'm hitting it in production. However, when I test locally with wrangler, I don't run into any such limit. Do you have an idea why?
It's not enforced in local wrangler; most of the limits aren't, including CPU time
The weird thing is that I'm pretty sure I make fewer than 50 subrequests; when I log them I only see 36. But on Cloudflare I get the "Too many subrequests" error. How could I debug further or understand why I'm hitting this limit?
Are you on Standard, or the free plan? Could you be doing too many concurrently per request?
I am on the free plan. And yes, I'm doing a lot of requests in parallel. Would Cloudflare throttle them or just fail?
Docs are here: https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections
the key part is this:
If the system detects that a Worker is deadlocked on open connections — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker. If the Worker later attempts to use a canceled connection, an exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for.
ex: if you're awaiting on a ton of them, it detects the deadlock and starts killing connections
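One way to stay under the concurrency limit is to fire requests in bounded batches instead of all at once. A minimal sketch (not from the thread; `fetchInBatches` and the injectable `fetcher` parameter are made up for illustration):

```javascript
// Keep at most `limit` fetches in flight at once, to stay under the
// Workers simultaneous-open-connections limit of 6.
// `fetcher` is injectable for testing; in a Worker you'd use the real fetch.
async function fetchInBatches(urls, limit = 6, fetcher = fetch) {
  const results = [];
  for (let i = 0; i < urls.length; i += limit) {
    const batch = urls.slice(i, i + limit);
    // Fully await each batch before opening the next set of connections,
    // so no more than `limit` connections are open at any one time.
    results.push(...(await Promise.all(batch.map((u) => fetcher(u)))));
  }
  return results;
}
```

Results come back in the original order, since `Promise.all` preserves input order within each batch.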
Okay, but I should NOT get a "Too many subrequests" error from that, no?
I don't believe so, not from just too many concurrent requests, yeah; you'd get other errors from the requests failing
Do you have an idea how I could debug further and understand why I get the "Too many subrequests" error?
There are third-party libraries like otel which could potentially help, but they do the same thing you could do yourself: log each request and see
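The "log each request" approach can be done with a small wrapper. A sketch (the `countingFetch` name is made up):

```javascript
// Wrap a fetch-like function to count and log every outgoing subrequest.
function countingFetch(inner) {
  let count = 0;
  const wrapped = (input, init) => {
    count += 1;
    const url = typeof input === "string" ? input : input.url;
    console.log(`subrequest #${count}: ${url}`);
    return inner(input, init);
  };
  wrapped.count = () => count;
  return wrapped;
}
```

In a Worker you could install it at the top of your function with `globalThis.fetch = countingFetch(fetch)`, so subrequests made inside third-party dependencies show up in the count too, not just your own calls.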
It may also be worth noting that the Workers fetch will automatically follow redirects, which count towards your subrequest limit too. Make sure you're requesting https:// URLs and not being redirected
You could also change the redirect behavior to manual if you think that could be the case: https://developers.cloudflare.com/workers/runtime-apis/request/#properties
Thanks! Investigating; it seems related to some dependencies that I've upgraded
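For reference, a minimal sketch of the manual-redirect approach (the `fetchNoRedirect` helper and injectable `fetcher` parameter are made up; in a Worker you'd use the real fetch):

```javascript
// Fetch with redirect following disabled, so a 301 from http:// to
// https:// fails loudly instead of silently costing an extra subrequest.
async function fetchNoRedirect(url, fetcher = fetch) {
  const res = await fetcher(url, { redirect: "manual" });
  if (res.status >= 300 && res.status < 400) {
    // The Location header shows the URL you should be requesting directly.
    console.log(`redirected: ${url} -> ${res.headers.get("Location")}`);
  }
  return res;
}
```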