Cloudflare Developers
•Created by rohin on 1/16/2025 in #workers-observability
Tail Workers & Outgoing Request Ratelimits
is there a particular experiment you think might be helpful for us to run to give you more information?
maybe switching to have the tail worker POST to our production worker, and enqueue to GCP there? or avoid using tail workers altogether, just doing our telemetry pushing from inside the main worker, in a ctx.waitUntil?
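A minimal sketch of that second option, with telemetry pushed from the main worker via ctx.waitUntil and no tail worker at all. The `TELEMETRY_URL` binding and payload shape are placeholders, not the real setup:

```typescript
// Sketch: push telemetry from the main worker itself via ctx.waitUntil,
// so the user response is never blocked on the telemetry POST.
// TELEMETRY_URL and the payload shape are placeholders, not real config.

// Minimal stand-in for the Workers ExecutionContext type, so this compiles
// without @cloudflare/workers-types.
type Ctx = { waitUntil(promise: Promise<unknown>): void };

interface Env {
  TELEMETRY_URL: string;
}

async function pushTelemetry(url: string, payload: unknown): Promise<void> {
  try {
    await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(payload),
    });
  } catch {
    // Telemetry is best-effort: never let a push failure surface to users.
  }
}

const worker = {
  async fetch(request: Request, env: Env, ctx: Ctx): Promise<Response> {
    const started = Date.now();
    const response = new Response("ok");
    // waitUntil keeps the isolate alive until the push settles,
    // without delaying the response itself.
    ctx.waitUntil(
      pushTelemetry(env.TELEMETRY_URL, {
        url: request.url,
        durationMs: Date.now() - started,
      })
    );
    return response;
  },
};

export default worker;
```

The trade-off vs a tail worker: the push now counts against the main worker's own subrequest budget, but it also sidesteps whatever limit the tail worker is hitting.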
33 replies
> As for analytics, it definitely shouldn't taper over such a period

yeah the 2h taper must have been ingestion delay on cloudflare's end, since it did not line up with our own request counts, and eventually caught up to reality
today i do not see the same taper on 6h view -- but the spiky ratelimiting pattern for requests vs subrequests in that view still remains
np!
yeah the response body is empty unfortunately, no body or headers
(also just to be super clear, i don't expect y'all to respond right away to these. it's a community support forum after all (: ❤️ just posting stuff as i was investigating, not to instigate a response)
the best i can tell, no they're not -- all the gcp quota dashboards for this operation show us at like <0.1% of the maximum throughput
the tail worker is only doing like ~500 rps to that endpoint, while our prod worker on the openrouter.ai domain does 2-3x that without seeing any 429s
---
edit: and yeah just to confirm, adding a fetch handler and assigning a route on our zone to the tail worker did not help, as you expected. i let it sit for 15-20m, but reverted that now
thanks! it appears the issue is continuing unfortunately, but i'll check back in a few min
only thought was maybe because this tail worker is not in that zone? would it be in the workers.dev zone if it doesn't have a fetch handler/any routes attached?
and in case it makes a difference: our primary worker has routes assigned to it under that openrouter.ai domain, however the tail worker is a different worker that has no fetch handler / no routes/bindings to the openrouter.ai domain
i'll go beef up my understanding of zones in the meantime
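For reference, that producer/consumer relationship is declared on the producer (main) worker's config, not the tail worker's, which is why the tail worker itself can have no routes at all. Something like this, where `telemetry-tail-worker` is a placeholder for the tail worker's service name:

```toml
# In the PRODUCER (main) worker's wrangler.toml:
# the tail worker is attached by service name and needs no route of its own.
tail_consumers = [
  { service = "telemetry-tail-worker" }  # placeholder service name
]
```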
think this would be it:
8615a0f8e442523371429211d0a5120e
(found by going to "Account Home" > openrouter.ai domain > "Zone ID" on the sidebar)
oh my b i thought zone was synonymous with org -- one sec, i'll go look again
ah awesome! my org's account id is
056879e63aa83db17aadc76220f52953
out of curiosity, does this burst limit apply equally to both normal workers & tail workers? or does it vary somehow depending on steady-state throughput for each deployment? (iirc tail handlers were still in beta, so idk if they're nerfed in some way for now)
thanks again for taking the time here y'all ❤️
hmm interesting. that is good to know regardless!
though, I'm not sure that would necessarily explain how the act of avoiding initial requests to Datadog would cause the requests to Google to stop being rate limited 🤔
I am led to believe it's cloudflare that's forcing these 429s through some sort of burst ratelimit / anti-abuse limit, and the requests never actually make it to google, but I'm not 100% certain
---
apologies for the wall of text haha(: just wanted to make sure the full picture was clear, given how odd some of this feels
I have seen this doc: https://developers.cloudflare.com/workers/platform/limits/#request, which is what prompted me to start this line of thinking, since there are scenarios where cloudflare would be injecting these 429s artificially.
Unfortunately, I did not see any of the mentioned events logged in Security > Events. But given that the act of switching which endpoint we tried to hit first (DD vs Google) caused a change in observed 429 behavior, it seems there must be some rules/limits that I'm not yet understanding.
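If it does turn out to be a burst limit injecting 429s before requests ever leave Cloudflare, one workaround worth trying in the meantime is exponential backoff with jitter on 429 responses, to smooth the spiky subrequest bursts. A sketch, with made-up helper name and parameters, and the fetcher injected so the logic is testable without a network:

```typescript
// Sketch: retry a fetch on 429 with exponential backoff + full jitter.
// fetchWith429Backoff, maxAttempts, and baseDelayMs are illustrative names,
// not anything from the Workers runtime.
type Fetcher = () => Promise<Response>;

async function fetchWith429Backoff(
  doFetch: Fetcher,
  maxAttempts = 4,
  baseDelayMs = 100
): Promise<Response> {
  let last: Response | null = null;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    last = await doFetch();
    if (last.status !== 429) return last;
    // Exponential backoff with full jitter before the next attempt.
    const delay = Math.random() * baseDelayMs * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return last!; // still 429 after exhausting all attempts
}

export { fetchWith429Backoff };
```

This wouldn't explain the root cause, but it would confirm whether the 429s are transient bursts (retries eventually succeed) or a hard cap (they never do).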
Given what we've observed, this issue seems specific to the tail worker, as other operations using the same credentials and endpoints do not face these limits in our normal worker.
Could you help us understand why this might be happening and if there's something specific in the tail worker runtime causing these rate limits?