Random 403 errors on Workers outgoing fetch requests

We've been investigating a major issue on our platform for a few weeks. We integrate with multiple accounting systems and send a lot of data out to them. With our most popular integration, QuickBooks, we've been receiving a lot of 403 responses since the start of May, seemingly at random: we perform the same request several times, it fails with a 403 about four times in a row, and then it just works with a 200. We've escalated the issue to their internal tech team, but they assure us that our requests are never reaching their API; they've checked logs across multiple parts of their infrastructure. They also say they don't use Cloudflare services, so it must be our outbound request made from a Cloudflare Worker that's failing. The server is https://quickbooks.api.intuit.com. This is what one of the failing responses looks like:
{
  "body": "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body>\r\n<center><h1>403 Forbidden</h1></center>\r\n</body>\r\n</html>\r\n",
  "headers": {
    "cf-cache-status": "DYNAMIC",
    "cf-ray": "891beeccb61e9d54-DME",
    "connection": "keep-alive",
    "content-type": "text/html",
    "date": "Mon, 10 Jun 2024 19:57:51 GMT",
    "server": "cloudflare",
    "transfer-encoding": "chunked"
  },
  "status": 403,
  "statusText": "Forbidden"
}
Are we somehow hitting a limitation of Cloudflare Workers? Does anyone have advice on how we can debug this further? I've tried looking up the cf-ray in the Security Events, to no avail. I'm pretty lost with this issue.
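In case it's useful context, this is roughly how we're capturing the failures right now: a small wrapper around the outgoing fetch that logs the cf-ray and body of any 403 before retrying, so we have something to correlate with support. It's a simplified sketch, not our production code; the function name, retry count, and example usage are just illustrative.

```ts
// Debugging wrapper around the Worker's outgoing fetch (simplified sketch).
// Logs the cf-ray and body of any 403 response so it can be correlated later,
// then retries a few times since the failures appear transient.
async function fetchWithDebug(
  url: string,
  init: RequestInit,
  maxAttempts = 4,
): Promise<Response> {
  let lastResponse: Response | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 403) {
      return response;
    }
    // Clone so the body can still be consumed by the caller if needed.
    const body = await response.clone().text();
    console.log(
      JSON.stringify({
        attempt,
        status: response.status,
        cfRay: response.headers.get("cf-ray"),
        server: response.headers.get("server"),
        date: response.headers.get("date"),
        body: body.slice(0, 500),
      }),
    );
    lastResponse = response;
  }
  return lastResponse!;
}

// Hypothetical usage inside the Worker:
// const res = await fetchWithDebug("https://quickbooks.api.intuit.com/v3/...", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${token}` },
//   body: payload,
// });
```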
quambo · 5mo ago
It looks like the CF DNS is on Amazon Route 53. We use these products:
- WAF (all rules disabled now)
- Sentry integration via a tail worker (rough sketch below)
- OpenTracing with Baselime (added after the issue occurred, for better debugging)
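For context, the tail worker does something roughly like this (heavily simplified; `DEBUG_ENDPOINT` is just a placeholder here, the real setup forwards to Sentry/Baselime):

```ts
// Simplified sketch of a tail worker that forwards any producer log lines
// mentioning a 403 to an external endpoint for inspection.
export default {
  async tail(
    events: TraceItem[],
    env: { DEBUG_ENDPOINT: string },
    ctx: ExecutionContext,
  ) {
    // Collect console.log entries from the producer Worker that mention a 403.
    const suspicious = events.flatMap((item) =>
      (item.logs ?? []).filter((log) =>
        JSON.stringify(log.message).includes("403"),
      ),
    );
    if (suspicious.length > 0) {
      // DEBUG_ENDPOINT is a placeholder variable, not our actual integration.
      ctx.waitUntil(
        fetch(env.DEBUG_ENDPOINT, {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify(suspicious),
        }),
      );
    }
  },
};
```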