Cloudflare fails to mitigate DDoS attack; enabling "Verified Bots" doesn't prevent deindexing

For the past few years I believed Cloudflare was a trustworthy company and saw it as an example in the field: highly ethical, always on the edge of innovation, full of competent people, great market coverage. I even started slowly accumulating Cloudflare stock beginning in 2019 because of this. But since then, I've seen more and more articles about how Cloudflare failed people. I shrugged off the employment drama, because a company needs to stay profitable and Enterprise B2B sales is hard and demanding. I shrugged off the casino complaining about being dropped from the Business plan when they were bypassing ISP limits, hurting Cloudflare's IP reputation, and causing issues for other customers.

Now what should I believe when my own whole site has been deindexed during a critical period for us? (We're currently leading a case sanctioning the government over corruption and mismanagement of EU funds, and Google is how people found out about us.)

We got hit by a fairly heavy Layer 7 DDoS. I thought Cloudflare mitigated this out of the box, but I was wrong: most of the requests (or at least plenty to overwhelm our backend server) were getting through. Not a DoS, a DDoS: hundreds, if not thousands, of different IPs. We managed to mitigate the attack with Super Bot Fight Mode, which "detects bad bots". We allowed "Trusted Bots", which, it turns out, does not cover Google, Bing, or Yandex bots crawling over IPv6.

I haven't complained about the extra usage billed on Workers / Pages; it was negligible. I didn't complain about the extra usage on D3 either; it was negligible, around $1 per hour, and we mitigated it quickly with Super Bot Fight Mode. But I am highly skeptical that Google will keep us on page #1 after we answered 403 on our most important pages for days on end. We know we got deindexed; how Google will treat the resolution of the 403s, I'm not sure.

Screw Cloudflare. We got Cloudflare for stability and for its reputation. It seems things have changed since we started using them heavily.
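For readers hitting the same thing: one possible mitigation, sketched here and not an official recipe, is a custom WAF rule that exempts verified crawlers before bot scoring runs. Cloudflare's rule language exposes a `cf.client.bot` field that is true for bots on its verified-bots list; note that on some plans Super Bot Fight Mode may not be skippable this way, so treat this as something to test, not a guarantee:

```
# Custom rule (Security -> WAF -> Custom rules), sketch only
# Expression:
(cf.client.bot)
# Action: Skip, with the bot-fight / Super Bot Fight Mode products selected
# (the exact skip options offered vary by plan)
```

Whether this also covers crawlers arriving over IPv6 is exactly the open question in this thread, so verify against live Googlebot traffic after enabling it.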
12 Replies
Stefatorus (OP) · 4mo ago
Bet? It's a managed rule by Cloudflare. Google is a trusted bot on IPv4 but not on IPv6 ("manage definite bots").
Stefatorus (OP) · 4mo ago
(screenshot, no description)
Stefatorus (OP) · 4mo ago
And this seems to be Bing:
(screenshot)
Stefatorus (OP) · 4mo ago
(screenshot, no description)
Stefatorus (OP) · 4mo ago
Any "Trusted Bot" is trusted only if it's not on IPv6. Because ❤️ Cloudflare. And since the internet is slowly transitioning to IPv6, something Cloudflare was proud to advertise it was promoting, well, this happens: some crawlers will use IPv6 if it's supported. Tough luck for me.
Stefatorus (OP) · 4mo ago
Impact of deindexing:
(screenshot)
Stefatorus (OP) · 4mo ago
(screenshot, no description)
Stefatorus (OP) · 4mo ago
And enjoy our main page deindexed due to 403. That is Google:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.99 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
I've sent you two pictures. The one lacking a user agent is Yandex.
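As an aside for anyone deciding whether a request claiming that user agent is really Google: user-agent strings are trivially spoofable, but Google documents a reverse-DNS-plus-forward-confirmation check that works the same way for IPv4 and IPv6. A minimal Python sketch (the hostname suffixes follow Google's guidance; caching and rate limiting are omitted):

```python
import socket

# Suffixes from Google's documented crawler-verification guidance.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_hostname(hostname: str) -> bool:
    """True if a reverse-DNS name is under one of Google's crawler domains."""
    return hostname.rstrip(".").lower().endswith(GOOGLE_SUFFIXES)

def verify_googlebot(ip: str) -> bool:
    """Reverse-DNS the IP, check the domain, then forward-confirm that the
    name resolves back to the same IP. Works for IPv4 and IPv6 literals."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)  # PTR lookup
    except OSError:
        return False
    if not is_google_hostname(hostname):
        return False
    try:
        infos = socket.getaddrinfo(hostname, None)  # forward lookup
    except OSError:
        return False
    return any(info[4][0] == ip for info in infos)
```

The forward-confirmation step matters: anyone can publish a PTR record claiming `googlebot.com`, but only Google can make that name resolve back to their IP.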
Stefatorus (OP) · 4mo ago
(screenshot, no description)
Stefatorus (OP) · 4mo ago
Bottom right. Ah, gotcha, it blocked our Worker. It's Cloudflare. We have a translation proxy: Google -> Translation Proxy -> Cloudflare base docs -> Tunnel -> Infra. So from what it seems, the managed challenge was triggering for Workers requests. You'd expect it to bypass for their own infra, though.

Yes, I know the fix; the issue is that it happens at all. It's on the same zone. It's cross-site, but on the same zone, *.incorpo.ro. Wouldn't it be normal for it to fall under verified bots? My expectation, when I use a service from the same provider, is that it will not ban itself or cause issues due to poor integration with its own mitigations. The scope of Pages / Workers, and Cloudflare's current push, is to expand the services Cloudflare provides; they left beta. OK, so it's not "verified bots"; then make an "Allow workers from own zone" rule and enable it by default, as is normal and expected.

Also, it shows the source ASN as varying while the IP is from Cloudflare, and that weird behaviour triggered the managed bot rule, because the range didn't match the expected value.
Isaac McFadyen · 4mo ago
There's actually a good reason this doesn't happen by default (https://github.com/cloudflare/workerd/blob/938fd9d9f183c633f1e47686946f29fe7c638307/src/workerd/io/compatibility-date.capnp#L507), but TL;DR: if you're doing fetch(req) in your Worker and someone sends a malicious request, having it bypass all of your rules by default would result in SSRF.
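To make the SSRF risk Isaac describes concrete: a proxy that blindly forwards whatever destination the client supplies lets an attacker aim it at endpoints the edge rules were supposed to protect. A minimal, language-agnostic sketch of the standard guard, in Python with a hypothetical backend hostname:

```python
from urllib.parse import urlsplit

# Hypothetical backend hostname; in this thread's setup it would be the
# docs origin the translation proxy is allowed to reach.
ALLOWED_HOSTS = {"docs.example.com"}

def is_forwardable(url: str) -> bool:
    """Forward a client-supplied URL only if its host is explicitly allowed.
    Without a check like this, fetch(req)-style blind forwarding lets an
    attacker point the proxy at any endpoint it can reach (SSRF)."""
    host = urlsplit(url).hostname
    return host is not None and host.lower() in ALLOWED_HOSTS
```

This is why the runtime refuses to let same-zone subrequests silently bypass security rules by default: the allowlist has to be an explicit decision by the Worker's author.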
Stefatorus (OP) · 4mo ago
I read the GitHub issue. I agree, that is a real issue, and I'll add the flag so we don't get this behaviour. That in itself is not the problem. The problem was that enabling bot protection mode resolved the attack while also breaking Google, because this reflected fetch was being blocked as a bad bot. So it's not a page rule (e.g. a 401 for a specific route), it's a managed rule.

That is fine, but there is a feature for "Verified Bots". What happens is that if a verified bot accesses a reverse proxy running on Pages / Workers, and that Worker forwards the request towards the backend (also Cloudflare-proxied, via Tunnels), it triggers bot detection on your own Worker. So the issue is the reflection mitigation together with "global_fetch_strictly_public"; the solution I see here is just bypassing that.

I personally believe managed rules should not run in that setup, as it creates edge cases like this one: Google requests a URL that is public but has bot protection enabled. The request is allowed, since the IP is on the trusted list. The Worker forwards it, and seems to forward the ASN, user agent, etc., but not the IP. On public endpoint #2, bot detection then sees the ASN / user agent / etc. of the original request, but with an IP outside the trusted-bots list. Is it normal, then, for the ASN to still show as Google / Hetzner / etc.?

I agree with your last statement; what I don't understand, however, is how it could lead to IP spoofing. The origin-filter bypass is sane. Well, at that point I'd say it's not Cloudflare's fault. The logic of a Worker being able to fetch() arbitrary resources (e.g. an AI agent with internet access) makes sense, and it's good that the original flag disables such access and routes it back through the filter. It's highly unlikely for an application to allow end users to set X-Forwarded-For.

Thanks for the help in finding the flag. I'll enable it and hopefully that resolves the issue. There should be some mention of this behaviour in the docs, though.
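For anyone applying the same fix: compatibility flags are set per Worker in wrangler.toml. A sketch with illustrative name and date; see the workerd source linked above for the flag's exact semantics on your compatibility date:

```toml
# wrangler.toml of the proxy Worker (name and date are illustrative)
name = "translation-proxy"
main = "src/index.ts"
compatibility_date = "2024-08-01"
# The flag discussed in this thread; its counterpart disable flag in the
# workerd source is "global_fetch_private_origin".
compatibility_flags = ["global_fetch_strictly_public"]
```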
I know origin bindings were implemented a while ago, but I wanted to be able to bind some routes to other Workers and have my translation proxy handle that. I believe the documentation needs to mention this on the Super Bot Fight Mode page: if you're using a proxy without origin bindings, it will break. Is Super Bot Fight Mode considered part of the WAF?

Being conservative by default is not a problem; it's common practice. You'd rather have higher load than lose customers. However, we'd expect it to detect abnormal patterns: traffic spiking significantly on a single URL, multiple consecutive requests on the same URL by the same IP, and so on. Some of these we are able to filter out ourselves (e.g. we can add rate limits), but those lack granularity unless you go up to Business / Enterprise.

I believe I'll resolve this if we can resolve the issue with the middleware not filtering itself out. We know that we can just enable bot protection mode, and it'll have minor consequences apart from the managed challenge, which users are used to.

---

Appreciate the fast response. I'll try to resolve it, and hopefully next time it won't break. For the future, it might be a good idea not to trust any flag (including ASN) sent by Workers, to have homogeneous behaviour.
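On the bindings point: one documented way to keep Worker-to-Worker hops from re-entering the zone's HTTP pipeline (and thus its bot rules) is a service binding, so the proxy calls the target Worker directly instead of fetching its public hostname. A sketch, all names illustrative:

```toml
# wrangler.toml of the translation-proxy Worker (names are illustrative)
[[services]]
binding = "DOCS"        # exposed on the env object as env.DOCS
service = "base-docs"   # the target Worker's name in the same account
```

In the proxy code, `env.DOCS.fetch(request)` then reaches the docs Worker without the subrequest ever looking like public traffic to the WAF.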