1343 points by Hold-And-Modify | 11 comments

Hello.

Cloudflare's Browser Integrity Check/Verification/Challenge feature, used by many websites, is denying access to users of non-mainstream browsers like Pale Moon.

User reports began on January 31:

https://forum.palemoon.org/viewtopic.php?f=3&t=32045

This situation occurs at least once a year, and there is no easy way to contact Cloudflare. Their "Submit feedback" tool yields no results. A Cloudflare Community topic was flagged as "spam" by members of that community and was promptly locked with no real solution, and no official response from Cloudflare:

https://community.cloudflare.com/t/access-denied-to-pale-moo...

Partial list of other browsers that are being denied access:

Falkon, SeaMonkey, IceCat, Basilisk.

A 2022 Hacker News post about the same issue brought attention and prompted Cloudflare to patch it quickly:

https://news.ycombinator.com/item?id=31317886

A Cloudflare product manager declared back then: "...we do not want to be in the business of saying one browser is more legitimate than another."

As of now, there is no official response from Cloudflare. Internet access is still denied by their tool.

ai-christianson ◴[] No.42954365[source]
How many of you all are running bare metal hooked right up to the internet? Is DDoS or any of that actually a super common problem?

I know it happens, but also I've run plenty of servers hooked directly to the internet (with standard *nix security precautions and hosting provider DDoS protection) and haven't had it actually be an issue.

So why run absolutely everything through Cloudflare?

replies(20): >>42954540 #>>42954566 #>>42954576 #>>42954719 #>>42954753 #>>42954770 #>>42954846 #>>42954917 #>>42954977 #>>42955107 #>>42955135 #>>42955479 #>>42956166 #>>42956201 #>>42956652 #>>42957837 #>>42958038 #>>42958248 #>>42963387 #>>42964892 #
nijave ◴[] No.42954917[source]
Small/medium SaaS. Had ~8 hours of 100k reqs/sec last year when we usually see 100-150 reqs/sec. Moved everything behind a Cloudflare Enterprise setup and ditched AWS Client VPN (OpenVPN) for Cloudflare WARP.

I've only been here 1.5 years, but it sounds like we usually see one decent-sized DDoS a year, plus a handful of other "DoS" incidents, usually AI crawler extensions or third parties calling too aggressively.

There are some extensions/products that create a "personal AI knowledge base": they use the customer's login credentials and scrape every link once an hour. Some of those links are really resource-intensive data or report requests that are very rare in real usage.

replies(1): >>42955030 #
1. gamegod ◴[] No.42955030[source]
Did you put rate limiting rules on your webserver?

Why was that not enough to mitigate the DDoS?
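
For concreteness, "rate limiting rules on your webserver" usually means something like a per-client token bucket keyed on the source IP. Below is a minimal sketch with made-up thresholds; nothing in it comes from the thread, and the replies that follow explain why this alone doesn't hold up against a distributed attack.

    # Per-IP token bucket: each client IP gets its own bucket of tokens that
    # refills at a fixed rate; a request that finds an empty bucket is rejected.
    # RATE and BURST are illustrative numbers only.
    import time
    from collections import defaultdict

    RATE = 10    # tokens refilled per second, per IP (hypothetical)
    BURST = 20   # maximum burst size per IP (hypothetical)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(ip: str) -> bool:
        """Return True if a request from `ip` is within its limit."""
        b = buckets[ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False   # over the per-IP limit; the server would answer 429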

replies(4): >>42955331 #>>42955430 #>>42955462 #>>42957537 #
2. danielheath ◴[] No.42955331[source]
Not the same poster, but the first "D" in "DDoS" is why rate limiting doesn't work: attackers these days usually have a _huge_ pool (tens of thousands) of residential IPv4 addresses to work with, so each address can stay well under any sane per-IP limit while the aggregate load is still enormous.
replies(2): >>42958273 #>>42960174 #
3. ◴[] No.42955430[source]
4. hombre_fatal ◴[] No.42955462[source]
That might have been good for preventing someone from spamming your HotScripts guestbook in 2005, but not much else.
5. nijave ◴[] No.42957537[source]
We had rate limiting with Istio/Envoy, but Envoy was using 4-8x its normal memory processing that much traffic and crashing.

The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

Challenges work much better since they use cookies or other metadata to establish that a client is trusted, then let requests pass. This stops bad clients at the first request, but you need something more sophisticated than a webserver with basic rate limiting.
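
Roughly, the cookie mechanism described here can be pictured as a signed "clearance" token issued once a client passes a challenge. Everything in this sketch (the HMAC scheme, the TTL, the fingerprint) is a guess at the general shape, not Cloudflare's actual implementation:

    # Sketch of a challenge gate: clients without a valid "clearance" cookie are
    # served a challenge; clients that already passed one carry a signed cookie
    # and are let through. Cookie format, TTL and key handling are hypothetical.
    import hashlib
    import hmac
    import time

    SECRET = b"rotate-this-key"   # server-side signing key (assumption)
    TTL = 30 * 60                 # clearance lifetime in seconds (assumption)

    def issue_clearance(fingerprint: str) -> str:
        """Called once a client has solved the challenge."""
        expires = str(int(time.time()) + TTL)
        msg = f"{fingerprint}|{expires}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{expires}.{sig}"

    def is_cleared(fingerprint: str, cookie: str | None) -> bool:
        """True if the request carries a valid, unexpired clearance cookie."""
        if not cookie or "." not in cookie:
            return False
        expires, sig = cookie.split(".", 1)
        if not expires.isdigit() or int(expires) < time.time():
            return False
        msg = f"{fingerprint}|{expires}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

A client that never solves the challenge never gets a valid cookie, so it is stopped at its first request regardless of which IP it arrives from.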

replies(1): >>42959462 #
6. chillfox ◴[] No.42958273[source]
They were talking about logged-in accounts, so you would group the rate limiting by account and not by IP address.
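
A sketch of what that grouping change might look like: the limiter key is derived from the authenticated account when there is one, falling back to the source IP otherwise. The request object and its field names are hypothetical:

    # Choose the rate-limit bucket per request: authenticated traffic is grouped
    # by account, so a scraper reusing one login across many IPs still lands in
    # a single bucket. Unauthenticated traffic falls back to the source IP.
    def limit_key(request) -> str:
        user_id = getattr(request, "user_id", None)   # hypothetical attribute
        if user_id is not None:
            return f"account:{user_id}"
        return f"ip:{request.remote_addr}"

    # limit_key(request) would then feed the same token-bucket logic as in the
    # earlier sketch, just with a different grouping key.

As nijave notes further down, this only helps when the traffic is authenticated at all.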
replies(1): >>42964556 #
7. Aachen ◴[] No.42959462[source]
> The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

So how is Cloudflare supposed to distinguish legitimate new visitors from new attack IPs if you can't?

Because if the answer is "they can't", that matches my experience as a Cloudflare user perfectly.

replies(1): >>42964552 #
8. rixed ◴[] No.42960174[source]
Is tens of thousands a big number again?
replies(1): >>42978827 #
9. nijave ◴[] No.42964552{3}[source]
Captchas/challenges and tracking user/IP reputation across the web.

They also compute IP and request risk scores using the massive piles of data they've collected.

10. nijave ◴[] No.42964556{3}[source]
These were unauthenticated requests: GETs to the login page.
11. danielheath ◴[] No.42978827{3}[source]
Depends. Ten thousand what?

I work on a "pretty large" site (it was in the Alexa top 10k sites, back when that was a thing), and we see about 1500 requests per second. That's well over 10k concurrent users.

Adding 10k requests per second would almost certainly require a human to respond in some fashion.

One request per second per IP is a low enough threshold that, if we banned IPs which exceeded it, we'd be blocking home users who opened a couple of tabs at once. However, since e.g. universities / hospitals / big corporations typically use a single egress IP for an entire facility, we actually need the thresholds to be more like 100 requests per second to avoid blocking real users.

10k IP addresses making 100 requests per second (1 million req/s) would overwhelm all but the highest-scale systems.