
1343 points by Hold-And-Modify | 69 comments

Hello.

Cloudflare's Browser Integrity Check/Verification/Challenge feature, used by many websites, is denying access to users of non-mainstream browsers like Pale Moon.

User reports began on January 31:

https://forum.palemoon.org/viewtopic.php?f=3&t=32045

This situation occurs at least once a year, and there is no easy way to contact Cloudflare. Their "Submit feedback" tool yields no results. A Cloudflare Community topic was flagged as "spam" by members of that community and was promptly locked with no real solution, and no official response from Cloudflare:

https://community.cloudflare.com/t/access-denied-to-pale-moo...

Partial list of other browsers that are being denied access:

Falkon, SeaMonkey, IceCat, Basilisk.

A 2022 Hacker News post about the same issue, which drew attention and prompted Cloudflare to quickly patch the problem:

https://news.ycombinator.com/item?id=31317886

A Cloudflare product manager declared back then: "...we do not want to be in the business of saying one browser is more legitimate than another."

As of now, there is no official response from Cloudflare. Internet access is still denied by their tool.

1. ai-christianson ◴[] No.42954365[source]
How many of you all are running bare metal hooked right up to the internet? Is DDoS or any of that actually a super common problem?

I know it happens, but also I've run plenty of servers hooked directly to the internet (with standard *nix security precautions and hosting provider DDoS protection) and haven't had it actually be an issue.

So why run absolutely everything through Cloudflare?

replies(20): >>42954540 #>>42954566 #>>42954576 #>>42954719 #>>42954753 #>>42954770 #>>42954846 #>>42954917 #>>42954977 #>>42955107 #>>42955135 #>>42955479 #>>42956166 #>>42956201 #>>42956652 #>>42957837 #>>42958038 #>>42958248 #>>42963387 #>>42964892 #
2. uniformlyrandom ◴[] No.42954540[source]
Most exploits target the software, not the hardware. CF is a good reverse proxy.
3. raffraffraff ◴[] No.42954566[source]
They make it easy to delegate a DNS zone to them and use their API to create records (e.g. install external-dns on Kubernetes and let it create records automatically for ingresses)
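For instance, once external-dns is pointed at a Cloudflare-delegated zone, an Ingress along these lines is enough for the record to be created automatically (a sketch; the names and hostname are hypothetical):

```yaml
# Hypothetical Ingress: external-dns watches resources like this and
# creates the matching DNS record in the delegated zone via the CF API.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    # Optional: control the TTL external-dns sets on the record
    external-dns.alpha.kubernetes.io/ttl: "120"
spec:
  rules:
    - host: app.example.com   # external-dns derives the DNS name from here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```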
4. codexon ◴[] No.42954719[source]
It is common once your website hits a certain threshold in popularity.

If you are just a small startup or a blog, you'll probably never see an attack.

Even if you don't host anything offensive you can be targeted by competitors, blackmailed for money, or just randomly selected by a hacker to test the power of their botnet.

5. progmetaldev ◴[] No.42954753[source]
Web scraping without any kind of sleeping between requests (usually firing many threads at once), as well as heavy exploit scanning, is a near constant for most websites. With AI technology it's only getting worse, as vendors attempt to bring in content from all over the web without regard for resource usage. Depending on the industry, DDoS can be very common from competitors that aren't afraid to rent botnets to boost their business and tear down those they compete against.
6. rpgwaiter ◴[] No.42954770[source]
It’s free unless you’re rolling in traffic, it’s extremely easy to set up, and CF can handle pretty much all of your infra with tools way better than AWS.

Also you can buy a cheaper IPv6-only VPS and run it through the free CF proxy to allow IPv4 traffic to your site

replies(1): >>42955517 #
7. grishka ◴[] No.42954846[source]
> How many of you all are running bare metal hooked right up to the internet?

I do. Many people I know do. In my risk model, DDoS is something purely theoretical. Yes, it can happen, but you have to seriously upset someone for it to maybe happen.

replies(1): >>42955467 #
8. nijave ◴[] No.42954917[source]
Small/medium SaaS. Had ~8 hours of 100k reqs/sec last year when we usually see 100-150 reqs/sec. Moved everything behind a Cloudflare Enterprise setup and ditched AWS Client Access VPN (OpenVPN) for Cloudflare WARP

I've only been here 1.5 years, but it sounds like we usually see one decent-sized DDoS a year, plus a handful of other "DoS" events - usually AI crawler extensions or third parties calling too aggressively

There are some extensions/products that create a "personal AI knowledge base"; they'll use the customer's login credentials and scrape every link once an hour. Some links are really resource-intensive data or report requests that are very rare in real usage

replies(1): >>42955030 #
9. matt_heimer ◴[] No.42954977[source]
Yes, [D]DoS is a problem. It's not uncommon for a single person with residential fiber to have more bandwidth than your small site hosted on a 1U box or VPS. Either your bandwidth is rate limited and they can deny service to your site, or your bandwidth is greater but they can still push you over your allocation and cause massive charges.

In the past you could ban IPs but that's not very useful anymore.

The distributed attacks tend to be AI companies that assume every site has infinite bandwidth and their crawlers tend to run out of different regions.

Even if you aren't dealing with attacks or outages, Cloudflare's caching features can save you a ton of money.

If you haven't used Cloudflare, most sites only need their free tier offering.

It's hard to say no to a free service that provides features you need.

Source: I went over a decade hosting a site without a CDN before it became too difficult to deal with. Basically I spent 3 days straight banning IPs at the hosting-company level, tuning various rate-limiting web server modules, and even scaling the hardware to double the capacity. None of it could keep the site online 100% of the time. Within 30 minutes of trying Cloudflare it was working perfectly.

replies(2): >>42955258 #>>42959421 #
10. gamegod ◴[] No.42955030[source]
Did you put rate limiting rules on your webserver?

Why was that not enough to mitigate the DDoS?

replies(4): >>42955331 #>>42955430 #>>42955462 #>>42957537 #
11. motiejus ◴[] No.42955107[source]
I've been running jakstys.lt (and subdomains like git.jakstys.lt) from my closet, a simple residential connection with a small monthly price for a static IP.

The only time I had a problem was when Gitea started caching git bundles of my Linux kernel mirror, which bots kept downloading (things like a full tar.gz of every commit since 2005). The server promptly ran out of disk space. I fixed the Gitea settings to not cache those. That was it.

Never a DDoS. Or I (and UptimeRobot) did not notice it. :)

12. Puts ◴[] No.42955135[source]
Most (D)DoS attacks are just either UDP floods or SYN floods that iptables will handle without any problem. Sometimes what people think is a DDoS is just their application DDoSing itself because it is making recursive calls to some back-end microservice.

If it was actually a traffic-based DDoS, someone still needs to pay for that bandwidth, which would be too expensive for most companies anyway - even if it kept your site running.

But you can sell a lot of services to incompetent people.
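As a rough illustration of the kind of iptables/sysctl handling being described (a sketch, not a complete policy - the thresholds are made up, and it assumes the hashlimit match module is available):

```shell
# Enable SYN cookies so a SYN flood can't exhaust the half-open queue
sysctl -w net.ipv4.tcp_syncookies=1

# Drop sources sending SYNs faster than a sane per-source rate
iptables -A INPUT -p tcp --syn \
  -m hashlimit --hashlimit-name synflood \
  --hashlimit-mode srcip --hashlimit-above 50/second --hashlimit-burst 100 \
  -j DROP

# Drop malformed flag combinations common in floods and scans
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
```

Note this only helps while the link itself isn't saturated; a truly volumetric attack has to be absorbed upstream.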

replies(2): >>42955503 #>>42956007 #
13. johnmaguire ◴[] No.42955258[source]
> It's hard to say no to a free service that provides feature you need.

Very true! Though you still see people who are surprised to learn that CF DDoS protection acts as a MITM proxy and can read your traffic in plaintext. This is of course by design, to inspect the traffic. But admittedly, CF is not very clear about this in the Admin Panel or docs.

Places one might expect to learn this, but won't:

- https://developers.cloudflare.com/dns/manage-dns-records/ref...

- https://developers.cloudflare.com/fundamentals/concepts/how-...

- https://imgur.com/a/zGegZ00

replies(1): >>42955969 #
14. danielheath ◴[] No.42955331{3}[source]
Not the same poster, but the first "D" in "DDoS" is why rate-limiting doesn't work - attackers these days usually have a _huge_ pool (tens of thousands) of residential IPv4 addresses to work with.
replies(2): >>42958273 #>>42960174 #
15. ◴[] No.42955430{3}[source]
16. hombre_fatal ◴[] No.42955462{3}[source]
That might have been good for preventing someone from spamming your HotScripts guestbook in 2005, but not much else.
17. maples37 ◴[] No.42955467[source]
From my experience, if you tick off the wrong person, the threshold for them starting a DDoS is surprisingly low.

A while ago, my company was hiring and conducting interviews, and after one candidate was rejected, one of our sites got hit by a DDoS. I wasn't in the room when people were dealing with it, but in the post-incident review, they said "we're 99% sure we know exactly who this came from".

replies(1): >>42961885 #
18. buyucu ◴[] No.42955479[source]
DDoS is a problem, but for most ordinary sites it's not as bad as people make it out to be. Even something very simple like fail2ban will go a long way.
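For example, a minimal jail along these lines bans IPs that keep tripping nginx's rate limit (the values are illustrative; `nginx-limit-req` is one of the filters fail2ban ships with):

```ini
; /etc/fail2ban/jail.local -- illustrative values, tune for your traffic
[nginx-limit-req]
enabled  = true
filter   = nginx-limit-req
logpath  = /var/log/nginx/error.log
findtime = 600      ; look at the last 10 minutes of log entries
maxretry = 10       ; ban after 10 limit-req hits in that window
bantime  = 3600     ; ban for an hour
```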
19. hombre_fatal ◴[] No.42955503[source]
You need an answer to someone buying $10 of booter time and sending a volumetric attack your way. If any of the traffic is even reaching your server, you've already lost, so iptables isn't going to help you because your link is saturated.

Cloudflare offers protection for free.

20. zelphirkalt ◴[] No.42955517[source]
Easy to set up, easy to screw up user experience. Easy-peasy.
21. sophacles ◴[] No.42955969{3}[source]
How would you do DDoS protection without having something in path?
replies(2): >>42956173 #>>42962295 #
22. sophacles ◴[] No.42956007[source]
What's the iptables invocation that will let my 10Gbps connection drop a 100Gbps SYN flood while also serving good traffic?
replies(2): >>42958858 #>>42964522 #
23. professorsnep ◴[] No.42956166[source]
I run a MediaWiki instance for an online community on a fairly cheap box (not a ton of traffic) but had a few instances of AI bots like Amazon's crawling a lot of expensive API pages thousands of times an hour (despite robots.txt disallowing those). Turned on Cloudflare's bot blocking and 50% of total traffic instantly went away. Even now, blocked bot requests make up 25% of total requests to the site. Without blocking I would have needed to upgrade quite a bit or play a tiring game of whack-a-mole blocking new IP ranges for the dozens of bots.
replies(3): >>42957695 #>>42960667 #>>42971791 #
24. johnmaguire ◴[] No.42956173{4}[source]
I hoped it was apparent from my comment that "this is of course by design, to inspect the traffic" meant I understood they are doing it to detect DDoS traffic and separate it from legitimate traffic. But many Cloudflare users are not so technical. I would simply advocate for being more upfront about this behavior.

That said, their Magic Transit and Spectrum offerings (paid) provide L3/L4 DDoS protection without payload inspection.

replies(1): >>42956565 #
25. blablabla123 ◴[] No.42956201[source]
The biggest problems I see with DDoS are metered traffic and availability. The largest cloud providers all meter their traffic.

The availability part on the other hand is maybe something that's not so business critical for many but for targeted long-term attacks it probably is.

So I think for some websites, especially smaller ones it's totally feasible to not use Cloudflare but involves planning the hosting really carefully.

26. sophacles ◴[] No.42956565{5}[source]
Honestly, I was confused, because both pages you link are full of the word proxy, have links to deeper discussions of what a proxy does (including explicit mentions of decryption/re-encryption), and are literally developer docs. Additionally, Cloudflare's blog posts explaining these things in depth are high in search results and also make the front page here on the regular.

I incorrectly interpreted your comment as one of the multitude of comments claiming nefarious reasons for proxying without any thought for how an alternative would work.

Magic Transit is interesting - hard to imagine how it would scale down to a small site though, they apparently advertise whole prefixes over BGP, and most sites don't even have a dedicated IP, let alone a whole /24 to throw around.

replies(1): >>42956659 #
27. megous ◴[] No.42956652[source]
I also rely on hosting provider DDoS protection and don't use very intrusive protection like Cloudflare.

The only issues I had to deal with were when someone found some slow endpoint and managed to overload the server with it. My go-to approach is to optimize it to a max response time of 10-20ms, while blocking the source of the traffic if it keeps being too annoying after optimization.

And this happened like 2-3 times over 20 years of hosting the eshop.

Much better than exposing users to CF or likes of it.

28. johnmaguire ◴[] No.42956659{6}[source]
I understand your sentiment, as I reacted similarly the first time someone brought this to my attention. However, after logging into my Cloudflare account, viewing the DNS record page, and attempting to find any mention of SSL decryption, and then clicking on related docs pages (and links from them!) I was still unable to find this information.

You're right that Cloudflare has written many high-quality blog posts on the workings of the Internet, and on the inner workings of Cloudflare. Amusingly, they even at times criticize HTTPS interception (not their use of it) and offer a tool to detect it: https://blog.cloudflare.com/monsters-in-the-middleboxes/

I still believe that this information should be displayed to the relevant user configuring the service.

There are many types of proxies, and MITM decryption is not an inherent part of a proxy. The linked page from the Admin Panel is https://developers.cloudflare.com/dns/manage-dns-records/ref... and links to pages like "How Cloudflare works" (https://developers.cloudflare.com/fundamentals/concepts/how-...) which still do not mention HTTPS interception. It sounds like you found a link I didn't. In the past someone argued that I should've looked here: https://developers.cloudflare.com/data-localization/faq/#are...

But if you look closer, those are docs for the Data Localization Suite, an Enterprise-only paid addon.

replies(1): >>42957731 #
29. nijave ◴[] No.42957537{3}[source]
We had rate limiting with Istio/Envoy but Envoy was using 4-8x normal memory processing that much traffic and crashing.

The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

Challenges work much better since they use cookies or other metadata to establish a client is trusted then let requests pass. This stops bad clients at the first request but you need something more sophisticated than a webserver with basic rate limiting.

replies(1): >>42959462 #
30. CGamesPlay ◴[] No.42957695[source]
How do you feel, knowing that some portion of the 25% “detected bot traffic” are actually people in this comment thread?
31. shwouchk ◴[] No.42957731{7}[source]
Cloudflare is primarily a caching proxy. In order to perform any caching, they would have to have the unencrypted objects. Check, mate.

It is sad that in this day and age, when you buy a car you need to sign a legal disclaimer that you understand it requires gasoline to run.

replies(1): >>42958361 #
32. johnklos ◴[] No.42957837[source]
I've been hosting web sites on my own bare metal in colo for more than 25 years. In all that time I've dealt with one DDoS that was big enough to bring everything down, and that was because of a specific person being pissed at another specific person. The attacker did jail time for DDoS activities.

Every other attempt at DDoS has been ineffective, has been form abuse and credential stuffing, has been generally amateurish enough to not take anything down.

I host (web, email, shells) lots of people including kids (young adults) who're learning about the Internet, about security, et cetera, who do dumb things like talk shit on irc. You'd think I'd've had more DDoS attacks than that rather famous one.

So when people assert with confidence that the Internet would fall over if companies like Cloudflare weren't there to "protect" them, I have to wonder how Cloudflare marketed so well that these people believe this BS with no experience. Sure, it could be something else, like someone running Wordpress with a default admin URL left open who makes a huge deal about how they're getting "hacked", but that wouldn't explain all the Cloudflare apologists.

Cloudflare wants to be a monopoly. They've shown they have no care in the world for marginalized people, whether they're people who don't live in a western country or people who simply prefer to not run mainstream OSes and browsers. They protect scammers because they make money from scammers. So why would people want to use them? That's a very good question.

replies(2): >>42958872 #>>42962313 #
33. porty ◴[] No.42958038[source]
I would feel pretty safe running my own hand-written services against the raw Internet, but if I was to host Wordpress or other large/complicated/legacy codebases I'd start to get worried. Also the CDN aspect is useful - having lived in Australia you like connections that don't have to traverse continents for every request.
34. itomato ◴[] No.42958248[source]
Check your logs, you might be surprised.
35. chillfox ◴[] No.42958273{4}[source]
They were talking about logged-in accounts, so you would group the rate limiting by account, not by IP address.
replies(1): >>42964556 #
36. johnmaguire ◴[] No.42958361{8}[source]
Cloudflare's CDN capabilities are separate from DDoS protection, and indeed many requests cannot be cached because the resources are sensitive (e.g. authenticated requests).

Again, there are many forms of proxies and DDOS protection that do not rely on TLS interception, just as there are cars that do not rely on gasoline. Cloudflare has many less technical home users who use their service to avoid sharing their IP online, avoid DDOS, or access home resources. I do not think the average Internet user is familiar with these concepts. There are many examples of surprised users on subreddits like /r/homelab.

replies(1): >>42958449 #
37. shwouchk ◴[] No.42958449{9}[source]
How would they know what to cache? The response headers from the server are encrypted. There is maybe the high-end L3 protection available if you have the resources; the free tier has caching bundled.

Also, how would their certificates work if they don't see content?

replies(1): >>42959705 #
38. truetraveller ◴[] No.42958858{3}[source]
XDP
39. mvdtnz ◴[] No.42958872[source]
I'm sorry but lumping in people who prefer to use a weird browser with "marginalised people" does not help your credibility.
replies(2): >>42959455 #>>42966194 #
40. Aachen ◴[] No.42959421[source]
> not uncommon for a single person with residential fiber to have more bandwidth than your small site hosted on a 1u box or VPS.

Then self host from your connection at home, don't pay for the VPS :). That's what I've been doing for over a decade now and still never saw a (D)DoS attack

50 mbps has been enough to host various websites, including one site that allows several gigabytes of file upload unauthenticated for most of the time that I self host. Must say that 100 mbps is nicer though, even if not strictly necessary. Well, more is always nicer but returns really diminish after 100 (in 2025, for my use case). Probably it's different if you host videos, a Tor relay, etc. I'm just talking normal websites

replies(1): >>42961052 #
41. Aachen ◴[] No.42959455{3}[source]
What bit do you mean specifically? As a fellow web hoster, who also hosted kids before (from a game making forum), I can fully corroborate what they're saying
replies(1): >>42959764 #
42. Aachen ◴[] No.42959462{4}[source]
> The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

So how is Cloudflare supposed to distinguish legitimate new visitors from new attack IPs if you can't?

Because it matches my experience as a cloudflare user perfectly if the answer were "they can't"

replies(1): >>42964552 #
43. Dylan16807 ◴[] No.42959705{10}[source]
> how would they know what to cache?

That's a weird question to ask to someone that went out of their way to describe a non-caching situation.

> Also, how would their certificates work if they don’t see content?

Can you be more specific? I'm not sure which feature you're asking about or how it uses certificates.

But the answer is likely "that feature isn't necessary to provide DDOS protection".

replies(1): >>42959936 #
44. mvdtnz ◴[] No.42959764{4}[source]
Clearly you didn't even read his post (or mine) if you're asking. I'm obviously referring to

> Cloudflare wants to be a monopoly. They've shown they have no care in the world for marginalized people, whether they're people who don't live in a western country or people who simply prefer to not run mainstream OSes and browsers.

45. shwouchk ◴[] No.42959936{11}[source]
Sorry, they did not go much out of their way, to simply claim “solutions exist”. Sure, you could invent other ways of protecting your traffic but what CF offers in the free tier always includes SSL termination with their own certificates (if you enable ssl), and always includes caching.
replies(1): >>42960530 #
46. rixed ◴[] No.42960174{4}[source]
Is tens of thousands a big number again?
replies(1): >>42978827 #
47. Dylan16807 ◴[] No.42960530{12}[source]
> invent other ways

Just turning off some features gets them just about there. It wouldn't take rearchitecting things. Those features being bundled by default means very little for the difficulty.

replies(1): >>42966532 #
48. mrweasel ◴[] No.42960667[source]
AI bots are a huge issue for a lot of sites. Putting intentional DDoS attacks aside, AI scrapers can frequently tip over a site because many of them don't know how to back off. Google is an exception, really; their experience building GoogleBot has ensured that they are never a problem.

Many of the AI scrapers don't identify themselves. They live on AWS, Azure, Alibaba Cloud, and Tencent Cloud, so you can't really block them, and rate limiting also has limited effect as they just jump to new IPs. As a site owner, you can't really contact AWS and ask them to terminate their customer's service in order for you to recover.

49. lucumo ◴[] No.42961052{3}[source]
> 50 mbps has been enough to host various websites,

Bandwidth hasn't been a limiting factor for years for me.

But generating dynamic pages can bring just enough load for it to get painful. Just this week I had to blacklist Meta's ridiculously overactive bot sending me more requests per second than all my real users do in an hour. Meta and ClaudeBot have been causing intermittent overloads for weeks now.

They now get 403s because I'm done trying to slow them down.
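The 403s can be done with a small nginx map on the User-Agent header. A sketch only: the patterns are examples, and in practice they should come from your own access logs, since (as noted elsewhere in the thread) many scrapers don't identify themselves at all.

```nginx
# Match known-overactive crawlers by User-Agent (patterns are examples)
map $http_user_agent $blocked_bot {
    default            0;
    "~*meta-external"  1;   # Meta's crawler
    "~*ClaudeBot"      1;   # Anthropic's crawler
}

server {
    listen 80;
    server_name example.com;

    location / {
        if ($blocked_bot) { return 403; }
        proxy_pass http://127.0.0.1:8080;
    }
}
```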

50. Loughla ◴[] No.42961885{3}[source]
What the hell is wrong with people? Honestly the lack of substantive human interaction in a lot of folks' lives, except via the Internet, is a real problem.

Take that story for instance. Here's how that goes in the physical world, just to show how unbelievably ridiculous it is.

So you didn't get the job? What's your next step?

I'll stop by their office and keep people from entering the front doors by running around in front of them. That'll show those bastards.

replies(1): >>42964886 #
51. 1oooqooq ◴[] No.42962295{4}[source]
Many ways, but they are not plug-and-play, so they would lose a few clients... but that is irrelevant, as snooping traffic is their real business model.
replies(1): >>42965403 #
52. systems_glitch ◴[] No.42962313[source]
Same basic experience. The colo ISP soaks up most actual DDoS. We had a couple mid-sized ones when we were hosting irc.binrev.net from salty b& users. No real effect other than the colo did let us know it was happening and that it was "not a significant amount of DDoS by our standards."
53. betaby ◴[] No.42963387[source]
Other comments say that DDoSes are common; that's not my experience, though. I run a couple of API/SaaS sites and DDoSes are rare. The sites are in Canada and Brazil, if that matters, although I won't disclose which data centers. The strangest thing is that no one ever demanded any ransom during those DDoS attacks. Just some flooding for 1-2 days. Most of the time I didn't even care - servers are on 10G ports and I pay 95th percentile for the traffic with a cap on the final bill. Sites are geo-fenced by nftables rules; only countries of interest are allowed.
54. Puts ◴[] No.42964522{3}[source]
The point of a SYN flood is to try to saturate the OS limit for open sockets. From an attacker's perspective, the whole point of a SYN flood is to do a DoS without needing much bandwidth.

My experience from 15 years working in the hosting industry is that volumetric attacks are extremely rare, but customers that turn to Cloudflare as a solution are more often than not DDoSing themselves because of badly configured systems, and their junior developers lack any networking troubleshooting skills.

55. nijave ◴[] No.42964552{5}[source]
Captcha/challenges and tracking users/IP rep across the web

They also do IP and request risk scores using massive piles of data they've collected

56. nijave ◴[] No.42964556{5}[source]
They were unauthenticated requests making GETs to the login page
57. ◴[] No.42964886{4}[source]
58. tombert ◴[] No.42964892[source]
I run my "server" [1] straight to my home internet, and maybe I should count my blessings but I haven't had any issues with DDoS in the years I've done this.

I have relatively fast internet, so maybe it's fast enough to absorb a lot of the problems, but I've had good enough luck with some basic Nginx settings and fail2ban.

[1] a small little mini gaming PC running NixOS.

59. sophacles ◴[] No.42965403{5}[source]
What are those many ways? Help me understand - I've been doing this shit a long time and I can't think of many ways to provide what Cloudflare does in a way that is cheap, easy, and scalable without working at the HTTP layer. So please help me learn something new, what are those ways?
replies(1): >>42971608 #
60. johnklos ◴[] No.42966194{3}[source]
You're focusing on the wrong kind of pedantry.

"Marginalized" has a specific connotation, sure, but people can be marginalized for reasons other than, or in addition to, those that fit the connotation.

61. shwouchk ◴[] No.42966532{13}[source]
So you too, are saying “its possible” as proof of your argument.

Which itself shifted from complaining that you aren’t warned that coffee is hot, to - after implicitly agreeing that it should be obvious it’s hot - complaining that they didn’t have to make it as hot.

Great! Offer an alternative! Everyone would be more than happy.

replies(1): >>42968339 #
62. Dylan16807 ◴[] No.42968339{14}[source]
Not that it's "possible", that it requires them to add nothing new.

That is a much much easier to reach bar.

It's like if a restaurant sells cheeseburgers, and I want a hamburger. "How do they figure out ~~what~to~cache~~ the cheese to ketchup ratio without adding cheese?" They can just skip that part. I'm not asking for sushi and supporting that by saying "sushi is possible".

replies(1): >>42976224 #
63. 1oooqooq ◴[] No.42971608{6}[source]
Offer an L2 load balancer that acts as a queue. If the site decides it's a DoS/bad request, it sends either a downgraded response the load balancer can read, or uses a side channel; the load balancer then drops everything from that IP, or other identifiable patterns, based only on L2 info.

There are many others. Just buy a book for industries that value privacy, or pay someone.

64. account42 ◴[] No.42971791[source]
You don't need buttflare's mystery juice to rate-limit or block bad users.
65. shwouchk ◴[] No.42976224{15}[source]
So you agree that your argument has shifted from complaining about inadequate disclosure that coffee contains caffeine, to complaints about lack of decaf offerings.

It would also be trivial for google and facebook to turn off all ads and logging of your activity. They would need to do strictly less than they do now. It would benefit all users too!

In CF case they would have to build a completely different infrastructure to detect bots using different technology to what they have now, including different ways around false positives for legitimate users. While perhaps nothing new in the sense that you claim “this is possible”, i see no one else offering this mythical “possible” product.

I would be the first in line to your offering of free cheeseless hamburgers. Where do i sign up?

replies(1): >>42976357 #
66. Dylan16807 ◴[] No.42976357{16}[source]
> So you agree that your argument has shifted from complaining about inadequate disclosure that coffee contains caffeine, to complaints about lack of decaf offerings.

My argument has never shifted.

But the reason the discussion shifted was that someone specifically asked how you'd do DDoS protection without those downsides.

And you continued asking how it could be done.

> It would also be trivial for google and facebook to turn off all ads and logging of your activity. They would need to do strictly less than they do now. It would benefit all users too!

Isn't cloudflare supposedly not tracking private information in the websites they proxy...? If you think they make money off it, that's pretty bad...

> In CF case they would have to build a completely different infrastructure to detect bots using different technology to what they have now, including different ways around false positives for legitimate users.

I disagree.

> I would be the first in line to your offering of free cheeseless hamburgers. Where do i sign up?

First you need to put me into a situation where my business can compete with cloudflare while doing exactly the same things they do. Then I will be happy to comply with that request.

The hard part of this situation is not the effect of that tiny change on profitability, it's getting into a position where I can make that change.

replies(1): >>42978587 #
67. shwouchk ◴[] No.42978587{17}[source]
> Isn't cloudflare supposedly not tracking private information in the websites they proxy...?

They are at the very least tracking the users and using that tracking as part of the heuristics they use in their product.

Whether they sell the data for marketing, i don’t know, hopefully not but conceivably, yes.

To which, > I disagree.

Yes, we’ve established that you disagree and explicitly claim “it’s possible to offer DDoS protection without MITM”,

and now further that “dropping the extra feature of caching would not adversely affect their technology or their business”.

Great - though those claims are entirely unsupported, and in the latter case obviously false if you know anything about how it works.

In particular, they would need to sponsor the free accounts with much poorer economies of scale, due to not being able to cache anything, and it would not help at all with a “legitimate DDoS” such as being on the front page here

replies(1): >>42978989 #
68. danielheath ◴[] No.42978827{5}[source]
Depends. Ten thousand what?

I work on a "pretty large" site (was on the alexa top 10k sites, back when that was a thing), and we see about 1500 requests per second. That's well over 10k concurrent users.

Adding 10k requests per second would almost certainly require a human to respond in some fashion.

Each IP making one request per second is low enough that if we banned IPs which exceeded it, we'd be blocking home users who opened a couple of tabs at once. However, since eg universities / hospitals / big corporations typically use a single egress IP for an entire facility, we actually need the thresholds to be more like 100 requests per second to avoid blocking real users.

10k IP addresses making 100 requests per second (1 million req/s) would overwhelm all but the highest-scale systems.

69. Dylan16807 ◴[] No.42978989{18}[source]
> They are at the very least tracking the users and using that tracking as part of the heuristics they use in their product.

They can do that without seeing the proxied contents. So your analogy to asking facebook or google to stop ads and tracking is completely broken.

> and now further that “dropping the extra feature of caching” would not adversely affect their technology or their business”

Yes. (Well, it was stated much earlier but I guess you didn't notice until now?) You're the one saying it would be a problem, do you have anything to back that up?

> in the latter case obviously false if you know anything about how it works.

Caching costs a bunch of resources and still uses lots of bandwidth, what's so obvious about it? And cloudflare users can already cache-bust at will, so it's not exactly something they're worried about.

https://developers.cloudflare.com/cache/how-to/cache-rules/s...

> would not help at all with a “legitimate ddos” such as being on the front page here

Which is not the scenario people were worrying about.

And an average web server can handle that.