Cloudflare's Browser Integrity Check/Verification/Challenge feature, used by many websites, is denying access to users of non-mainstream browsers like Pale Moon.
User reports began on January 31:
https://forum.palemoon.org/viewtopic.php?f=3&t=32045
This situation occurs at least once a year, and there is no easy way to contact Cloudflare. Their "Submit feedback" tool yields no results. A Cloudflare Community topic was flagged as "spam" by members of that community and was promptly locked with no real solution, and no official response from Cloudflare:
https://community.cloudflare.com/t/access-denied-to-pale-moo...
Partial list of other browsers that are being denied access:
Falkon, SeaMonkey, IceCat, Basilisk.
A 2022 Hacker News post about the same issue, which drew attention and prompted Cloudflare to quickly patch it:
https://news.ycombinator.com/item?id=31317886
A Cloudflare product manager declared back then: "...we do not want to be in the business of saying one browser is more legitimate than another."
As of now, there is no official response from Cloudflare. Internet access is still denied by their tool.
What are you protecting, Cloudflare?
Also they show those captchas when going to robots.txt... unbelievable.
I know it happens, but also I've run plenty of servers hooked directly to the internet (with standard *nix security precautions and hosting provider DDoS protection) and haven't had it actually be an issue.
So why run absolutely everything through Cloudflare?
It is either that or keep sending data back to the Meta and Co. overlords despite me not being a Facebook, Instagram, or WhatsApp user...
Google itself tried to push crap like Web Environment Integrity (WEI) so websites could verify "authentic" browsers. We got them to stop it (for now), but there was already code in the Chromium sources. What makes CloudFlare MITMing traffic and blocking/punishing genuine users who visit websites any different?
Why are we trusting CloudFlare to be a "good citizen" and not block or annoy certain people unfairly for whatever reason? Or even worse, serve modified content instead of what the actual origin is serving? I mean in the cases where CloudFlare re-encrypts the data, instead of only being a DNS provider. How can we trust that no third party has infiltrated their systems and compromised them? Except "just trust me bro", of course.
If you are just a small startup or a blog, you'll probably never see an attack.
Even if you don't host anything offensive you can be targeted by competitors, blackmailed for money, or just randomly selected by a hacker to test the power of their botnet.
Also you can buy a cheaper IPv6-only VPS and run it through the free CF proxy to allow IPv4 traffic to your site.
I witnessed this! Last time I checked, in the default config, the connection between cloudflare and the origin server does not do strict TLS cert validation. Which for an active-MITM attacker is as good as no TLS cert validation at all.
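To illustrate what that means in practice, here is a minimal sketch. The mode names ("Full" vs. "Full (strict)") are Cloudflare's; the origin URL is hypothetical. Non-strict validation behaves like a client that accepts any certificate:

    import requests

    ORIGIN = "https://origin.example.com"  # hypothetical origin behind the proxy

    # "Full (strict)" behaviour: the origin's certificate chain must validate,
    # so a MITM presenting a self-signed cert gets rejected.
    requests.get(ORIGIN, verify=True)

    # "Full" (non-strict) behaviour: traffic is encrypted, but ANY certificate
    # is accepted -- a self-signed cert injected on the path between proxy
    # and origin passes unnoticed.
    requests.get(ORIGIN, verify=False)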
A few years ago an Indian ISP decided that https://overthewire.org should be banned for hosting "hacking" content (iirc). For many Indian users, the page showed a "content blocked" page. But the error page had a padlock icon in the URL bar and a valid TLS cert - said ISP was injecting it between Cloudflare and the origin server using a self-signed cert, and Cloudflare was re-encrypting it with a legit cert. In this case it was very conspicuous, but if the tampering was less obvious there'd be no way for an end-user to detect the MITM.
I don't have any evidence on-hand, but iirc there were people reporting this issue on Twitter - somewhere between 2019 and 2021, maybe.
I do. Many people I know do. In my risk model, DDoS is something purely theoretical. Yes it can happen, but you have to seriously upset someone for it to maybe happen.
I agree that this exposes the risk of relying overmuch on a handful of large, opaque, unaccountable companies. And as long as Cloudflare's customers are web operators (rather than users), there isn't a lot of incentive for them to be concerned about the user if their customers aren't.
One idea might be to approach web site operators who use Cloudflare and whose sites trigger these captchas more than you'd like. Explain the situation to the web site operator. If the web site operator cares enough about you, they might complain to Cloudflare. And if not, well, you have your answer.
If there were an alternative that would provide the same benefits at roughly the same cost, I would definitely be willing to take a look, even if it meant I needed to spend some time learning a different way to configure the service from the way I configure Cloudflare.
I've only been here 1.5 years, but it sounds like we usually see one decent-sized DDoS a year, plus a handful of other "DoS" events, usually AI crawler extensions or third parties calling too aggressively.
There are some extensions/products that create a "personal AI knowledge base"; they'll use the customer's login credentials and scrape every link once an hour. Some links are really, really resource-intensive data or report requests that are very rare in real usage.
In the past you could ban IPs but that's not very useful anymore.
The distributed attacks tend to be AI companies that assume every site has infinite bandwidth and their crawlers tend to run out of different regions.
Even if you aren't dealing with attacks or outages, Cloudflare's caching features can save you a ton of money.
If you haven't used Cloudflare, most sites only need their free tier offering.
It's hard to say no to a free service that provides features you need.
Source: I went over a decade hosting a site without a CDN before it became too difficult to deal with. Basically I spent 3 days straight banning IPs at the hosting-company level, tuning various rate-limiting web server modules, and even scaling the hardware to double the capacity. None of it could keep the site online 100% of the time. Within 30 minutes of trying Cloudflare it was working perfectly.
On the other, Pale Moon is an ancient (pre-quantum) volunteer-supported fork of Firefox, with boatloads of known and unfixed security bugs - some fixes might be getting merged from upstream, but for real, the codebases diverged almost a decade ago. You might as well be using IE 11.
(And if I were doing this on my own, rather than trusting Cloudflare to do it, I would almost surely decide that I don't care enough about Pale Moon users to fix an otherwise good rule that's blocking them as a side effect.)
The only time I had a problem was when gitea started caching git bundles of my Linux kernel mirror, which bots kept downloading (things like a full tar.gz of every commit since 2005). The server promptly ran out of disk space. I fixed the gitea settings to not cache those. That was it.
Never a DDoS. Or I (and UptimeRobot) did not notice it. :)
CAPTCHAs are barely sufficient against bots these days. I expect the first sites to start implementing Apple/Cloudflare's remote attestation as a CAPTCHA replacement any day now, and after that it's going to get harder and harder to use the web without Official(tm) Software(tm).
Using Linux isn't what's getting you blocked. I use Linux, and I'm not getting blocked. These blocks are the results of a whole range of data points, including things like IP addresses.
If it was actually a traffic-based DDoS, someone still needs to pay for that bandwidth, which would be too expensive for most companies anyway - even if it kept your site running.
But you can sell a lot of services to incompetent people.
It's also a pretty safe assumption that Cloudflare is not run by morons, and they have access to more data than we do, by virtue of being the strip club bouncer for half the Internet.
"'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36'"
That means my browser is pretending to be Firefox AND Safari on an Intel chip.
I don't know what features Cloudflare uses to determine what browser you're on, or if perhaps it's sophisticated enough to get past the user agent spoofing, but it's all rather funny and reminiscent just the same.
Think about it this way: when a framework (many modern websites) or CAPTCHA/Challenge doesn't support an older or less common browser, it's not because someone's sitting there trying to keep people out. It's more likely they are trying to balance the maintenance costs and the hassle involved in supporting the many other platforms out there (browsers, in this case). At what point is a browser relevant? 1 user? 2 users? 100? Can you blame a company that accommodates probably >99% of the traffic they usually see? I don't think so, but that's just me.
In the end, site owners can always look at their specific situation and decide how they want to handle it - stick with the default security settings or open things up through firewall rules. It's really up to them to figure out what works best for their users.
This hostility to normal browsing behavior makes me extremely reluctant to ever use Cloudflare on any projects.
Very true! Though you still see people who are surprised to learn that CF DDOS protection acts as a MITM proxy and can read your traffic plaintext. This is of course by design, to inspect the traffic. But admittedly, CF is not very clear about this in the Admin Panel or docs.
Places one might expect to learn this, but won't:
- https://developers.cloudflare.com/dns/manage-dns-records/ref...
- https://developers.cloudflare.com/fundamentals/concepts/how-...
The Cloudflare tool does not complete its verifications, resulting in an endless "Verifying..." loop and thus none of the websites in question can be accessed. All you get to see is Cloudflare.
Is it worth giving the internet to them? Is something so fundamentally wrong with the architecture of the internet that we need megacorps to patch the holes?
As part of some browser fingerprinting I have access to at work, there are both commercial and free solutions to determine the actual browser being used.
It's quite easy even if you're just going off of the browser-exposed properties. You just check the values against a prepopulated table. You can see some such values here: https://amiunique.org/fingerprint
Edit: To follow up, one of the leading fingerprinting libraries just ignores useragent and uses functionality testing as well: https://github.com/fingerprintjs/fingerprintjs/blob/master/s...
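As a rough sketch of that consistency-checking idea (the property names and values are illustrative, not fingerprintjs's actual code): a client-side script collects browser-exposed properties, and the check simply asks whether they agree with what the user agent claims.

    # Properties a fingerprinting script might collect in the browser and
    # send back for validation (names are illustrative).
    collected = {
        "userAgent": "Mozilla/5.0 (...) Chrome/132.0.0.0 Safari/537.36",
        "vendor": "",              # real Chrome reports "Google Inc."
        "hasWindowChrome": False,  # real Chrome exposes a window.chrome object
    }

    def consistent_with_chrome(props: dict) -> bool:
        """Does the claimed Chrome UA agree with Chrome-only properties?"""
        if "Chrome/" not in props["userAgent"]:
            return False
        return props["vendor"] == "Google Inc." and props["hasWindowChrome"]

    print(consistent_with_chrome(collected))  # False -> UA is likely spoofed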
I left the script open, polling, for about 20 minutes, and suddenly it started working.
So even sending all the same headers as Firefox, but with cURL, CF seemed to detect automated access, and then eventually allowed it through anyway after it saw I was only polling once a minute. I found this rather impressive. Are they using subtle timings? Does cURL have an easy-to-spot fingerprint outside of its headers?
Reminded me of this attack, where they can detect when a script is running under "curl | sh" and serve alternate code versus when it is read in the browser: https://news.ycombinator.com/item?id=17636032
It's probably dependent on the security settings the site owner has chosen. I'm guessing bot fight mode might cause the issue.
Absolutely true. But the programmers of these bots are lazy and often don't. So if Cloudflare has access to other data that can positively identify bots, and there is a high correlation with a particular user agent, well then it's a good first-pass indication despite collateral damage from false positives.
"Challenges are not supported by Microsoft Internet Explorer."
Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers, as defined by Cloudflare presumably. That wouldn't sound too legal, honestly.
Below that: "Visitors must enable JavaScript and cookies on their browser to be able to pass any type of challenge."
These conditions are met.
On one hand, I get the annoying "Verify" box every time I use ChatGPT (and now, due to its popularity, DeepSeek as well).
On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
I honestly don't know what the solution is.
A while ago, my company was hiring and conducting interviews, and after one candidate was rejected, one of our sites got hit by a DDoS. I wasn't in the room when people were dealing with it, but in the post-incident review, they said "we're 99% sure we know exactly who this came from".
Cloudflare offers protection for free.
I'm unsure what part of this isn't clear: major browsers, as long as they are up to date, are supported and should always pass challenges. Pale Moon isn't a major browser, and neither are the other browsers mentioned in the thread.
> Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers
Challenge pages are what your browser is struggling to pass; you aren't seeing a block page or a straight-up denial of the connection. Instead, the challenge isn't passing because whatever update CF has made has clearly broken compatibility with Pale Moon; I seriously doubt this was on purpose. Regarding those annoying challenge pages, they aren't meant to be used 24/7, as they are genuinely annoying. If you are seeing challenge pages more often than you do on Chrome, it's likely that the site owner is actively flagging your session to be challenged; they can undo this by adjusting their firewall rules.
If a site owner decides to enable challenge pages for every visitor, you should shift the blame to the site owner's lack of interest in properly tuning their firewall.
If you really do have a better way to make all legitimate users of sites happy with bot protections, then by all means - there is a massive market for this. Unfortunately you're probably more like me, stuck between a rock and a hard place, in a situation where we have no good solution and just annoyance with the way things are.
Robots went out of control, whether malicious or the AI scrapers or the Clearview surveillance kind; users learned to not trust random websites; SEO spam ruined search, the only thing that made a decentralized internet navigable; nation state attacks became a common occurrence; people prefer a few websites that do everything (Facebook becoming an eBay competitor). Even if it were possible to set rules banning Clearview or AI training, no nation outside of your own will follow them; an issue which even becomes a national security problem (are you sure, Taiwan, that China hasn't profiled everyone on your social media platforms by now?)
There is no solution. The dream itself was not sustainable. The only solution is either a global moratorium of understanding which everyone respectfully follows (wishful thinking, never happening); or splinternetting into national internets with different rules and strong firewalls (which is a deal with the devil, and still admitting the vision failed).
Yup!
> I honestly don't know what the solution is.
Force law enforcement to enforce the laws.
Or else, block the countries that don't combat fraud. That means... China? Hey, isn't there a "trade war" being "started"? It sure would be fortunate if China (and certain other fraud-friendly countries around Asia/Pacific) were blocked from the rest of the Internet until/unless they provide enforcement and/or compensation for their fraudulent use of technology.
What usually works for me is to close the browser, reload, and try again.
If it's a https URL: Yes, the TLS handshake. There are curl builds[1] which try (and succeed) to imitate the TLS handshake (and settings for HTTP/2) of a normal browser, though.
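A toy version of how such a check might work server-side. This mirrors the JA3 idea (an MD5 over ClientHello parameters), but the field strings below are placeholders, not real browser fingerprints:

    import hashlib

    # JA3-style idea: fingerprint = MD5 over ClientHello parameters
    # (TLS version, cipher order, extensions, curves, point formats).
    def ja3_style_hash(client_hello_fields: str) -> str:
        return hashlib.md5(client_hello_fields.encode()).hexdigest()

    # Placeholder table; real deployments map observed hashes to known clients.
    KNOWN_CLIENTS = {
        ja3_style_hash("771,4865-4866-4867,0-23-65281,29-23-24,0"): "Firefox-like stack",
        ja3_style_hash("771,4866-4867-4865,0-11-10,29-23,0-1"): "curl/OpenSSL-like stack",
    }

    def check(client_hello_fields: str, user_agent: str) -> str:
        client = KNOWN_CLIENTS.get(ja3_style_hash(client_hello_fields), "unknown stack")
        if "Firefox" in user_agent and "curl" in client:
            return "suspicious: Firefox headers on a " + client
        return client

The point is that the handshake is produced by the TLS library, not by whatever headers you set, so plain curl stands out no matter what User-Agent it sends.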
There are actually hundreds of smaller Chromium forks that add small features, such as built-in adblock, and have no issues with either Cloudflare or other captchas.
The thing is that these tools are generally used to further entrench power that monopolies, duopolies, and cartels already have. Example: I've built an app that compares grocery prices as you make a shopping list, and you would not believe the extent that grocers go to to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day. What I thought would be a quick little project has turned out to be wildly adversarial. But now spite driven development is a factor so I will press on.
It will always be a cat and mouse game, but we're at a point where the cat has a 46 billion dollar market cap and handles a huge portion of traffic on the internet.
Though annoying, it's tolerable. It seemed like a fair solution. Blocking doesn't seem fair.
ChatGPT.com is normally quite useful for generating Cloudflare prompts, but that page doesn't seem to work in Palemoon regardless of prompts. What version browser engine does it use these days? Is it still based on Firefox?
For reference I grabbed the latest main branch of Ladybird and ran that, but Cloudflare isn't showing me any prompts for that either.
They do not - not definitively [1]. This cat-and-mouse game is stochastic at higher levels, with bots doing their best to blend in with regular traffic, and the defense trying to pick up signals barely above the noise floor. There are diminishing returns to battling bots that are indistinguishable from regular users.
1. A few weeks ago, the HN frontpage had a browser-based project that claimed to be undetectable
This usually includes people making a near-realtime, perfect copy of your site and serving that copy for scams, middle-manning transactions, or straight fraud.
Having a clear category of "good bots" from verified or accepted companies would help for these cases. Cloudflare has such a system, I think, but then a new search engine would have to go to each and every platform provider to make deals, and that also sounds impossible.
That said, their Magic Transit and Spectrum offerings (paid) provide L3/L4 DDoS protection without payload inspection.
They ignored robots.txt (claimed not to, but I blacklisted them there and they didn't stop) and started randomly generating image paths. At some point /img/123.png became /img/123.png?a=123 or whatever, and they just kept adding parameters and subpaths for no good reason. Nginx dutifully ignored the extra parameters and kept sending the same images files over and over again, wasting everyone's time and bandwidth.
I was able to block these bots by just blocking the entire IP range at the firewall level (for Huawei I had to block all of China Telecom and later a huge range owned by Tencent for similar reasons).
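The core of that kind of range block is trivially small; a minimal sketch (with RFC 5737/3849 documentation ranges standing in for the real ASN-wide ranges):

    from ipaddress import ip_address, ip_network

    # Stand-in ranges; the real ones would come from WHOIS lookups on the bot IPs.
    BLOCKED = [ip_network("203.0.113.0/24"), ip_network("2001:db8::/32")]

    def is_blocked(ip: str) -> bool:
        addr = ip_address(ip)
        return any(addr in net for net in BLOCKED)

    print(is_blocked("203.0.113.7"))  # True -> drop at the firewall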
I have lost all faith in scrapers. I've written my own scrapers too, but almost all of the scrapers I've come across are nefarious. Some scour the internet searching for personal data to sell, some look for websites to send hack attempts at to brute force bug bounty programs, others are just scraping for more AI content. Until the scraping industry starts behaving, I can't feel bad for people blocking these things even if they hurt small search engines.
The availability part, on the other hand, is maybe not so business-critical for many, but for targeted long-term attacks it probably is.
So I think for some websites, especially smaller ones, it's totally feasible to not use Cloudflare, but it involves planning the hosting really carefully.
To make matters worse, I suspect that not even a splinternet can save it. It needs a new foundation, preferably one that wasn't largely designed before security was a thing.
Federation is probably a good start, but it should be federated well below the application layer.
I use up-to-date Firefox, and was blocked from using the company GitLab for months on end simply because I disabled some useless new web API in about:config way before CF started silently requiring it, without any feature testing or meaningful error message for the user. Just a redirect loop. The GitLab support forum was completely useless for this, just blaming the user.
So we dropped gitlab at the company and went with basic git over https hosting + cgit, rather than pay some company that will happily block us via some user hostile intermediary without any resolution. I figured out what was "wrong" (lack of feature testing for web API features CF uses, and lack of meaningful error message feedback to the user) after the move.
The solution is good security-- Cloudflare only cuts down on the noise. I'm looking at junk requests and hacking attempts flow through to my sites as we speak.
Turnstile is the in-page captcha option, which you're right, does affect page load. But they force a defer on the loading of that JS as best they can.
Also, turnstile is a Proof of Work check, and is meant to slow down & verify would-be attack vectors. Turnstile should only be used on things like Login, email change, "place order", etc.
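For what a proof-of-work check means in the simplest case, here is a hashcash-style sketch (a generic illustration, not Turnstile's actual mechanism, which a later comment disputes being pure PoW): finding the nonce is expensive for the client, verifying it is a single hash for the server.

    import hashlib, itertools

    def solve(challenge: str, bits: int = 20) -> int:
        """Expensive for the client: search for a nonce hashing below the target."""
        target = 1 << (256 - bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge: str, nonce: int, bits: int = 20) -> bool:
        """Cheap for the server: one hash and a comparison."""
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))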
Because in the end, the result is connection denial. I don't want to connect to Cloudflare, I want to connect to the website.
I read that part. They still do not indicate what may happen, or what their responsibility is, if any, for visitors with non-major browsers.
Not claiming this is "on purpose" or a conspiracy, but if these legitimate protests keep getting ignored then yes, it becomes discrimination. If they can't be bothered, they should clearly state that their tool is only compatible with X browsers. Who is to blame for "an incorrectly received challenge"? The website? The user who chooses a secure, but "wrong" browser not on their whitelist?
Cloudflare is there for security, not "major browser approval pass". They have the resources to increase response times, provide better support and deal with these incompatibility issues. But do they want to? Until now, they did.
I incorrectly interpreted your comment as one of the multitude of comments claiming nefarious reasons for proxying without any thought for how an alternative would work.
Magic Transit is interesting - hard to imagine how it would scale down to a small site though, they apparently advertise whole prefixes over BGP, and most sites don't even have a dedicated IP, let alone a whole /24 to throw around.
Countries, whether it be Ukraine or Taiwan, can't risk other countries harvesting their social media platforms for the mother of all purges. I never assume that anything that happened historically can never happen again - no Polish Jew would have survived the Nazis with this kind of information theft. Add AI into the mix, and wiping out any population is as easy as baking pie.
Countries are tired of bad behavior. Just ask my grandmother, who has had her designs stolen and mass produced from China. Not just companies - many free and open source companies cannot survive with such reckless competition. Can Prusa survive a world where China takes, but never gives? How many grandmothers does it take being scammed? How many educational systems containing data on minors need to be stolen? The MPAA and RIAA has been whining for years about the copyright problem, and while we laugh at them, never underestimate them. The list goes on and on.
Startups are tired of paying Cloudflare or AWS protection money, and trying to evade the endless sea of SEO spam. How can a startup compete with Google with so much trash and no recourse? Who can build a new web browser, and be widely accepted as being a friendly visitor? Who can build a new social media platform, without the experience and scale to know who is friend or foe?
Now we have AI, gasoline and soon to be dynamite on the fire. For the first time ever, a malicious country can VPN into the internet of a friendly nation, track down all critics on their social media, and destroy their lives in a real world attack (physical or virtual). We are only beginning to see this in Ukraine - are we delusional enough to believe that the world is past warfare? For the first time, anyone in the world could make nudes of women and share them online, from a location where they'll probably never be taken down. If a Russian company offered nudes as a service to American customers with cryptocurrency payments and a slick website that went viral, do you think tolerance is a winning political position?
The only issues I had to deal with were when someone found some slow endpoint and managed to overload the server with it. My go-to approach is to optimize it to a max <10-20ms response time, while blocking the source of the traffic if it keeps being too annoying after optimization.
And this happened like 2-3 times over 20 years of hosting the eshop.
Much better than exposing users to CF or likes of it.
You're right that Cloudflare has written many high-quality blog posts on the workings of the Internet, and the inner workings of Cloudflare. Amusingly, they even at times criticize HTTPS interception (not their use of it) and offer a tool to detect it: https://blog.cloudflare.com/monsters-in-the-middleboxes/
I still believe that this information should be displayed to the relevant user configuring the service.
There are many types of proxies, and MITM decryption is not an inherent part of a proxy. The linked page from the Admin Panel is https://developers.cloudflare.com/dns/manage-dns-records/ref... and links to pages like "How Cloudflare works" (https://developers.cloudflare.com/fundamentals/concepts/how-...) which still do not mention HTTPS interception. It sounds like you found a link I didn't. In the past someone argued that I should've looked here: https://developers.cloudflare.com/data-localization/faq/#are...
But if you look closer, those are docs for the Data Localization Suite, an Enterprise-only paid addon.
I think a decent idea is, we need to bring personal accountability back into the equation. That's how an open-trust network works, and we know that, because that's how society works. You don't "trust" that someone walking by your car won't take a shit in your open window: they could. But there are consequences for that. We need rock solid data security policies that apply to anyone who does business, hosts content, handles user data online, and people need to use their actual names, actual addresses, actual phone numbers, etc. etc. in order to interact with it. I get that there are many boons to be had with the anonymity the Internet offers, but it also enables all of the horseshit we all hate. A spammer can spam explicitly because their ISP doesn't care that they do, email servers don't have their actual information, and in the odd event they are caught and are penalized, it's fucking trivial to circumvent it. Buy a new AWS instance, run a script to setup your spam box, upload your database of potential victims, and boom, you're off.
A lot of tech is already drifting this way. What is HTTPS at its core if not a way to verify you are visiting the real Chase.com? How many social networking sites now demand all kinds of information, up to and including a photo of your driver's license? Why are we basically forbidden now by good practice from opening links in texts and emails? Because too many people online are anonymous, can't be trusted, and are acting maliciously. Imagine how much BETTER the Internet would be if, when you fucked around, you could be banned entirely? No more ban evasion, ever.
I get that this is a controversial opinion, but fundamentally, I don't think the Internet can function for much longer while being this free. It's too free, and we have too many opportunistic assholes in it for it to remain so.
For example, even Cloudflare hasn't configured their official blog's RSS feed properly. My feed reader (running in a DigitalOcean datacenter) hasn't been able to access it since 2021 (403 every time, even though I backed off to checking weekly). This is a cacheable endpoint with public data intended for robots. If they can't configure their own product correctly for their official blog, how can they expect other sites to?
Fundamentally it's adversarial, so expecting a single simple concept to properly cover even half of the problematic requests is unrealistic.
The problem with this setup is that it sacrifices both security (because it needs to keep false positives at a minimum, even if that means allowing some known bots) and user experience (because situations like the one you have will occur from time to time). When you enable a challenge page on CF, it works as-is and you have no control over it; the most you can do is skip the page for the browsers getting false positives.
If CF gave site owners a clearer view of what they are blocking and let them choose which rules to enforce (within the challenge page), it would be much easier to simply say that the customer running CF doesn't want you visiting their page/doesn't care about a few false positives.
Maybe there should be some better defaults if they can't even use their own product correctly.
BTW, a workaround for this is to proxy the feed via https://feedburner.google.com/ which seems to be whitelisted by Cloudflare.
this is wrong.
if someone can use your site they can use stolen cards, and bots doing this will not be stopped by them.
cloudflare only raises the cost of doing it; it may make scraping a million product pages unprofitable, but that doesn't apply to cc fraud yet.
I used to work at a company that did auto inspections (e.g. if you turned a lease in, did a trade-in on a used car, private party, etc.).
Because of that, we had a server that contained 'condition reports', as well as the images that went through those condition reports.
Mind you, sometimes condition reports had to be revised. Maybe a photo was bad, maybe the photos were in the wrong order, etc.
It was a perfect storm:
- The Image caching was all inmem
- If an image didn't exist, the server would error with a 500
- IIS was set up such that too many errors caused a recycle
- Some scraper was working off a dataset (that ironically was 'corrected' in an hour or so) but contained an image that did not exist.
- The scraper, instead of eventually 'moving on' would keep retrying the URL.
It was the only time that org had an 'anyone who thinks they can help solve please attend' meeting at the IT level.
> and you would not believe the extent that grocers go to to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day.
Very true. I'm reminded of Oren Eini's tale of building an app to compare grocery prices in Israel, where the government apparently mandated supermarket chains to publish prices [0]. On top of even the government mandate for data sharing appearing to hit the wrong over/under for formatting, there's the constant issue of 'incomparabilities'.
And it's weird, because it immediately triggered memories of how, 20-ish years ago, one of the most accessible Best Buys was across the street from a Circuit City, but good luck price matching, because the stores all happened to sell barely different laptops/desktops (e.g. up the storage but use a lower-grade CPU) so that nobody really had to price match.
[0] - https://ayende.com/blog/170978/the-business-process-of-compa...
So instead of banning America, report the IP addresses to their American hosts for spam and malicious intent. If the host refuses to do anything, report it to law enforcement. If law enforcement doesn't do anything... then you're proving my point.
Some things I have found helpful when working with GitLab are adding ".patch" to the end of commit URLs, and changing "blob" to "raw" in file URLs. (This works on GitHub as well.) It is also possible to use the API, and sometimes the data can be found within the HTML the server sends you without needing any additional requests (this seems to work more reliably on GitHub than on GitLab).
You could also clone the repository into your own computer in order to see the files (and then use the git command line to send any changes you make to the server), but that does not include issue tracker etc, and you might not want all of the files anyways, if the repository has a lot of files.
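A tiny sketch of those URL rewrites (plain string transforms; adjust to the instance's URL layout):

    def to_raw(blob_url: str) -> str:
        """File view -> raw file contents (works on GitHub and GitLab)."""
        return blob_url.replace("/blob/", "/raw/", 1)

    def to_patch(commit_url: str) -> str:
        """Commit page -> plain-text patch."""
        return commit_url + ".patch"

    print(to_raw("https://gitlab.com/user/repo/-/blob/main/README.md"))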
The fact that it has regressed and been repeated so many times now clearly indicates a trend and pattern of abuse with malicious intent. Change management isn't hard, unit tests are not hard; consistently breaking only certain browsers seems targeted.
Notably there are mainstream browsers that have this problem as well. Mozilla Firefox for example. Their Challenge has broken large swathes of the web many times to the point where companies hosting apps and websites have simply said they will not support any browser other than Google Chrome/Edge.
Anytime the market gets sieved and pushed to only one single solution, it's because someone is doing it for their own benefit, to everyone else's loss.
Cloudflare should be broken up in antitrust as a monopoly, as should Google.
A cheeky response is "their profit margins", but I don't think that's quite right considering that their earnings per share is $-0.28.
I've not looked into Cloudflare much - I've never needed their services - so I'm not totally sure what all their revenue streams are. I have heard that small websites are not paying much, if anything at all [1]. With that preface out of the way: I think we see challenges on sites that perhaps don't need them as a form of advertising, to ensure that their name is ever-present. Maybe they don't need this form of advertising, or maybe they do.
It's gonna get even worse. Walmart & Kroger are implementing digital price tags, so whatever you see on the website will probably (purposefully?) be out of date by the time you get to the store.
Stores don't want you to compare.
I would like to know if there are alternatives somewhere close to the same cost, where I don't need to use Cloudflare. I don't enjoy annoying customers, or even dealing with sales and marketing, but I have built lots of software where I get to control the technology, and can get a new website up and running in 3 hours, with a ton of built-in functionality. I've spent about 12 years reducing the amount of memory the Umbraco CMS uses, compared to normal installs, and I love that aspect of my career. If I could get my clients to pay more and not use Cloudflare, I would happily go that route, believe me!
I'd presumed it was just the VM they're heuristically detecting, but it sounds like some are experiencing issues on Linux in general.
If you are writing some kind of malicious crawler that doesn't care about rate-limiting, and wants to scan as many sites as possible for the most vulnerable to get a list together to hack, you will scan robots.txt because that is the file that tells robots NOT to index these pages. I never use a robots.txt for some kind of security through obscurity. I've only ever bothered with robots.txt to make SEO easier when you can control a virtual subdirectory of a site, to block things like repeated content with alternative layouts (to avoid duplicate content issues), or to get a section of a website to drop out of SERPs for discontinued sections of a site.
This is not relevant because Cloudflare will cache it so it never hits your origin. Unless they are adding random URL parameters (which you can teach Cloudflare to ignore but I don't think that should be a default configuration).
Again, I think you are correct with more sane defaults, but I don't know if you've ever dealt with a network admin or web administrator that hasn't dealt with server-side caching vs. browser caching, but it most definitely would end up with Cloudflare losing sales because people misunderstood how things work. Maybe I'm jaded, at 45, but I feel like most people don't even know to look at headers by default when they feel they hit a caching issue. I don't think it's based on age, I think it's based on being interested in the technology and wanting to learn all about it. Mostly developers that got into it for the love of technology, versus those that got into it because it was high paying and they understood Excel, or learned to build a simple website early in life, so everyone told them to get into software.
Half these imbeciles don't even change the user-agent from the scraper they downloaded off GitHub.
I employ lots of filtering so it's possible the data is skewed towards those that sneak through the sieve - but they've already been caught, so it's meaningless.
The attacker was using residential proxies and making about 8 requests before cycling to a new IP.
Challenges work much better, since they use cookies or other metadata to establish that a client is trusted, then let requests pass. This stops bad clients at the first request, but you need something more sophisticated than a webserver with basic rate limiting.
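A bare-bones sketch of that establish-trust-then-pass pattern (a hand-rolled HMAC token; obviously not Cloudflare's actual clearance-cookie internals): the first request that passes the challenge gets a signed, expiring token, and later requests presenting it skip the challenge - which is exactly what a proxy cycling to a fresh IP every 8 requests never accumulates.

    import hashlib, hmac, time

    SECRET = b"server-side signing key"  # hypothetical

    def issue_clearance(client_id: str, ttl: int = 1800) -> str:
        """Handed out once a client passes the challenge."""
        expiry = int(time.time()) + ttl
        msg = f"{client_id}:{expiry}"
        sig = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
        return f"{msg}:{sig}"

    def is_trusted(token: str, client_id: str) -> bool:
        """Checked per request: a valid, unexpired token means no challenge."""
        try:
            cid, expiry, sig = token.rsplit(":", 2)
        except ValueError:
            return False
        expected = hmac.new(SECRET, f"{cid}:{expiry}".encode(),
                            hashlib.sha256).hexdigest()
        return (hmac.compare_digest(sig, expected)
                and cid == client_id and int(expiry) > time.time())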
Even worse, I get the blanket "You have been blocked" message when I try to manually open the URL and solve the captcha.
[1] https://digitalapi.auspost.com.au/shipments-gateway/v1/watch...
And yea, I imagine dynamic pricing will make things even more complicated.
That being said, that's why this feature isn't built into the billion shopping list apps that are out there. Because it's a pain.
I scrape hundreds of cloudflare protected sites every 15 minutes, without ever having any issues, using a simple headless browser and mobile connection, meanwhile real users get interstitial pages.
It's almost like Cloudflare is deliberately showing the challenge to real users just to show that they exist and are doing "something".
So it's OK for them to do shitty things without explaining themselves because they "have access to more data than we do"? Big companies can be mysterious and non-transparent because they're big?
What a take!
It is sad that in this day and age, when you buy a car you need to sign a legal disclaimer that you understand it requires gasoline to run.
It stops "card testing" where someone has bought or stolen a large number of cards and need verify which are still good. The usual technique is to cycle through all the cards on a smaller site selling something cheap (a $3 ebook for example). The problem is that the high volume of fraud in a short time span will often get the merchant account or payment gateway account shut down, cutting off legitimate sales.
As a consumer, you should also be suspicious of a mysterious low value charge on your card because it could be the prelude to much larger charges.
I'm sure if this becomes more of an issue the market will provide for that.
Every other attempt at DDoS has been ineffective, has been form abuse and credential stuffing, has been generally amateurish enough to not take anything down.
I host (web, email, shells) lots of people including kids (young adults) who're learning about the Internet, about security, et cetera, who do dumb things like talk shit on irc. You'd think I'd've had more DDoS attacks than that rather famous one.
So when people assert with confidence that the Internet would fall over if companies like Cloudflare weren't there to "protect" them, I have to wonder how Cloudflare marketed so well that these people believe this BS with no experience. Sure, it could be something else, like someone running Wordpress with a default admin URL left open who makes a huge deal about how they're getting "hacked", but that wouldn't explain all the Cloudflare apologists.
Cloudflare wants to be a monopoly. They've shown they have no care in the world for marginalized people, whether they're people who don't live in a western country or people who simply prefer to not run mainstream OSes and browsers. They protect scammers because they make money from scammers. So why would people want to use them? That's a very good question.
This is all as it was intended.
First off.. Gee, I wish we had all come together about a decade ago or so and found solutions for what was plainly coming and spelled out by myself and others.
Second, before it happens.. Pale Moon is not "old and insecure". It is being mismanaged and has no vision or prospects for future expansion.. It is just whatever XUL they can keep working while chugging away at the modern web features..
Pale Moon is often TOO security patched, btw; these patches have been regularly disclosed and specially noted in the release notes since I convinced Moonchild he should do that, precisely because of the kind of "old and insecure" falsehood above.
Moonchild's issue as a developer is he will always choose the seemingly simplest path of least resistance and will blindly merge patches without actually testing them. Many security patches are only security patches and not just.. patches because Mozilla redefined the level of security they want their codebase to provide.. But all known Mozilla vulnerabilities, and many that would only become vulnerable if surrounding code is changed, are patched.. Pale Moon and UXP have become more secure over time, and that is an objective fact when you consider the nature of privileged access within a XUL platform, which has its own safeguards as well that persist into Firefox today, though less encountered.
Now no one hates that furry bastard more than me (and I challenge you to try) but I will never call out good work as anything other than good work. Besides, there are a MILLION other plainly visible faults with the Pale Moon project and its personnel and my past behavior without having to make stuff up or perpetuate a false mantra like "old and insecure".
Finally, isn't Cloudflare being very unfair to every project save the modern Firefox rebuilds listed on thereisonlyxul.org? Like SeaMonkey? Why does SeaMonkey deserve any hate from anyone.. or systematic discrimination.. What have they ever done but try and have an internet application suite.. Why are they old and insecure despite being patched and progressing a patch queue for Mozilla patches, just landed selectively to preserve the bulk of the XUL functionality its users adore?
In conclusion, what will be the final cost, and how many will burn for trying to go against it.. I know my fate for trying.. how many will join me knowing that?
I tried to check the forum post and found out that I was blocked by https://forum.palemoon.org , e.g., https://offline.palemoon.org/blocked/index.html . Don't know why; I haven't visited this site before.
https://www.palemoon.org works though.
Cloudflare not supporting Pale Moon has no impact on the rest of us. Matter of fact today is the first time I'm hearing of this browser I will never end up using.
Also, Turnstile is definitely not a simple proof of work check; it performs browser fingerprinting and checks for web APIs. You can easily check this by changing your browser's user-agent at the header level while leaving it as-is at the JavaScript level; this puts Turnstile into an infinite loop.
Looks like there’s a plugin for that https://chromewebstore.google.com/detail/user-agent-switcher...
Again, there are many forms of proxies and DDOS protection that do not rely on TLS interception, just as there are cars that do not rely on gasoline. Cloudflare has many less technical home users who use their service to avoid sharing their IP online, avoid DDOS, or access home resources. I do not think the average Internet user is familiar with these concepts. There are many examples of surprised users on subreddits like /r/homelab.
Also, how would their certificates work if they don’t see content?
This approach clearly blocks bots so it's not enough to say "just don't ever do things which have false positives" and it's a bit silly to say "just don't ever do the things which have false positives, but for my specific false positives only - leave the other methods please!"
I'm not sure this is a good example. I believe a majority of Polish Jewish survivors were those who fled into parts of the Soviet Union not occupied by the Nazis (some were sent to gulags, but this still gave a much better chance of survival than staying in Poland). Another large portion were in concentration camps and hadn't been killed yet. And I believe 60,000 or fewer are estimated to have hidden in Poland through the war. It's unlikely many remained in their pre-war identities, simply concealed their Jewishness, and managed to survive.
well, for starters, if you're using cloudflare to block otherwise benign traffic, just because you're worried about some made... up....
> On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts everyday, people attempting credit card fraud, etc.
well damn, if you're using it because otherwise you'd be exposing your users to active credit card fraud... I guess the original suggestion to only ban traffic once you find it to be abusive, and then only by subnet, doesn't really apply for you.
I wanna suggest using this as an excuse to learn how not to be a twat (the direction cf is moving towards more and more), where for most sites 20% of the work will get you 80% of the results... but dealing with cc fraud, your adversaries are already on the more advanced side, and that becomes a lot harder to prevent... rather than catch and stop after the fact.
Balancing the pervasive fear mongering with sensible rules is hard. Not because it's actually hard, but because that's the point of the FUD. To create the perception of a problem where there isn't one. With a few exceptions, a WAF doesn't provide meaningful benefits. It only serves to lower the number of log entries, it rarely ever reduces the actual risk.
Weren't they useful the last time around, when 'literally Hitler' totally murdered freedom of speech until Biden the hero restored it?
Not sure if this problem is common, but I would be pretty upset if I implemented Cloudflare and it started to inadvertently hurt my sales figures. I would hope the cost to retailers is trivial in this case; I guess the upside of blocking automated traffic can be quite great.
Just checked again and I'm still blocked on the website. Hopefully this kind of thing gets sorted out.
Anyway, no, this guessing game isn't the solution to stolen bank details, the solution is for the payment provider to authenticate the account holder beyond merely entering a public number, especially if they suddenly see a flood of transactions from this one merchant as you describe. They can decide to ask for a second factor: send the person an SMS/email, ask to generate an authenticator code, whatever it is they've got on file beyond your card/account number. Anything else is just guesswork
I host many webpages and this is exactly it. Anyone is welcome to use the websites I host. There is no CDN, your TLS session terminates at the endpoint (end to end encryption). May be a bit slower for the pages having static assets if you're coming from outside of Europe, but the pages are light anyway (no 2 MB JavaScript blobs)
Then self host from your connection at home, don't pay for the VPS :). That's what I've been doing for over a decade now and still never saw a (D)DoS attack
50 mbps has been enough to host various websites, including one site that allows several gigabytes of file upload unauthenticated for most of the time that I self host. Must say that 100 mbps is nicer though, even if not strictly necessary. Well, more is always nicer but returns really diminish after 100 (in 2025, for my use case). Probably it's different if you host videos, a Tor relay, etc. I'm just talking normal websites
So how is Cloudflare supposed to distinguish legitimate new visitors from new attack IPs if you can't?
Because if the answer were "they can't", it would match my experience as a Cloudflare user perfectly.
Needless to say I want to throttle every CF employee for screwing with my efforts to further enrich my life through legal means.
It works so well and is very secure. You get to the checkout page on a website, click a link. If you’re on your phone, it hotlinks to open your banking app. If you’re on desktop, it shows a QR code which does the same.
When your bank app opens, it says “would you like to make this €28 payment to Business X?” And you click either yes or no on the app. You never even need to enter a card in the website!
You can also send money to other people instantly the same way, so it’s perfect for something like buying a used item from someone else.
Plus the whole IBAN system which makes it all possible!
That's a weird question to ask someone who went out of their way to describe a non-caching situation.
> Also, how would their certificates work if they don’t see content?
Can you be more specific? I'm not sure which feature you're asking about or how it uses certificates.
But the answer is likely "that feature isn't necessary to provide DDOS protection".
> Cloudflare wants to be a monopoly. They've shown they have no care in the world for marginalized people, whether they're people who don't live in a western country or people who simply prefer to not run mainstream OSes and browsers.
The internet is so much better like this! There is a 2010 lightweight mobile version of Google, and m.youtube with an obviously cleaner and better UI and not a single ad (apparently it's not worth showing you ads if you still appear to be using an iPhone 6).
No VPN, just good privacy settings in my case.
Specifically, I use fail2ban to count the 404s and ban the IP temporarily when certain threshold is exceeded in a given time frame. Every time I check fail2ban stats it has hundreds of IPs blocked.
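The logic fail2ban applies there is essentially a sliding-window counter; a minimal sketch of the same idea (thresholds are illustrative, not my actual jail settings):

    import time
    from collections import defaultdict, deque

    WINDOW = 600     # seconds to look back
    THRESHOLD = 20   # 404s tolerated per window
    BAN_TIME = 3600  # temporary ban length

    recent_404s = defaultdict(deque)
    banned_until = {}

    def record_404(ip: str) -> None:
        now = time.time()
        hits = recent_404s[ip]
        hits.append(now)
        while hits and hits[0] < now - WINDOW:  # drop hits outside the window
            hits.popleft()
        if len(hits) > THRESHOLD:
            banned_until[ip] = now + BAN_TIME   # fail2ban inserts a firewall rule here

    def is_banned(ip: str) -> bool:
        return banned_until.get(ip, 0) > time.time()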
It seems excessive to not allow at least a single query in this situation.
I had the same with a newspaper which I subscribe to. They shouldn't be tracking me, and don't show adverts to subscribers. In this case I wrote to their support person, who told me not to block the tracking.
That blocks all VPN users, and many countries currently have some kind of censorship, so please don't do that. I have used a personal VPN for over 5 years, and it's annoying.
I understand the other side, and captchas/PoW captchas/additional checks are okay. But give people a choice to be private/non-censorable.
Enabling/disabling a VPN each minute to access the non-censored local site which blocks datacenters IPs, then bringing it back again for the general surfing is a bit of a hell.
BTW, how should they report it, if they are a small business or a private individual without lawyers? Do US police have some kind of online hotline for foreigners to report US criminals, or smth?
The mobile hotspot thing... I have to do that to do anything involving Okta.
For some frustrating reason my IPv4 address, which I pay extra to my ISP to have, has been blocklisted by Okta. My best guess is that a login flow failure in one of the apps work uses triggered my address getting banned indefinitely. My work's Okta admins don't really understand how to unblock me on their Okta tenancy, and Okta support just directs me back to my local admins (even though it's any Okta-using org I'm banned from logging into).
I get that misuse/abuse detection has to do its thing, but it's so frustrating when there's basically zero way for a legitimate user at an IP to undo a ban. My only recourse is to do all my Okta use from another IP... If I was a legit spammer I wouldn't think twice about switching to another IP from my big pool, probably.
There's also another reason: Cloudflare is under the CLOUD Act, so it can't be trusted to touch the PII of EU citizens for legal reasons, or of anyone for moral reasons.
VPN doesn't matter; I probably share an IP with someone "flagged" via my ISP.
Every site, that is, except their Cloudflare dashboard.
Bad business, guys. You gotta find another way. Blocking IP addresses is o-ver.
This is going to lead to a two-class internet where new technologies will not emerge and big players will win, because the gate is so absurdly high and random that people stop inventing.
Somehow, Safari passes it the first time. WTF?
What I mean is that it's better to give VPN users the choice to solve captchas instead of being banned completely.
Just turning off some features gets them just about there. It wouldn't take rearchitecting things. Those features being bundled by default means very little for the difficulty.
As for your case, I wonder if Okta is relying on an external service like IPQS to get a score, that could explain why they don't really have any control over it.
Many of the AI scrapers don't identify themselves; they live on AWS, Azure, Alibaba Cloud, and Tencent Cloud, so you can't really block them, and rate limiting also has limited effect as they just jump to new IPs. As a site owner, you can't really contact AWS and ask them to terminate their customer's service so you can recover.
In other words, knowing who someone is isn't strictly necessary, provided they have "skin in the game" to encourage proper behavior.
YouTube is a perfect example. Using iCloud Private Relay can now frequently label you as a bot, which stops you from watching videos until you login.
I noticed another platform (wallapop, a kind of ebay/craigslist here in Spain) that does the same. It never works well in a browser, even in chrome. I think they're just trying to bully their users to their app, which has 30+ trackers in it.
Bandwidth hasn't been a limiting factor for years for me.
But generating dynamic pages can bring just enough load for it to get painful. Just this week I had to blacklist Meta's ridiculously overactive bot sending me more requests per second than all my real users do in an hour. Meta and ClaudeBot have been causing intermittent overloads for weeks now.
They now get 403s because I'm done trying to slow them down.
When the government really cares, it can put all its resources to solve any particular problem. Though obviously that comes at the cost of reassigning resources from other tasks. Sadly it's impossible to assign all resources to solve every problem all at once.
Interestingly enough I checked on another non-Private Relay device (it worked), disabled Private Relay, refreshed the page, which still blocked me, and it resulted in the ban instantly extending to my other non-Private Relay devices.
I presume some fingerprinting/evercookie was in place which led to a flagging/ban extension to my home IP.
I have no idea why.
I don't even know which attack vectors an integrity check for a browser could help against. Infected clients? In any case, it is evidently not effective.
Take that story for instance. Here's how that goes in the physical world, just to show how unbelievably ridiculous it is.
So you didn't get the job? What's your next step?
I'll stop by their office and keep people from entering the front doors by running around in front of them. That'll show those bastards.
I get that there might be some feeling of righteous justice that comes from removing these entries from your Nginx logs, but it also seems like there's a lot of self-induced stress that comes from monitoring failed Nginx and ssh logs.
It isn't like half the Pale Moon userbase ever wanted me there to begin with, despite my giving them not just an Add-ons Site, a developer wiki/doc site, and the Pale Moon for Linux website, but a fully functional XUL platform that survives my involvement, and a Pale Moon that is STILL Pale Moon when Moonchild, as early as Pale Moon 27, was going to go the Cyberfox route of Australis with CTR. So context of a decade of selfless unpaid work of 10-16 hour days every day, forum drama, bad decisions and behavior on my part in response to the response to my selfless work, and relentless attacks such as these no matter if I pop my head out or not?
If you want the full story of the end look it up on Kiwifarms (This was all before them being removed from clearnet so before the stuff you are thinking of) where I was maneuvered towards by 4chan anon people because that was the ONLY venue I had afterwards. For some reason they engaged then moved on.. Left me intact. I don't know why. But it is all there.. A cleaner version is codified in Interlink release notes on the internet archive. I encourage you to learn what actually happened and when and then make your judgement.. If you do that I will accept it even if I disagree with it because I disagree with a lot about my self these days.
Doesn't matter anyway. There are much larger issues now in the world than years-old drama that still, in the end.. created the Unified XUL Platform (Take 2, the one that worked) and helps give hope to those otherwise subsumed by the monoculture. Not that Pale Moon culture is much better, but the fact it persists means more than one thing can. I can do better.. and so can we all.. Let's do that while we still can.
-nsITobin
The remaining percentage is still annoying, as it happens from the phone.
Maybe 10% of the time I make a purchase online, it shows me a screen where it says it's waiting for my bank to verify, I'll have to input a code or accept a notification or something.
A solid half the time it fails. Either the site decides the transaction was rejected before I even get a chance to respond (within seconds), or I just don't get any notification or code or anything, or I do authorize it and it still gets rejected.
What I'd worry about is Cloudflare using their knowledge of their VPN clients to allow services behind their attack protection to treat those clients better, because maybe they're leaking client info to the protected services.
Not that I think Cloudflare/Apple/etc. are supremely noble/honest/moral, or that it's good that semi-anonymous connections are treated so badly by default; this juxtaposition just doesn't seem like a problem to me.
EDIT: OK, I back off of this position somewhat. Apple's marketing of iCloud Relay might allow users to believe it's more prestigious and reputable than a VPN/Tor. They do have fine print explaining that you might be treated badly by the remote services, but it's, you know, fine print, and Apple knows that they have a reputation for class and legitimacy.
This cited version is the revised version. Moonchild has revised his version of events multiple times in the nine months after. Pfft, that isn't even the latest version lol. There are many now-hidden threads on the Pale Moon forum that also showed events as they happened or as told when they happened.. All gone now.. Some of them contradict the later retellings.. I simply refer to events as they happened at the time and the Interlink release notes summary thereof.
Can't wait to see if it changes anything...
I would prefer the web was different, but it is not.
Yes, it is: both your TLS and TCP stacks are distinctive enough that such spoofing can be detected. And there are a lot of other things that can be fingerprinted as well.
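For the curious, the TLS half of this is essentially what JA3-style fingerprinting does: hash the shape of the ClientHello. A minimal sketch in TypeScript, assuming the handshake has already been parsed into numeric fields (the `ClientHello` interface here is illustrative, not a real library's):

```ts
import { createHash } from "node:crypto";

// Illustrative shape of a parsed ClientHello; real systems extract
// these fields from the raw handshake bytes on the wire.
interface ClientHello {
  version: number;        // e.g. 771 = TLS 1.2
  ciphers: number[];      // cipher suite IDs, in the order offered
  extensions: number[];   // extension IDs, in the order sent
  curves: number[];       // supported groups / elliptic curves
  pointFormats: number[]; // EC point formats
}

// JA3 joins the fields with commas (values dash-separated) and takes
// the MD5. Two clients sending the same bytes get the same hash, no
// matter what their User-Agent header claims.
function ja3(hello: ClientHello): string {
  const raw = [
    String(hello.version),
    hello.ciphers.join("-"),
    hello.extensions.join("-"),
    hello.curves.join("-"),
    hello.pointFormats.join("-"),
  ].join(",");
  return createHash("md5").update(raw).digest("hex");
}
```

A browser spoofing Chrome's User-Agent still offers its own cipher suites in its own order, so the hash gives it away; TCP-level details (window sizes, option ordering) leak the underlying OS the same way.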
I need to disable it for one of my internal networks (because I have DNS overrides that go to 192.168.0.x), or I'd wish they'd just make it mandatory for iPhones and put an end to such shenanigans.
Apple could make it a bit more configurable for power users, and then flip the “always on” nuclear option switch.
Either that, or they could add a “workaround oppressive regimes” toggle that’d probably be disabled in China, but hey, I’m in the US, so whatever.
Edit: I also agree that blocking / geolocating IP addresses is a big anti-pattern these days. Many ISPs use CGNAT. For instance, all Starlink traffic from the southern half of the West Coast appears to come from LA.
As a result, some apps have started hell-banning my phone every time I drive to work, because they see me teleport hundreds of miles in 10 minutes every morning. (And both of my two IPs probably have hundreds of concurrent users at any given time. I'm sure some of them are doing something naughty.)
Cloudflare cuts down on the noise, but it also does the work of blocking scrapers and people who re-sell your site wholesale; and cutting down on the noise also means cutting down on the cost of network requests.
It can also help where security is lax. You should have measures against credential stuffing, but if you don't, Cloudflare might prevent some of your users from being hacked. Which isn't good enough, but is better than no mitigation at all.
I don't use Cloudflare personally, but I won't dismiss it wholesale. I understand why people use it.
a professional would explain how the vendor is being lazy and making a mistake there because they don't understand your business.
depending on the flavor of security professional (hacker) they might also subtly suggest that this vendor is dumb and should be embarrassed they've made this mistake, thus creating the implication that if you still want to block these users you would also have to be an idiot
under no circumstance will I ever allow anyone to get the mistaken impression that some vendor understands my job better than I do. As a "security professional" it's literally your job to identify hostile traffic better than a vendor could.
no, it's still the front line. And likely always will be. It's the only client identifier bots can't lie about. (or nearly the only)
At $OLDJOB, ASN reputation was the single best predictor of traffic hostility. We were usually smart enough to know which ASNs we could, or couldn't, block outright. But it's an insane take to say network-based blocking is over... especially on a thread about some vendor blocking benign users because of their user-agent.
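For illustration, a minimal sketch of what per-ASN reputation scoring can look like, assuming client IPs are already mapped to ASNs (e.g. via a GeoIP database); every threshold here is invented:

```ts
// Running tally of how much traffic from each ASN was later judged hostile.
interface AsnStats { total: number; flagged: number }
const stats = new Map<number, AsnStats>();

function record(asn: number, wasHostile: boolean): void {
  const s = stats.get(asn) ?? { total: 0, flagged: 0 };
  s.total += 1;
  if (wasHostile) s.flagged += 1;
  stats.set(asn, s);
}

// Fraction of this ASN's past traffic that was hostile; require a
// minimum sample so one bad request doesn't condemn a whole network.
function hostileScore(asn: number): number {
  const s = stats.get(asn);
  return s && s.total >= 100 ? s.flagged / s.total : 0;
}

// The "which we could or couldn't block outright" judgment: hard-block
// only the worst offenders, challenge the grey zone, and never ban the
// residential networks that real users live behind.
function decide(asn: number): "allow" | "challenge" | "block" {
  const score = hostileScore(asn);
  if (score > 0.9) return "block";
  if (score > 0.5) return "challenge";
  return "allow";
}
```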
Combing through searches and Reddit turns up only non-working Siri shortcuts that complain that the URL is not found.
That sounds like the solution, that sounds like good security.
But as a simple Pale Moon user, I can say you caused a big enough disruption to the project that even a user who doesn't really pay attention also noticed.
Now you're here again sidetracking the subject at hand with past dramallamas and seemingly getting your pantaloons moist at having stories besides your own also provided. No thanks.
The problem is that all these Cloudflare forensics-based throttling and blocking efforts don't hurt sales figures.
The number of legitimate users running Arc is a rounding error. Arc browser users often come to Cloudflare without third-party tracking and without cookies, which is weird and therefore suspicious - you look an awful lot like a freshly instantiated headless browser, in contrast to the vast majority of legitimate users who are carrying around a ton of tracking data. And by blocking cookies and ads, you wouldn't even be attributable in most of the stats if they did let you in.
It would be like kicking anyone wearing dark sunglasses out of a physical store: sure, burglars are likely to want to hide their eyes. Retail shrink is something like 1.5% of inventory, while blind users are <0.5% of the population. It would violate the ADA (and basic ethics) to bar all blind shoppers, so in the real world we've decided that it's not legal to discriminate on this basis even if it would be a net positive for your financials.
The web is a nearly unregulated open ocean, Cloudflare can effectively block anyone for any reason and they don't have much incentive to show compassion to legitimate users that end up as bycatch in their trawl nets.
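To make the "freshly instantiated headless browser" point above concrete, here is a hedged sketch of the kind of client-side signals such systems weigh — real ones use far more, mostly server-side, and nothing here is Cloudflare's actual check:

```ts
// Signals that make a visitor look "brand new": no cookies, no history,
// and the standard automation flag that headless drivers set.
function looksFreshlyInstantiated(): boolean {
  const noCookies = document.cookie === "";
  const shortHistory = history.length <= 1;
  const automationFlag = navigator.webdriver === true;
  return automationFlag || (noCookies && shortHistory);
}
```

An Arc user who blocks cookies and ads trips the same heuristics as a bot farm, which is exactly the bycatch problem.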
https://support.apple.com/en-us/102602
"As mentioned above, Cloudflare functions as a second relay in the iCloud Private Relay system. We’re well suited to the task — Cloudflare operates one of the largest, fastest networks in the world. Our infrastructure makes sure traffic reaches every network in the world quickly and reliably, no matter where in the world a user is connecting from."
"Bad guys" using Private Relay is one reason these IPs get blocked: one abuser can cause an entire block of people to get flagged as a single malicious user; and a big enough group of users can also look like a single malicious user to many blocklisting strategies, because they all share the same IP.
My experience from 15 years working in the hosting industry is that volumetric attacks are extremely rare; the customers that turn to Cloudflare as a solution are more often than not DDoS-ing themselves because of badly configured systems, and their junior developers lack any networking troubleshooting skills.
Consider messaging the owner to tell them you were trying to buy a product on their site and the site wouldn't let you. There's a chance that they'll care and be able to do something about it. But no chance if they don't know about the problem!
I have relatively fast internet, so maybe it's fast enough to absorb a lot of the problems, but I've had good enough luck with some basic Nginx settings and fail2ban.
[1] a small little mini gaming PC running NixOS.
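For anyone wanting to replicate that setup, here's a minimal sketch of the "basic Nginx settings and fail2ban" combination; the zone name and thresholds are illustrative, and the jail assumes fail2ban's stock nginx-limit-req filter:

```
# /etc/nginx/nginx.conf (inside the http{} block):
# throttle each client IP to 10 requests/second, allowing short bursts
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
    }
}
```

```
# /etc/fail2ban/jail.local: ban IPs that keep tripping the limiter
[nginx-limit-req]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 10
```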
So when I successfully solve a captcha, that doesn't make me 100% trusted not-a-scraping-bot. Instead it's an input into a statistical model, along with all the other identifying information they can hoover up, and that statistical model may still say no.
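As a toy illustration of "one input into a statistical model" — the signal names and weights below are invented, not anyone's real model:

```ts
// Invented signal set; a real system hoovers up far more.
interface Signals {
  captchaPassed: boolean;
  ipReputation: number;    // 0 = known-bad network .. 1 = pristine
  tlsMatchesUa: boolean;   // TLS fingerprint consistent with claimed UA
  hasPriorCookies: boolean;
}

// A solved captcha raises the score but guarantees nothing: a bad
// network plus a mismatched fingerprint can still sink the total.
function humanScore(s: Signals): number {
  let score = 0;
  if (s.captchaPassed) score += 0.35;
  score += 0.3 * s.ipReputation;
  if (s.tlsMatchesUa) score += 0.2;
  if (s.hasPriorCookies) score += 0.15;
  return score; // compared against some cutoff, e.g. allow if > 0.6
}
```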
I guess you get some security since each party that you transfer to must have their identity verified with a bank, so you could always get the police involved fairly easily.
The iDeal website page on security [2] is in Dutch, but it translates to roughly:
> Before you make a purchase, make sure that the webshop or business is a reliable party. For example, you can read experiences of other consumers about webshops on comparison sites. Or you can use a Google search to check what is said (in reviews) about a webshop on the internet. Also check the overview of the police with known rogue trading parties and the page check seller data. Before making a purchase, always use the following rule of thumb: if something is too good to be true, don't do it.
That's not the case; that UA is Chrome on macOS. The rest is backward-compatibility garbage.
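For reference, here's what a current Chrome-on-macOS UA looks like (the Chrome version varies; the WebKit token and even the macOS version are frozen at this point):

```
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
```

Only the platform token and the Chrome/... token carry real information; "Mozilla/5.0" is a Netscape-era compatibility relic, and "AppleWebKit/537.36", "(KHTML, like Gecko)" and "Safari/537.36" survive purely so legacy UA sniffers keep working.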
They really do, actually. The fine print on their page only states:
iCloud Private Relay is not available in all countries or regions. Without access to your IP address, some websites may require extra steps to sign in or access content.
And they have documentation linked on that same page for website owners: https://developer.apple.com/icloud/prepare-your-network-for-... which even goes a step further and encourages website operators to use Privacy Pass to let iCloud Private Relay users skip CAPTCHA challenges.
And really, this checks out, because iCloud Private Relay has a unique combination of circumstances compared to other commercial VPN users and Tor because:
* It isn't explicitly designed as a bypass tool of any form like commercial VPNs; your options for IP location are "same general location" or "same country and time zone" - content providers have no reason to block it for allowing out-of-region access
* Private Relay is backed by iCloud authentication of both the device and the user, so you can be beyond reasonably sure that traffic coming from an iCloud Private Relay endpoint is a paying iCloud+ user browsing with Safari on their iPhone/iPad/Mac.
* It is backed by one of the most recognizable brands in the world, with a user base that is more likely to send you nasty messages for blocking this service.
Of particular note on the last one: there's no "exception list" or anything available for end-users in Safari to bypass Private Relay for specific sites. My work one day decided to add the entire "Anonymizers" category to the blocklist in Okta, and I was suddenly unable to access any work applications on my iPhone, which is enrolled in our enterprise MDM solution, because I have Private Relay enabled. Enough people complained that the change was rolled back the same day it was implemented, because the solution was "turn it off" and that was unacceptable to many of our users.
Some people like me, who block the Service Worker API all the time (e.g. with https://chromewebstore.google.com/detail/no-service-worker/m...), are also affected.
Cloudflare staff have a real hateboner for non-mainstream browsers, and tell yourself what you like, but I still actually care about the well-being of this unmitigated disaster we call the Internet and the "Open Web".. Not to mention the lives and well-being of the users and contributors out there in the world..
Go be small minded on the Pale Moon forum, I'm busy.
It sometimes blocks me on fairly major browsers, such as Google Chrome (but on an older Ubuntu).
With that in mind, I'd love even the most fawning of the fanbois to come up with a rationalization for why, for a very common browser (Safari on modern macOS), most links through Cloudflare work, but trying to get past the are-you-human checkbox on Cloudflare's abuse reporting page doesn't work half the time.
Obviously that shouldn't be on an abuse reporting page at all, but Cloudflare has been making abuse reporting extremely difficult for years. Adding rate limiting (a human can easily hit it) and prove-you're-human verification on their abuse page just unambiguously proves this.
But in the US there are so many credit card providers, each one seems to do it differently, and the UX flows just break. And it seems difficult for a site to even test, and how will you even figure out if it's the provider or network or merchant or notification that's failing?
https://www.aclu.org/news/racial-justice/trumps-executive-or...
Right there in the executive orders. They're literally rolling back accessibility and making this a policy.
Read the EO yourself.
https://www.whitehouse.gov/presidential-actions/2025/01/endi...
Did you read the executive order? It's not the left calling it DEIA. It's Trump.
> Sec. 2. Implementation. (a) The Director of the Office of Management and Budget (OMB), assisted by the Attorney General and the Director of the Office of Personnel Management (OPM), shall coordinate the termination of all discriminatory programs, including illegal DEI and “diversity, equity, inclusion, and accessibility” (DEIA) mandates, policies, programs, preferences, and activities in the Federal Government, under whatever name they appear.
https://www.whitehouse.gov/presidential-actions/2025/01/endi...
It's unfortunate, yes, but that's what drives the threat signatures.
Demographic is important here. If I was running a shop that sold software for Linux users, sure. If I'm running a store that sells pretty much anything else? I'm not caring.
Which itself shifted from complaining that you aren't warned that coffee is hot, to - after implicitly agreeing that it should be obvious it's hot - complaining that they didn't have to make it as hot.
Great! Offer an alternative! Everyone would be more than happy.
I love how so many of these apologists talk about stuff like "maintenance costs", as though it's impossible to write code that's clean and works consistently across platforms / browsers. "Oh, no! Who'll think of the profits?!?"
If you had any technical knowledge, you'd know that "maintenance costs" are only a thing when you code shittily or intentionally target specific cases. A well written, cross-browser, cross-platform CAPTCHA shouldn't have so many browser specific edge cases that it needs constant "maintenance".
In other words, imagine you're arguing that a web page with a picture doesn't load on a browser because nobody bothered to test with that browser. Now imagine you're making the case for that browser being so obscure that nobody would expend the time and money. Instead, why aren't you pondering why any web site with a picture wouldn't be general enough to just work? What does that say about your agenda, and about the fact that you want to make excuses for this huge, striving-to-be-a-monopoly, for-profit company?
Well, for starters it's not so absolute:
> it's doing nothing to help the disabled
It's obviously doing something for the disabled. Reserved disabled parking spots and wheelchair-accessible building entrances are requirements of the ADA. It seems reasonable to think it "improves people's lives". A whole bunch of contrary opinions are not necessarily reasons for disagreement as much as they are simply disagreement.
It's like talking about getting murdered - it happens, and there are statistics, but if you're literally expecting everyone to change their whole lives based on the fact that some people are murdered, with zero consideration for the where, why and how, you're doing it wrong.
Bots are a fact of life. Secure your site properly, follow good practices, set up notifications for important things, log stuff, but don't look at the logs unless you have a reason to look at the logs.
Having run web servers forever, this is simply normal. What's not normal is blindly trusting a megacorporation to make my logs quiet. What're they doing? Who are they blocking? What guidelines do they use? Nobody, except them, knows.
It's why I self-host email. Sure, you might feel safe because most people use Gmail or Outlook, and therefore if there are problems, you can point the finger at them, but what if you want to discuss spam? Or have technical discussions about Trojans and viruses? Or you need to be 100% absolutely certain that email related to specific events is delivered, with no exceptions? You can't do that with Gmail / Outlook, because they have filters that you can't see and you can't control.
Sure. If there were another place to buy a better door. But if that door manufacturer is the only one that makes doors, and the door installer and door technicians all tell you that they can't or won't make another door for you, then you just deal. Maybe crank up the prices a bit to try to mitigate your 10% shortfalls.
The place where a business looks at that problem and sees money being left on the table that it can't live without and that it has no other way of making up for... that is a very narrow stretch, and only very marginal businesses live there.
User Agents look the way they do because this is a recurring issue.
A browser without network effects gets blocked, its developers look for a way to bypass the blocking, then it becomes mainstream, and now the de-facto UA is longer than before.
*I have not tried downloading Google Chrome or IE or Edge (if that still exists) for Linux
I'd think that a non-standard browser also strongly suggests that they're a financially-comfortable middle-class individual, and quite possibly a whale with FAANG income.
https://www.justice.gov/archives/jmd/diversity-equity-inclus...
We all need to pay for it, not pass feel-good legislation that shoves it down the throats of sole-proprietor LLCs.
For a local physical store, geolocation is a natural filter for customers, at least until beaming a person from a spaceship down to Earth is invented. For the web, an equally effective solution is very hard to find.
That is a much, much easier bar to reach.
It's like if a restaurant sells cheeseburgers, and I want a hamburger. "How do they figure out ~~what~to~cache~~ the cheese to ketchup ratio without adding cheese?" They can just skip that part. I'm not asking for sushi and supporting that by saying "sushi is possible".
There are several colliding problems there (cheap cell phone plan, 2fa being via text, online purchases requiring 2fa) but it still illustrates to me the pain of doing simple stuff in the modern tech space. I wish the powers that be would work harder on solutions that don't require extra work from the people doing small, normal stuff. It would be better to have a lot more fraud occur but a lot more of the perpetrators pursued and caught. A lot of anti-fraud measures seem to be largely about passing the buck to someone else instead of actually eliminating the humans who are driving the fraud.
There's some truth in this, but I think there is a lot of room for improving things as far as making life much more painful for opportunistic assholes in general.
They do have access to them. The lead developer and project owner has sec bug access in bugzilla.
But vulnerabilities in newer Mozilla have over time become less and less relevant in Pale Moon's codebase, which led to the latter dropping the tracking of how many Mozilla security patches have been applied in the release notes (starting with 33.0.1).
I don't get your reasoning here; you shouldn't expect more than a fraction of the Reddit users to have even installed and tried the browser, let alone to use it regularly.
In the USA, I think it would be worth trying to sue Cloudflare for either "free speech" or "public nuisance" violations. Gonna reach out to the ACLU and EFF in the coming days.
there are many others. just buy a book for industries that value privacy or pay someone.
The EO is using the language of the programs to ensure that they're shut down.
Accessibility has been around forever. One of the major proponents of it was a Republican nominee for President. It has broad bipartisan support.
DEI has been around for 45 minutes and is racism disguised as anti-racism.
And that order is messing with disability programs and other accessibility issues. Directly.
"Google is adding code to Chrome that will send tamper-proof information about your operating system and other software, and share it with websites. Google says this will reduce ad fraud. In practice, it reduces your control over your own computer, and is likely to mean that some websites will block access for everyone who's not using an "approved" operating system and browser."
https://www.eff.org/deeplinks/2023/08/your-computer-should-s...
And use your brain for a hot second, will you? Bad actors don't use a rare user agent; they use the same Chrome user agent that everyone else uses.
Outright fraud could be handled, but everything beyond that really differs per jurisdiction, for obvious reasons. There is nothing clearly good or bad about bots, or e.g. pirates; it depends on the particular cultural perception. And if one nation doesn't think an action is a crime, it doesn't make sense for it to prosecute such actions on behalf of foreign requests.
> Sec. 2. Implementation. (a) The Director of the Office of Management and Budget (OMB), assisted by the Attorney General and the Director of the Office of Personnel Management (OPM), shall coordinate the termination of all discriminatory programs, including illegal DEI and “diversity, equity, inclusion, and accessibility” (DEIA) mandates, policies, programs, preferences, and activities in the Federal Government, under whatever name they appear.
IMO this is a crystal clear example of why you don't lump unrelated programs together. You lump accessibility in with DEI because accessibility is largely favored and DEI is largely not. Their hands are likely tied by the text of this EO because the previous administration didn't keep DEI separate from accessibility. As I stated elsewhere, accessibility is a decades-old cause while DEI has been around barely the past couple of years in government circles and the wider press.
If the previous administration had left them separated and stopped ham-fisting DEI into DEIA, I don't think this EO would have mentioned accessibility at all. But since it does, if you're a federal employee you don't really have a choice, unless you want to try to make the argument that accessibility on its own is not DEIA and therefore can stay - but that's likely a losing battle.
Why not adwall the user instead, showing only ads until they upgrade the device or buy premium?
A sentiment I cannot agree with more.
Cloudflare DOES want to be in the business of saying one browser is more legitimate than another.
It would also be trivial for google and facebook to turn off all ads and logging of your activity. They would need to do strictly less than they do now. It would benefit all users too!
In CF's case they would have to build a completely different infrastructure to detect bots, using different technology to what they have now, including different ways around false positives for legitimate users. While perhaps nothing new in the sense that you claim "this is possible", I see no one else offering this mythical "possible" product.
I would be the first in line for your offering of free cheeseless hamburgers. Where do I sign up?
My argument has never shifted.
But the reason the argument shifted was because someone specifically asked about how you'd do DDoS protection without those downsides.
And you continued asking how it could be done.
> It would also be trivial for google and facebook to turn off all ads and logging of your activity. They would need to do strictly less than they do now. It would benefit all users too!
Isn't cloudflare supposedly not tracking private information in the websites they proxy...? If you think they make money off it, that's pretty bad...
> In CF case they would have to build a completely different infrastructure to detect bots using different technology to what they have now, including different ways around false positives for legitimate users.
I disagree.
> I would be the first in line to your offering of free cheeseless hamburgers. Where do i sign up?
First you need to put me into a situation where my business can compete with cloudflare while doing exactly the same things they do. Then I will be happy to comply with that request.
The hard part of this situation is not the effect of that tiny change on profitability, it's getting into a position where I can make that change.
If I understand correctly, this is why I've said on previous Cloudflare threads that they've managed to design a game they can never win. They project a certain omniscience, but then all this sh*t happens. We need to persuade them to stop playing.
"You should use an up to date major browser. Old Firefox forks are not supported and expected to have problems."
It's all incredibly telling that they've given up trying to be impartial. When "they" start picking browser winners and losers, are OSes next?
In a way Cloudflare missed an opportunity, because a try/catch around the failing bit of JavaScript would have been a perfect fingerprinting signal. Having said that, I don't expect it will take the Pale Moon team very long to patch the problem.
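Concretely, instead of letting one unsupported API kill the whole challenge script, something like this turns the failure itself into a signal (a sketch, not Cloudflare's actual code):

```ts
// Run each probe in its own try/catch so a missing API becomes data
// ("this browser lacks feature X") rather than a dead challenge page.
function probe(check: () => unknown): boolean {
  try {
    check();
    return true;
  } catch {
    return false;
  }
}

const report = {
  serviceWorker: probe(() => {
    if (!("serviceWorker" in navigator)) throw new Error("unsupported");
  }),
  weakRef: probe(() => new WeakRef({})),
  // ...dozens more probes in a real fingerprinter
};
// Send `report` back with the challenge response as one more signal.
```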
But where to go from here? Is there anybody besides the ACLU and EFF with enough resources to mount a "public nuisance" lawsuit? And what would constitute winning? A court-appointed overseer to make sure Cloudflare is regularly educating its staff on the variety of browsers in use today, and providing near 24-hour turnaround times when issues like this occur? It would be a start.
Personally I wonder if this whole style of security is a fool's errand and any blocking should be server-based and look at behavior, not at arbitrary support of this or that feature. I think it would also be helpful if anybody who finds themselves blocked would be given at least a sliver of why they were blocked, so they could try rectifying the problem with their ISP (bad IP), some blocklist, etc.
They are at the very least tracking the users and using that tracking as part of the heuristics they use in their product.
Whether they sell the data for marketing, I don't know; hopefully not, but conceivably, yes.
To which:
> I disagree.
Yes, we’ve established that you disagree and explicitly claim “it’s possible to offer ddos protection without mitm”
and now further that "dropping the extra feature of caching" would not adversely affect their technology or their business.
Great - claims, though, that are entirely unsupported, and in the latter case obviously false if you know anything about how it works.
In particular, they would need to sponsor the free accounts with much poorer economies of scale due to not being able to cache anything, and it would not help at all with a "legitimate DDoS" such as being on the front page here.
I work on a "pretty large" site (was on the alexa top 10k sites, back when that was a thing), and we see about 1500 requests per second. That's well over 10k concurrent users.
Adding 10k requests per second would almost certainly require a human to respond in some fashion.
Each IP making one request per second is low enough that if we banned IPs which exceeded it, we'd be blocking home users who opened a couple of tabs at once. However, since e.g. universities / hospitals / big corporations typically use a single egress IP for an entire facility, we actually need the thresholds to be more like 100 requests per second to avoid blocking real users.
10k IP addresses making 100 requests per second (1 million req/s) would overwhelm all but the highest-scale systems.
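To sketch that bind with this comment's own numbers - a naive fixed-window counter; real limiters are fancier but face the same arithmetic:

```ts
const WINDOW_MS = 1_000;
const MAX_PER_WINDOW = 100; // high enough not to ban a university egress IP

const counters = new Map<string, { start: number; count: number }>();

function allow(ip: string, now = Date.now()): boolean {
  const c = counters.get(ip);
  if (!c || now - c.start >= WINDOW_MS) {
    counters.set(ip, { start: now, count: 1 });
    return true;
  }
  c.count += 1;
  return c.count <= MAX_PER_WINDOW;
}
// The bind: at this threshold, a 10k-node botnet can still push
// 1M req/s through before any single IP looks abnormal.
```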
They can do that without seeing the proxied contents. So your analogy to asking facebook or google to stop ads and tracking is completely broken.
> and now further that “dropping the extra feature of caching” would not adversely affect their technology or their business”
Yes. (Well, it was stated much earlier, but I guess you didn't notice until now?) You're the one saying it would be a problem; do you have anything to back that up?
> in the latter case obviously false if you know anything about how it works.
Caching costs a bunch of resources and still uses lots of bandwidth, what's so obvious about it? And cloudflare users can already cache-bust at will, so it's not exactly something they're worried about.
https://developers.cloudflare.com/cache/how-to/cache-rules/s...
> would not help at all with a “legitimate ddos” such as being on the front page here
Which is not the scenario people were worrying about.
And an average web server can handle that.
Combat fraud first so you can start to really identify the other more troublesome troublemakers.
Bots? Declare the owner. Lie about the owner? Fraud.
Crawling? Bots.
Intellectual property? That's a whole other industry.
Trump signed the order like that. If he wanted to change the order, he would have written it differently.
In any case, President Elon is pissed at accessibility folks harassing him over the Twitter firings (including the firing of Twitter's accessibility teams). This is stuff well within their politics and is 100% what they want.
For a random site from the internet, sure, because a random blog is probably too small to be noticed.
Forums, even relatively niche ones, unfortunately do suffer DDoS from their disgruntled users. (Or competitors of the same fandom. Or from the disgruntled part of a rivaling fandom.)
> It's like talking about getting murdered - it happens, and there are statistics, but if you're literally expecting everyone to change their whole lives based on the fact that some people are murdered, with zero consideration for the where, why and how, you're doing it wrong.
All analogies fail somewhere, but this is probably one of those which easily falls apart. Injuries are probably better. In a random population, there are a relatively small proportion of injuries, but some jobs (like construction) tend to have a significantly higher number of injuries compared to a mean person, in the same manner that a DDoS on a random website is unlikely but certain types of websites are DDoS magnets.
Why does it need web workers, when it worked fine without them on Waterfox Classic, a Firefox 56 fork that hasn't been updated in years?
> President Elon
Oh I'm sorry I was under the mistaken impression you were trying to have a good faith discussion about the merits of what's happening.
The federal government comprises millions of unelected bureaucrats (I don't mean that pejoratively; that's literally what they are). There is nothing particularly earth-shattering about what Elon is doing. He's given a task by the president and he's carrying it out, which is what every single unelected executive-branch employee does at one level or another.
This is the internet and everybody is a field expert the moment they want to win an argument, best of luck with that.