[1] - https://github.com/arkenfox/user.js
[2] - https://github.com/yokoffing/Betterfox
[3a] - https://addons.mozilla.org/en-US/firefox/addon/css-exfil-pro...
[3b] - https://www.mike-gualtieri.com/css-exfil-vulnerability-teste...
The current state of browser fingerprinting is off the rails: sites deny service if they don't get those fingerprints, and to a lesser degree the browser's security and privacy protections have been gradually degraded.
Stock Firefox will not be able to provide sufficient guarantees. Some protections require patches compiled back in, because the corresponding about:config options have been removed.
I highly suggest you review Arkenfox's work; most of the hardening features he recommends provide a better defense than nothing. He also contributes regularly to the Mullvad Browser, which implements most of his hardening and then some; it differs from the Tor Browser in some ways but shares many of the same protections.
The TL;DR of the problem scope is that there are artifacts that must be randomized within a certain range. There are also artifacts that must be non-distinct so as to not provide entropy for identification (system fonts and the like that are shared among many people in a cohort).
JS and several other components, if active, will negate a lot of the defenses that have been developed to date.
Additionally, it seems that in some regional localities eclipse attacks may be happening (multi-path transparent MITM), either by terminating encryption early or through RAPTOR-style routing attacks.
At a bare minimum, there seem to be some bad actors that have mixed themselves into the root PKI pool. I've seen validly issued Google Trust certs floating around that were not authorized by the owner of the SAN being visited; it was transparent and targeted at that blog, but it has also happened with vendors (providing VoIP-related telco services).
It seems some ISPs may be doing this to collect sensitive data for surveillance capitalism or other unknown malign purposes. In either case, TLS can't be trusted.
I thought if you disabled JS, then that would greatly narrow down which user on the internet you are, since very few people (in comparison to everyone else in the world) actually do this.
> not authorized by the owner of the SAN being visited
Source?
> TLS can't be trusted
Do you have more info on this? Why are more people not worried about it?
My understanding is that Privacy Badger no longer learns by default. I never wanted that anyway; I just want it to block known things, like search-engine click hijacking.
I’m not sure what to do about the user agent header. Changing or simplifying it tends to break sites. Also, I’d like to promote Linux there, but that’s at odds with privacy.
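If you do decide to pin the user agent, the usual approach is a user.js override rather than an extension. The pref name below is real; the string itself is just an example of a common-denominator value, not a recommendation:

```javascript
// user.js -- freeze the UA to a widely shared value (example string, pick your own trade-off)
user_pref("general.useragent.override", "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0");
```

Note that privacy.resistFingerprinting already normalizes the UA (and reports Windows regardless of OS), which is exactly why advertising Linux there is at odds with the privacy goal.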
Its constant drain even when not 'in use' seems to imply it's classifying tabs as they change page (though it might be telemetry or unannounced testing). If so, it's an example of premature optimisation gone very wrong.
It's a shame, because it overshadows the fact that naming tab groups is a perfect use case for an LLM, alongside keyboard suggestions and reverse dictionaries [1]. I'm ardently distrustful of LLMs for many, many purposes, but for the tiny parameter count and token usage needed, it's hard not to like; which makes it a shame that it's (somehow) such a drain.
[0] https://github.com/mozilla-firefox/firefox/blob/7b42e629fdef... exports a SmartTabGroupingManager, though how or why that is used without being asked eludes me
[1] https://www.onelook.com/thesaurus/ Can be helpful in a pinch when a word's on the tip of your tongue, though its synonyms aren't always perfect.
Orion is WebKit-based, can install extensions from Chrome OR Firefox, is privacy-respecting, and has a whole lotta niceties for per-website tweaks and other customizations.
Fun suggestion to switch to a browser backed by a company that has pulled a lot of shady stuff related to ads and tracking. A company where privacy is more marketing than a core value.
Edit: Since people are going to ask anyway, here is an article that covers a lot of the shady stuff brave pulled https://thelibre.news/no-really-dont-use-brave/
If you are one of those folks who don't care about the political arguments, feel free to skip paragraphs one and two. Paragraphs three through ten cover actual shady stuff done by Brave the company itself.
There is one more thing I can add to the list, though it wasn't as widely publicized. At some point the team behind Brave decided to implement browser extension support from scratch and only support specific extensions. Which sounds okay in theory, until you realize how they did it: without involving the extension creator, they would fork a version of the extension and bake it into Brave. They did so without informing the extension creator, while users would still go to the extension creator for support, who couldn't fix a thing.
Every time one of these things comes up, the Brave team either acts irked (but changes it anyway) or goes "oh, yeah, we'll remove it in the future". This to me indicates a company culture where there is no thinking ahead about the impact of features, or where they simply don't care as long as they aren't called out on it.
This consistent pattern over a period of years has, to me anyway, shown that issues such as privacy or even being user centered are not a core part of their thinking but merely a marketing gimmick.
And to get ahead of some other things I have heard people say about this: the fact that Mozilla sucks doesn't mean the alternatives can't be worse.
It is a fundamentally cursed problem that has a lot of nuance.
You have buckets of people, and the entropy, or difference, between your collected artifacts and everyone else's must be sufficient to uniquely identify a single person; that is the point of fingerprinting. Your natural defense is not sticking out of that group/crowd uniquely, so that others in the group carry the same range of fingerprints.
At the same time, if you homogenize the artifacts down to a single fingerprint, the sites will simply deny access.
Disabling JS altogether doesn't identify you, aside from the fact that you are part of the overall group that has it disabled; the trade-off is that all the entropy JS would normally collect cannot be collected. So while they cannot identify you uniquely, they can identify the group and deny service to it, and that is the fundamental weakness of binary switches. It's a constant cat-and-mouse game.
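The bucket arithmetic above can be made concrete: an attribute shared by a fraction p of users contributes -log2(p) bits of identifying information, bits add up (assuming the attributes are independent), and the anonymity set shrinks accordingly. A toy sketch with made-up frequencies, not measured data:

```javascript
// Bits of identifying information from an attribute shared by fraction p of users.
const surprisalBits = (p) => -Math.log2(p);

// Hypothetical attribute frequencies (illustrative only):
const attributes = {
  "timezone UTC-5":       0.10,
  "uncommon system font": 0.01,
  "JS disabled":          0.005,  // the 'binary switch' from above
};

// Independence assumed: bits simply add.
const totalBits = Object.values(attributes).map(surprisalBits).reduce((a, b) => a + b, 0);
const population = 5_000_000_000;
const anonymitySet = population / 2 ** totalBits;  // expected users sharing all three
console.log(`${totalBits.toFixed(1)} bits -> ~${Math.round(anonymitySet)} users share this fingerprint`);
```

Even the "rare" JS-off switch here only narrows you to a group; it's the accumulation across many attributes that eventually makes the set size hit one.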
> not authorized by the owner of the SAN being visited
> Source?
Firsthand experience with a large VoIP provider where communications would fail intermittently, but in targeted ways that avoided common test failures. Call tests would intermittently but routinely fail in the silent-fail domain of interrupt-driven calling (where you wouldn't know a call was inbound), and the failures would occur only in that domain. The issue was narrowed down to a certificate mismatch through a lengthy support correspondence: the hosted certificate and what was being served at the edge were different. The artifacts were compared manually through that correspondence.
The certificate was revoked within 48h once the vendor reached out to Google, but we've seen it happen twice now. The standards in general use have no means aside from revocation to handle bad acting at the root-PKI level. Chain-of-trust issues like this have been known about in the respective fields for over two decades.
> Do you have any more info on this? Why are more people not worried about it?
On the specifics? The Princeton RAPTOR paper (2015) covers the details. Early termination of encryption and traffic analysis are pretty bad.
Why aren't more people worried? I suppose it's because most of the security industry (not all) has accepted that device security is porous, and there isn't really much you can do to hold the manufacturer responsible or to force changes. Surveillance capitalism is also incentivized, through profit motive, to impose a state of complete and total dependency/compromise.
The state of security today, with almost routine data breaches every quarter, is a direct consequence of the lack of liability, accountability, and regulation; and honestly, the media at large have stopped listening to many of the experts. They don't want to know how bad bad is.
The breadth and depth of scale is enough to drive one a bit crazy when looking at the unvarnished reality; it's such a complete departure from what we're told that it provokes disbelief. People are largely powerless to mitigate the issues, as most of the market has been silently nationalized in one form or another. It's no longer about the features people need, but about coercing a market where the only choice is what gets shoveled.
Do you suppose the average middle-class worker has the headspace to worry about their county tracking their minute movements through suites of radio sensors (TPMS/OBD-II), or someone hacking into their car through the telematics unit while they're driving and disabling the braking, or inducing race conditions in safety-critical systems?
While we may not care domestically about many of these things when we are told about them, given our stance on free speech, a critic of China might care; and no one is stopping it, because the security deficits are almost equally imposed through inaction as through action.
Many of these uses are also not commonly disclosed, and manipulated rhetoric is jamming the communication channels.
Cable modem security, for instance, requires mandated backward compatibility with a 48-bit RSA key (see the CypherCon talk below), and while there are elevated security modes, the modem boots in the weak mode and pulls its config down remotely, making it vulnerable to eclipse attacks.
Money-printing is largely what drives these incentives towards a dysfunctional market.
https://cyphercon.com/portfolio/exposing-the-threat-uncoveri...
UBO is enough; https://github.com/arkenfox/user.js/wiki/4.1-Extensions#-don...
If they are doing meaningful review, I question how much they actually get done in life.
Did you confirm with the owner that they were unauthorized?
And can you point to the certificates in the Certificate Transparency logs?
Some quick digging in the source suggests that it's simply not enabled by default in ESR 128. I don't know if that's because it's only enabled by default in a later release, or because it's disabled in all ESR releases; I suspect the former. Compare [1] and [2]:
-pref("browser.ml.enable", false); # in upstream/128.14.0esr
+pref("browser.ml.enable", true); # in upstream/142.0.1
The other pref, browser.ml.chat.enable[d], is not mentioned in that file at all. (edit: according to [3a] and [3b], it's browser.ml.enable and browser.ml.chat.enabled... yay for consistency, I guess)
[0] https://sources.debian.org/src/firefox-esr/128.14.0esr-1~deb...
[1] https://salsa.debian.org/mozilla-team/firefox/-/blame/upstre...
[2] https://salsa.debian.org/mozilla-team/firefox/-/blame/upstre...
[3a] https://salsa.debian.org/mozilla-team/firefox/-/blame/esr128...
[3b] https://salsa.debian.org/mozilla-team/firefox/-/blame/esr128...
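For anyone who wants both switches off regardless of release defaults, the two prefs named in [3a]/[3b] can be pinned in user.js. A minimal fragment; the inline comments are my gloss, not official descriptions:

```javascript
// user.js -- disable the ML features regardless of the release default
user_pref("browser.ml.enable", false);        // the inference engine itself
user_pref("browser.ml.chat.enabled", false);  // the AI chatbot sidebar
```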
I propose the below as various factors that can be larger contributors:
Slower fan speed because of lower ambient temperature.
Different dark/light ratio and/or adaptive screen brightness.
Wifi spectrum congestion, variable power levels to maintain proper SNR.
Wifi/ethernet- broadcast packets.
The list goes on. Most of these are below a watt, but demonstrate the point that you've got a lot more variables than just one setting in a browser.
One profile for banks, a different profile for Amazon, a third profile for Google sites, a fourth for news sites I log into, a fifth for news sites I don't log into, a sixth that automatically forgets everything on exit for sites that UBO breaks.
Then delete all data on each profile periodically, weekly for news sites, monthly for Amazon and banking sites.
It's a giant pain in the ass juggling all these profiles. Seems like there should be a browser that automatically and transparently isolates every site in its own profile.
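The juggling can at least be scripted. The flags below (-CreateProfile, -P, -no-remote) are real Firefox CLI options, though the profile names are just my examples; this sketch prints the commands rather than running them:

```shell
#!/bin/sh
# One profile per trust domain (names are illustrative, not a recommendation).
PROFILES="banking amazon google news-auth news-anon throwaway"

# One-time setup: print the profile-creation commands.
for name in $PROFILES; do
  echo "firefox -CreateProfile $name"
done

# Each profile then runs as its own isolated instance, e.g.:
echo "firefox -P banking -no-remote"
```

This still doesn't automate the periodic data wipes, and it's nowhere near the transparent per-site isolation you're describing.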
From what I've seen, people using the popular customized Firefox variants, like Floorp and LibreWolf, were surprised by this and not fond of the change.
A better test would be CreepJS in my opinion: https://abrahamjuliot.github.io/creepjs/
I'm not aware of any FOSS browser setup that can actually result in a random FP ID shown in creepjs on every page load (please prove me wrong).
Chrome didn't have anything other than a global JS on/off at first, so they clearly added this feature later.
No matter how effective this list is, the settings will either revert, change, or be silently undone.
New settings will alter the efficacy of the old ones.
Existing settings will disappear.
The behavior you hoped to configure will change to its opposite.
Remember: there was one morning when we all woke up and saw every DNS query sent to Cloudflare DoH by default, with no opt-in.
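If you want that decision pinned on your side, the TRR prefs control it. A minimal user.js fragment; mode 5 means "DoH off by explicit user choice", which is distinct from the default 0 ("no decision yet", eligible for rollouts):

```javascript
// user.js -- opt out of DNS-over-HTTPS explicitly, not just by default
user_pref("network.trr.mode", 5);  // 5 = off by user choice; 0 = undecided/default
```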
PaleMoon ( http://www.palemoon.org/ ) is a hard fork of Firefox, with a mix of old tech (XUL) and new tech (from the current Gecko codebase), that is another full-featured zero-telemetry browser that doesn't make any automated connections. But here too, the full feature set of uBlock Origin isn't supported, as it is based on the abandoned uBlock Origin (legacy) codebase (and though the legacy codebase has been updated by some PaleMoon developers, the original developers of uBlock Origin do not wish to support PaleMoon, as it doesn't support WebExtensions).
Then there's the Tor Browser ( https://www.torproject.org/ ) - it is a soft fork of Firefox that supports the Tor network and has been configured by default to be "privacy hardened" - it has none of the crap that Mozilla bundles into Firefox, like Pocket, AI, Ads etc. The Tor software bundled in it can easily be deleted, to use it as a privacy-hardened Firefox. However, there are two issues with it - it does make unauthorised and unwanted automated connections (to SecureDrop), and you can no longer remove the NoScript browser extension that is bundled in it (you could in previous versions). When a browser maker forcefully bundles something in (however useful it may be) and does not allow you to modify it, that's well-founded grounds to be suspicious of it. (Note: I did finally figure out that one can stop the automated phoning to SecureDrop by disabling it in about:rulesets.)
As the Tor Browser laid a good foundation for a privacy-hardened Firefox, there are many other browsers that are forks of it - the Mullvad Browser ( https://mullvad.net/en/browser ) is a popular one, and Mullvad bundles its VPN service in it instead of the Tor network. Last I checked, it made some automated connections on startup, so I didn't bother to explore it further.
When trying to be similar to everyone else, even small changes to the browser, like changing the window size, can make you easily identifiable. Randomizing, by contrast, allows you to modify your browser without standing out. None of the fingerprinting protections matter if you use the same browser and session to log in to sites.
I use multiple browsers. One is for login to sites and tor-browser is for most of my browsing.
This is easily the best fingerprinting extension that I have found so far: https://jshelter.org/
Some may argue that the data included is a bit much for a "daily usage ping", an assertion that I won't dispute; but I will say that I appreciate the fact that Firefox even provides this level of transparency in the first place:
https://dictionary.telemetry.mozilla.org/apps/firefox_deskto...
In my tests only Tor was able to prevent that, but using Tor will give you bad rankings on payment sites like PayPal; you may even get banned there.
I learned this from here:
https://news.ycombinator.com/item?id=35243355
That site is now black, surely a coincidence. Here's the archive.org link:
https://web.archive.org/web/20250801173508/https://www.bites...
Have a local copy.
No, this will not effectively help reduce your browser's fingerprint.
A LOT more tracking services are integrated into Firefox in various places (the New Tab page, Sync, Pocket, Shavar, Google Safe Browsing, OCSP, etc.).
I wrote a more detailed article about this [1], and "as good as possible" was the best result I could get.
But yeah, please, please start using a host firewall that can block on a per-domain, per-port, and per-process basis (like Little Snitch or OpenSnitch) to validate your assumptions. UIs will always lie to you, including Firefox's.
[1] https://cookie.engineer/weblog/articles/firefox-privacy-guid...
Thought experiment: in 100 years or even ten, can you imagine that there will not be tiny little camera robots that can get into the home of every person alive? Wouldn't every single living person be prone to having nude and unflattering, private moments leaked all over the internet?
Socially, if privacy is a construct, then so is the fallout we expect others and ourselves to feel when privacy is violated. To some extent, not all, this is self-inflicted Victorian thinking. To the extent that it's true, part of the answer is, in the words of the brave (lol) Michael Cohen, "So what?" Really, so what? I hope we can get to that kind of reaction to adults having their privacy upended because it just takes so much of the bite out of the problem, the shame that relatively innocent people would experience for something completely out of their control.
As far as the getting it back under control thing, we may also be coming to a point that more technologies are so dangerous or impactful that there becomes a need for more strict control so that powerful tech like miniaturization produces paper trails and the use of such technology comes with an implicit requirement for openness. I don't really care that people can use miniaturization, but I care if they can anonymize it to the extent that we create a lawless society with no remaining means of accountability.
What *will* Russia and North Korea do when it becomes plausible to unleash little robot assassins, either in small numbers to target individuals or in mass numbers to carry out what is essentially nuclear-scale death without nuclear-scale fallout and destruction? It is plausible that this is a new facet of WMDs and MAD-based deterrence.
Privacy, robots, and the inevitable slide into world war 3.
- hooks between network steps
- hooks between steps while rendering/interacting with a website
Things that I want to do but can't:
- catch a request and modify it, e.g. when a webpage tells my browser to visit ajax.googleapis.com/jquery.js then my browser SHOULD NOT DO IT. Seriously, just don't start running shit on my computer when I click something. No one wants that, apart from Google. Not the users. I should be able to modify that request, and serve jquery from somewhere else.
- stop the browser's javascript execution
- run my own javascript (these two are currently unavailable together, if you don't allow javascript on a webpage, then you can't run your own) (or modify HTML/DOM in some other language)
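For what it's worth, the request-redirect part is doable today with the MV2 webRequest API (the same mechanism uMatrix and uBlock Origin build on). A sketch, where the local mirror URL is my own made-up example; the mapping is kept as a pure function so it can be sanity-checked outside a browser:

```javascript
// Pure mapping: decide where a request should really go (mirror URL is hypothetical).
function redirectFor(url) {
  const mirrors = {
    "https://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js":
      "https://localhost:8443/vendored/jquery-3.7.1.min.js",
  };
  return mirrors[url] || null;  // null = let the request through untouched
}

// Browser-only wiring (MV2 background script; manifest.json needs "webRequest",
// "webRequestBlocking", and host permissions). Guarded so the file also runs in Node.
if (typeof browser !== "undefined" && browser.webRequest) {
  browser.webRequest.onBeforeRequest.addListener(
    (details) => {
      const target = redirectFor(details.url);
      return target ? { redirectUrl: target } : {};
    },
    { urls: ["<all_urls>"] },
    ["blocking"]
  );
}
```

The blocking webRequest listener is exactly what MV3 removes in Chrome, which is why this only keeps working in browsers that retain MV2 support.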
I don't think Firefox is worth supporting. I believe it is a Trojan Horse of Google (or at least a Useful Idiot), and its existence is the main reason we have exactly 0 browsers (open source or proprietary) right now. It should die, so something else might flourish. I still use it, honestly, but I'll need to move on at some point - not just because it's MV2-only, but also because I've found a way in which uMatrix can be bypassed if a website were to specifically target it. (It doesn't affect uBlock Origin, though I haven't tested the Lite MV3 version.)