
263 points amarder | 1 comment
amarder ◴[] No.45073992[source]
This checklist is a work in progress, would love to hear your feedback.
replies(8): >>45074343 #>>45076263 #>>45076344 #>>45077309 #>>45078077 #>>45079618 #>>45081867 #>>45083651 #
trod1234 ◴[] No.45076263[source]
This is quite a rudimentary checklist, and it won't provide much in terms of privacy protections, but it will break a number of sites.

The current state of browser fingerprinting is off the rails: sites will deny service outright if they don't get those fingerprints, and the browser itself has, to a lesser degree, had its security and privacy protections gradually degraded.

Stock Firefox cannot provide sufficient guarantees on its own. Some protections require patches to be compiled back in, because the corresponding about:config options have been removed.

I highly suggest you review Arkenfox's work; most of the hardening features he recommends provide a better defense than nothing. He also regularly contributes to the Mullvad Browser, which implements most of his hardening and then some, sharing many of the same protections as the Tor Browser while differing from it in some respects.
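
For a sense of what that hardening looks like in practice, here are a few user.js-style prefs of the kind such projects touch. This is a minimal sketch using real about:config settings, not a substitute for Arkenfox's maintained list, which covers far more and tracks Firefox releases:

    // Illustrative hardening prefs only -- see the Arkenfox user.js for the real, maintained set.
    user_pref("privacy.resistFingerprinting", true);   // Tor-uplift protections (UTC timezone, standardized window size, ...)
    user_pref("webgl.disabled", true);                  // WebGL readouts expose GPU/driver entropy
    user_pref("media.peerconnection.enabled", false);   // WebRTC can leak real/local IP addresses

Toggles like these are exactly the binary switches discussed further down: they remove entropy sources, but they also shrink the crowd you blend into.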

The TL;DR of the problem space: some artifacts must be randomized within a plausible range (canvas/WebGL readouts, for example), while others must be made non-distinct so they don't provide entropy for identification (system fonts and similar values that should be shared by many people in a cohort).

JavaScript and several other components, if left active, will negate a lot of the defenses that have been developed to date.

Additionally, in some regional localities it appears that Eclipse attacks may be happening (multi-path transparent MITM), either by terminating encryption early or through Raptor-style routing attacks.

At a bare minimum, there seem to be some bad actors that have mixed themselves into the root PKI pool. I've seen validly issued Google Trust certificates floating around that were not authorized by the owner of the SAN being visited; it was transparent and targeted at that blog, but it has also happened with vendors (ones providing VoIP-related telco services).

It seems some ISPs may be doing this to collect sensitive data for surveillance capitalism or other unknown malign purposes. In either case, TLS can't be trusted.

replies(2): >>45076429 #>>45078020 #
ranger_danger ◴[] No.45076429[source]
> JavaScript and several other components, if left active, will negate a lot of the defenses that have been developed to date.

I thought if you disabled JS, then that would greatly narrow down which user on the internet you are, since very few people (in comparison to everyone else in the world) actually do this.

> not authorized by the owner of the SAN being visited

Source?

> TLS can't be trusted

Do you have more info on this? Why are more people not worried about it?

replies(1): >>45077575 #
trod1234 ◴[] No.45077575[source]
> I thought if you disabled JS, then that would greatly narrow down which user on the internet you are...

It is a fundamentally cursed problem that has a lot of nuance.

You have buckets of people, and the entropy (the difference between your collected artifacts and everyone else's) must be sufficient to uniquely identify a single person; that is the point of fingerprinting. Your natural defense is not sticking out of the group/crowd, so that others in the group carry the same range of fingerprints.

At the same time, if you homogenize the artifacts down to a single shared fingerprint, sites will simply deny access.

Disabling JS altogether doesn't identify you beyond placing you in the group that has it disabled; the trade-off is that none of the entropy JS would normally collect can be collected. So while they cannot identify you uniquely, they can act on the group by simply denying that group service, and that is the fundamental weakness of binary switches. It's a constant cat and mouse.
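
To make the bucket arithmetic concrete, here is a rough sketch; the fractions are purely illustrative assumptions, not measured data. A trait shared by a fraction p of the population reveals about log2(1/p) bits of identifying information, and roughly 33 bits are enough to single out one person among ~8 billion:

    # Surprisal math behind fingerprint "buckets" -- the fractions below are assumptions for illustration.
    import math

    def surprisal_bits(fraction_sharing_trait: float) -> float:
        """Bits of identifying information revealed by a trait shared by
        the given fraction of the population."""
        return math.log2(1 / fraction_sharing_trait)

    traits = {
        "JS disabled (assume ~1% of users)": 0.01,
        "common screen resolution (assume ~10%)": 0.10,
        "unusual system font list (assume ~0.01%)": 0.0001,
    }

    total = 0.0
    for name, fraction in traits.items():
        bits = surprisal_bits(fraction)
        total += bits
        print(f"{name}: {bits:.1f} bits")

    # ~33 bits is enough to uniquely identify one person among ~8 billion.
    print(f"combined (if independent): {total:.1f} of ~{math.log2(8e9):.0f} bits")

Hiding in a cohort is about keeping each of those fractions large; the weakness of a binary switch like disabling JS is that the site doesn't need to identify you individually, it only needs to act on the whole bucket.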

> not authorized by the owner of the SAN being visited
> Source?

Firsthand experience with a large VoIP provider, where communications would fail intermittently but in targeted ways that avoided the common test cases. Call tests would intermittently but routinely fail in the silent-fail domain of interrupt-driven calling (where you wouldn't know a call was inbound), and the failures occurred only in that domain. Through a lengthy support correspondence the issue was narrowed down to a certificate mismatch: the certificate hosted by the provider and the one being served at the edge were different. The artifacts were compared manually over that correspondence.

The certificate was revoked within 48 hours once the vendor reached out to Google, but we've seen it happen twice now. The standards in general use have no mechanism beyond revocation to handle bad acting at the root-PKI level. Chain-of-trust issues like this have been known in the respective fields for over two decades.
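
For what it's worth, a mismatch like that can be detected without trusting the chain at all, by comparing the certificate actually served at the edge against a fingerprint obtained out-of-band from the operator. A minimal sketch; the host and the pinned fingerprint are placeholders, not values from the incident above:

    # Compare the leaf certificate served on the wire against an out-of-band SHA-256 pin.
    import hashlib
    import socket
    import ssl

    HOST = "voip.example.com"   # hypothetical service, not the provider in question
    PORT = 443
    EXPECTED_SHA256 = "<sha256 fingerprint confirmed with the operator out-of-band>"

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # leaf cert as served at the edge

    observed = hashlib.sha256(der_cert).hexdigest()
    print("match" if observed == EXPECTED_SHA256 else f"MISMATCH: edge served {observed}")

The point is that a validly mis-issued certificate sails through normal chain validation, so the comparison has to be against something learned out-of-band rather than against the chain itself.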

> Do you have any more info on this? Why are more people not worried about it?

On the specifics? The Princeton Raptor attack paper (2015) covers the details. Early termination of encryption and traffic analysis are both pretty bad.

Why more people aren't worried? I suppose it's because most of the security industry (not all) has accepted that device security is porous, and there isn't much you can do to hold manufacturers responsible or force changes. Surveillance capitalism is also incentivized, through profit motive, to impose a state of complete and total dependency/compromise.

The state of security today, with near-routine data breaches every quarter, is a direct consequence of the lack of liability, accountability, and regulation, and honestly the media at large have stopped listening to many of the experts. They don't want to know how bad, bad is.

The breadth and depth of scale is enough to drive one a bit crazy when looking at the unvarnished reality; it is such a complete departure from what we are told that it invites disbelief. People are largely powerless to mitigate the issues, as most of the market has been silently nationalized in one form or another. It's no longer about the features people need, but about coercing a market where the only choice is whatever gets shoveled.

Do you suppose the average middle-class worker has the headspace to worry about their county tracking their minute movements through suites of radio sensors (TPMS/OBD-II), or someone hacking into their car through the telematics unit while they're driving and disabling the brakes, or inducing race conditions in safety-critical systems?

While we may not care domestically about many of these things when we're told about them, given our stance on free speech, if you're a critic of China, they might care, and no one is stopping them, because the security deficits are imposed through inaction almost as much as through action.

Many of these uses are also not commonly disclosed, and manipulated rhetoric is jamming the communication channels.

Cable modem security, for instance, mandates backward compatibility with a 48-bit RSA key (see the Cyphercon talk below), and while there are elevated security modes, the modem boots in that weak mode and pulls its config down remotely, making it vulnerable to an Eclipse attack.

Money-printing is largely what drives these incentives towards a dysfunctional market.

https://cyphercon.com/portfolio/exposing-the-threat-uncoveri...

https://www.youtube.com/watch?v=_hk2DsCWGXs