Can't believe it's been ten years.
TLS is fairly computationally intensive - sure, not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support accelerating the crypto, so it's hilariously slow.
It also encourages memory-holing old websites which aren't maintained - priceless knowledge is often lost when unmaintained websites go down. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore... if my drives fail, that knowledge will be lost forever.
It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
It also discourages naive experimentation - sure, if you know how, you can MitM your own connection, but for the not-very-technical but curious user, that's probably an insurmountable roadblock.
The fundamental problem is a question of trust. There are three ways:
* Well known validation authority (the public TLS model)
* TOFU (the default SSH model)
* Pre-distribute your public keys (the self-signed certificate model)
Are there any alternatives?
If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
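To make that concrete, here is roughly what "become your own root of trust" looks like on the client side - a minimal Python sketch, assuming you have already generated the server's self-signed certificate and handed it to the client out of band (the file name and host below are made up):

```python
# Minimal sketch of the "pre-distribute your public keys" model: trust only a
# single self-signed certificate that was distributed to the client out of band.
# "my_site.pem" and the host name are placeholders for this example.
import socket
import ssl

HOST = "fluffykittens.example"
PORT = 443

# Use the pre-distributed cert as the *only* trust root, instead of the
# system's public CA bundle.
context = ssl.create_default_context(cafile="my_site.pem")

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # The handshake only succeeds if the server presents a certificate
        # chaining to that file, so no third-party CA is involved.
        print(tls.version(), tls.getpeercert()["subject"])
```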
I don’t see how it has many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.
I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
The obvious alternative would be a model where domain-validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership, as that is how they are (mostly) used.
There is a risk that Let's Encrypt and other "good enough" solutions take us further from that. There are also many actors with an economic interest in the established model, both in the PKI business and among consultants for whom law enforcement agencies are important customers.
If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However, I don’t know enough about it to say why it is not more widely used.
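For reference, the DANE mechanism is essentially just a TLSA record published in DNS under the service's name, which a client looks up (and, with DNSSEC, can validate) before connecting. A rough sketch of the lookup using the third-party dnspython library - the domain is a placeholder, and a real client would also have to verify the DNSSEC chain:

```python
# Sketch of a DANE lookup: fetch the TLSA record that pins the certificate
# (or its public key) for HTTPS on port 443 of a domain.
# Requires the third-party "dnspython" package; the domain is a placeholder,
# and a real DANE client would also validate the DNSSEC signatures.
import dns.resolver

domain = "example.com"
qname = f"_443._tcp.{domain}"

try:
    for rdata in dns.resolver.resolve(qname, "TLSA"):
        # Prints certificate usage, selector, matching type, and the pinned digest.
        print(rdata)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(f"No TLSA record published for {qname}")
```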
The biggest problem that Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED - it wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.
It was also happening, fully automated, as shitty ISPs injected their own ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
There is no "balance" once you understand that bad people will swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money from it.
You also have to understand that you don't have any control over which networks your "fluffy kitten page" data passes through - malicious groups have pulled off BGP hijacking multiple times.
So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.
Utilizing DNS, whois, or a purpose-built protocol directly would alleviate the problem altogether, but should probably be done by way of an updated TLS specification.
Any realistic migration should probably exist alongside the public CA model for a very long time.
Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.
There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
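For anyone who hasn't run into it, HSTS is nothing more than a response header the site sends, which the browser then remembers for the stated lifetime. A minimal stdlib-Python sketch with an arbitrary one-year max-age; in a real deployment this handler would of course sit behind TLS, since browsers only honor the header over HTTPS:

```python
# Minimal sketch of HSTS: a single response header telling the browser to
# refuse plain HTTP for this host for the next year. The max-age value and
# the "includeSubDomains" opt-in are arbitrary example choices; browsers
# ignore the header when it arrives over plain HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer

class KittensHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"fluffy kittens\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), KittensHandler).serve_forever()
```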
In my ideal world:
a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change and the transport security doesn't change the resource you are addressing. A separate scheme doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).
b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE.
c) This mechanism would also securely inform browsers about what protocols and ciphers the server supports, to allow for backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients (see the sketch after this list).
d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
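The closest existing thing to (c) is probably the DNS HTTPS/SVCB record, which can already advertise supported ALPN protocols for a name. A dnspython sketch - I'm using cloudflare.com only because it published such a record at the time of writing, and without DNSSEC validation the answer is merely a hint:

```python
# Sketch of (c): query the HTTPS (SVCB) record for a name to learn, via DNS,
# which application protocols the server advertises (e.g. alpn="h3,h2").
# Requires a reasonably recent third-party "dnspython" package; without
# DNSSEC validation the answer is only a hint, not a secure statement.
import dns.resolver

try:
    for rdata in dns.resolver.resolve("cloudflare.com", "HTTPS"):
        # The textual form includes SvcParams such as alpn="h3,h2".
        print(rdata)
except dns.resolver.NoAnswer:
    print("No HTTPS record published for this name")
```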
Transport security doesn't make 0-days any less of a concern.
> It was also happening, fully automated, as shitty ISPs injected their own ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.
> There is no "balance" once you understand that bad people will swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money from it.
The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more assurance that your food wasn't tampered with during transport, but somehow you live with that. Similarly, the security of physical mail is 100% a legislative construct.
> You also have to understand that you don't have any control over which networks your "fluffy kitten page" data passes through - malicious groups have pulled off BGP hijacking multiple times.
I don't, but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.
> So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.
Not at all - unless you are also expecting them to have their fluffy kitten postcards checked for anthrax. In general, it is security people who often need to touch grass, because the security model they are working with is entirely divorced from reality.
Interest is probably going to be low but not zero - I often play games long after they have been released, and sometimes I intentionally use older versions that are no longer supported by current mods.
I am going to cross the street in front of that speeding car, because the driver will be held liable when I get hit and die.
If there is not even a possibility of hijacking the traffic, a whole range of things just won't happen. And holding someone liable is not the solution.
I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.
> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet - which means trusting people, and trusting all the people is a bad idea.
Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!), so they need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.
> It also discourages naive experimentation
When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.
> Transport security doesn't make 0-days any less of a concern.
It does. Each layer of security doesn't eliminate the problem but does make the attack harder.
Mail and food are different in that there are no limitless, scalable attacks against them that can originate anywhere around the globe.
It does make the actual execution of said attacks significantly harder. To actually hit someone's browser, they need to receive your payload. In the naive case, you can stick it on a webserver you control, but how many people are going to randomly visit your website? Most people visit only a handful of domains on a regular basis, and you've got a couple of days, tops, before your exploit is going to be patched.
So you need to get your payload into the responses from those few domains people are actually making requests from. If you can pwn one of them, fantastic. Serve up your 0-day. But those websites are big, and are constantly under attack. That means you're not going to find any low-hanging fruit vulnerability-wise. Your best bet is trying to get one of them to willingly serve your payload, maybe in the guise of an ad or something. Tricky, but not impossible.
But before universal HTTPS, you have another option: target the delivery chain. If they connect to a network you control? Pwned. If they use a router with bad security defaults that you find a vulnerability in? Pwned. If they use a small municipal ISP that turns out to have skimped on security? Pwned. Hell, you open up a whole attack vector via controlling an intermediate router at the ISP level. That's not to mention targeting DNS servers.
HTTPS dramatically shrinks the attack surface for the mass distribution of unwanted payloads down to basically the high-traffic domains and the CA chain. That's a massive reduction.
> The only people who can realistically MITM your connection are network operators and governments.
Literally anyone can be a network operator. It takes minimal hardware. Coffee shop with wifi? Network operator. Dude popping up a wifi hotspot off his phone? Network operator. Sketchy dude in a black hoodie with a Raspberry Pi bridging "Starbucks_guest" as "Starbucks Complimentary Wifi"? Network operator. Making "network operators" responsible for the security of every packet of web traffic would mean drastically reducing internet access.
> You have no more assurance that your food wasn't tampered with during transport, but somehow you live with that.
I've yet to hear of a case where some dude in a basement poisoned a Sysco truck without having to even put on pants. Routers get hacked plenty.
HTTPS is an easy, trivial-cost solution that completely eliminates multiple types of threats, several of which either do major damage to their target or risk mass exposure, or both. Universal HTTPS is like your car beeping at you when you start moving without your seat belt on: kinda annoying when you're doing a small thing in tightly controlled environments, but it has an outstanding risk reduction, and it can be ignored with a little headache if you really want to.
However, I think there is no reason at all that a decentralized system couldn't be far, _far_ simpler for a user to instantiate (not to mention far more secure and private). Crypto gets a lot of hate on HN, but that seems mostly due to people's dislike of anything dealing with 'currency' systems or the financial schemes that touch it. This is a despised opinion here, but I am still actually excited for crypto systems that solve real-world problems like TLS certs, DNS, et al.
Iroh seems like a _fantastic_, phenomenal system to showcase this idea. It allows for a very fast decentralized web experience built on modern primitives such as BLAKE3, QUIC, and so on, but doesn't really touch any financial stuff at all. It's simply a good system.
I hope we can slowly move to a system that uses the decentralized consensus algorithms created in the crypto space to remove the trust in (typically big, corporate, and likely backdoored) centralized entities that our system today _requires_ without any alternative.
But that analogy of course runs dry rather quickly, because you can look both ways when crossing the street - on the internet, as I mentioned, you cannot control where your data flows, and bad actors have already proven that they will take advantage of that.
This is why it is not like an overpass that you can build where the need is - because for internet traffic, the need is everywhere.
Only if you are talking about actual events in which this is happening as a matter of course. Because that's what it is when ISPs inject ads into plain-text HTTP traffic: a matter of course. It's a bit more like saying that we don't have a way to effectively enforce our laws against maliciously reckless driving so we install a series of speed bumps on the road (it's still not quite the same thing because it doesn't make the reckless driving impossible but it does increase the cost).
But it's not like we're talking about agreeable activity here, anyway. This particular case against TLS sounds like a case that favors criticizing an imperfect solution to widespread negative behavior over criticizing the negative behavior. It seems reasonable to look at the speed bumps (which one may or may not find distasteful) and curse the reckless behavior of those who incentivized their construction.
If the website really isn't maintained, then it's only a matter of time until the server is part of a botnet. Setting up LE for a simple site takes half an hour once.
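For context on the "half an hour once" claim, the issuance step is roughly one command via certbot's webroot plugin - here sketched from Python purely for illustration, with placeholder paths, domain, and email, assuming certbot is already installed (renewals are then handled by certbot's own timer):

```python
# Rough sketch of a one-shot Let's Encrypt issuance using certbot's webroot
# plugin. All paths, the domain, and the email are placeholders; certbot must
# already be installed, and its systemd timer/cron job handles renewals.
import subprocess

subprocess.run(
    [
        "certbot", "certonly",
        "--webroot", "-w", "/var/www/html",  # directory the site is served from
        "-d", "example.org",                 # placeholder domain
        "--non-interactive", "--agree-tos",
        "-m", "admin@example.org",           # placeholder contact address
    ],
    check=True,
)
```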