
489 points | gslin | 1 comment
pests ◴[] No.42191619[source]
It feels like just yesterday I was paying for certs, or worse, just running without.

Can't believe it's been ten years.

replies(1): >>42191666 #
ozim ◴[] No.42191666[source]
Can’t believe there are still anti-TLS weirdos.
replies(7): >>42191688 #>>42191718 #>>42191893 #>>42192714 #>>42192733 #>>42193057 #>>42193614 #
Pannoniae ◴[] No.42191893[source]
TLS is not a panacea and it's not universally positive. Here are some arguments against it, for balance.

TLS is fairly computationally intensive - sure, it's not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because without AES-NI (the AES instruction set) there's no hardware acceleration for the bulk encryption, so it's hilariously slow.
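
For a rough feel of the symmetric-crypto cost, here's a minimal Python sketch (assuming the third-party 'cryptography' package) that measures AES-GCM throughput. On a CPU with AES-NI this typically runs at hundreds of MiB/s or more per core; pre-AES-NI hardware has to grind it out in software:

    # Rough single-core AES-128-GCM throughput measurement.
    # Requires: pip install cryptography
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # fixed nonce is fine for a benchmark only;
                            # never reuse a nonce in real traffic
    payload = os.urandom(1024 * 1024)  # 1 MiB buffer

    start = time.perf_counter()
    for _ in range(100):
        aesgcm.encrypt(nonce, payload, None)
    elapsed = time.perf_counter() - start
    print(f"~{100 / elapsed:.0f} MiB/s AES-128-GCM on this machine")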

It also encourages memoryholing old websites which aren't maintained - once the cert lapses, browsers throw up scary warnings, visitors stop coming, and priceless knowledge is lost because no one is maintaining the site. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore... if my drives fail, that knowledge will be lost forever.

It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?

It also discourages naive experimentation - sure, if you know how, you can MitM your own connection, but for the curious-yet-not-very-technical user, that's probably an insurmountable roadblock.
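
To be fair, you can get surprisingly far without any MitM tooling just by opening the connection yourself. A small sketch using only Python's standard library (example.com is a placeholder; substitute any HTTPS host):

    # Inspect the negotiated TLS parameters and the server certificate.
    import socket, ssl

    hostname = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print("protocol:", tls.version())   # e.g. TLSv1.3
            print("cipher:", tls.cipher())      # (name, protocol, bits)
            cert = tls.getpeercert()
            print("issuer:", dict(x[0] for x in cert["issuer"]))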

replies(6): >>42191942 #>>42192026 #>>42192088 #>>42192426 #>>42192479 #>>42193243 #
account42 ◴[] No.42192479[source]
I find the lack of backwards compatibility also concerning - and that is not something that can be fixed, as the deprecation of old SSL/TLS versions and ciphers is intentional.

Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
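
For what it's worth, the override still exists at the API level even where browser UI hides it. A sketch of what "let me through anyway" looks like, using one of badssl.com's intentionally broken public test hosts:

    # Deliberately skip certificate verification to reach a site with a
    # lapsed cert. Obviously unsafe against active attackers - the point
    # is that the user chose to do it.
    import ssl, urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    url = "https://expired.badssl.com/"  # public host with an expired cert
    with urllib.request.urlopen(url, context=ctx) as resp:
        print(resp.status, resp.reason)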

TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.

There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow a downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
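
You can see the bootstrapping problem directly: the HSTS policy arrives as a response header, so a browser that has never completed an HTTPS visit (and has no preload entry) has no downgrade protection yet. A quick check against badssl.com's HSTS test host:

    # Print a site's HSTS policy header, if any.
    import urllib.request

    with urllib.request.urlopen("https://hsts.badssl.com/") as resp:
        print(resp.headers.get("Strict-Transport-Security"))
        # e.g. "max-age=..." - a browser that has never seen this
        # header is still exposed on its first plain-HTTP contact,
        # which is exactly the interception window described above.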

In my ideal world:

a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change, and the transport security doesn't change the resource you are addressing. A separate scheme doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).

b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE (see the sketch after this list).

c) This mechanism would also securely inform browsers about which protocols and ciphers the server supports, allowing backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.

d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
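
Sketch for (b): with DANE, the certificate (or a hash of it) is pinned in a TLSA record under the domain itself. A minimal lookup in Python, assuming the third-party 'dnspython' package and a name that actually publishes TLSA records - mail servers are the most common deployment today, and the name below is only illustrative:

    # Fetch a TLSA record; a DANE-aware client would compare it against
    # the certificate the server actually presents. Real DANE also
    # requires validating the DNSSEC chain, which this sketch skips.
    import dns.resolver  # pip install dnspython

    name = "_25._tcp.mail.ietf.org"  # illustrative; pick a DANE-enabled host
    for rr in dns.resolver.resolve(name, "TLSA"):
        # usage / selector / matching-type / association data
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex()[:32] + "...")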