489 points gslin | 28 comments
pests ◴[] No.42191619[source]
It feels like just yesterday I was paying for certs, or worse, just running without.

Can't believe it's been ten years.

replies(1): >>42191666 #
ozim ◴[] No.42191666[source]
Can’t believe there are still anti-TLS weirdos.
replies(7): >>42191688 #>>42191718 #>>42191893 #>>42192714 #>>42192733 #>>42193057 #>>42193614 #
1. Pannoniae ◴[] No.42191893[source]
TLS is not a panacea and it's not universally positive. Here are some arguments against it, for balance.

TLS is fairly computationally intensive - sure, not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support to accelerate the handshake, so it's hilariously slow.

It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost when websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore... if my drives fail, that knowledge will be lost forever.

It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?

It also discourages naive experimentation - sure, if you know how, you can MitM your own connection, but for the curious, not-very-technical user that's probably an insurmountable roadblock.
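A minimal Python sketch of that kind of experiment, assuming a test server you run yourself (localhost:4433 is a placeholder) - deliberately turning verification off to peek inside your own connection:

    import socket, ssl

    # Deliberately unverified context - only sensible against your
    # own test server; this is exactly the MitM-prone setup that
    # normal certificate checking exists to prevent.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("localhost", 4433)) as sock:
        with ctx.wrap_socket(sock) as tls:
            print(tls.version(), tls.cipher())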

replies(7): >>42191942 #>>42192026 #>>42192088 #>>42192426 #>>42192479 #>>42193243 #>>42203762 #
2. MrGreenTea ◴[] No.42191942[source]
Regarding the stuff you safeguard: what are your reasons for not sharing it somehow, to prevent that loss when (not if) your drives fail?
replies(1): >>42191958 #
3. Pannoniae ◴[] No.42191958[source]
I mean, I do! The music I have I put on Soulseek, although the more obscure stuff hasn't been downloaded yet. I also have fairly old video game mods - I don't even know where to share them or if anyone would be interested at all.
replies(2): >>42192304 #>>42192699 #
4. ratorx ◴[] No.42192026[source]
> if I want to host a website …

The fundamental problem is a question of trust. There are three ways:

* Well known validation authority (the public TLS model)

* TOFU (the default SSH model)

* Pre-distribute your public keys (the self-signed certificate model)

Are there any alternatives?

If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is clearly terrible security UX.
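As a rough sketch of that model in Python - the client pre-trusts a single self-signed root instead of the system CA bundle (the file name and hostname are placeholders):

    import socket, ssl

    # Trust only our own root certificate - "become your own root
    # of trust" rather than relying on the public CA set.
    ctx = ssl.create_default_context(cafile="my-root.pem")

    with socket.create_connection(("intranet.example", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="intranet.example") as tls:
            print(tls.getpeercert()["subject"])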

replies(2): >>42192098 #>>42192280 #
5. Sesse__ ◴[] No.42192088[source]
The handshake doesn't primarily depend on AES; it is typically a Diffie-Hellman variant (which doesn't have any acceleration) that takes time. Anyway, you're hopefully using TLS 1.3 by now, where you can use ChaCha20 instead of AES :-)
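You can check what your client actually negotiates with a few lines of stdlib Python (example.com is a placeholder; whether an AES-GCM or a ChaCha20-Poly1305 suite wins depends on the server's preferences):

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # e.g. TLSv1.3 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
            print(tls.version(), tls.cipher())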
6. rocqua ◴[] No.42192098[source]
There is the web of trust, where you trust people who are trusted by your friends.

There are issues with it, but it is an alternative model, and I could see it being made to work.

replies(2): >>42192225 #>>42192633 #
7. ratorx ◴[] No.42192225{3}[source]
Ah, I forgot about that and never really considered it because GPG is so annoying to use, but it is fairly reasonable.

I don’t see that it has many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.

I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.

8. xorcist ◴[] No.42192280[source]
> Are there any alternatives?

The obvious alternative would be a model where domain-validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership, as that is the way they are (mostly) used.

There is a risk that Let's Encrypt and other "good enough" solutions take us further from that. There are also many actors with an economic interest in the established model, both in the PKI business and among consultants for whom law enforcement agencies are important customers.

replies(1): >>42192322 #
9. tomalbrc ◴[] No.42192304{3}[source]
The Internet Archive?
10. ratorx ◴[] No.42192322{3}[source]
How would you validate whether a certificate was signed by a registrar or not?

If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
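The lookup side of DANE is just a DNS query for a TLSA record at _port._proto.domain. A sketch using the third-party dnspython package (example.com is a placeholder, and a plain stub resolver doesn't itself validate DNSSEC):

    import dns.resolver  # third-party: pip install dnspython

    # DANE publishes certificate data at _<port>._<proto>.<domain>
    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
    for rdata in answers:
        # certificate usage / selector / matching type, then the hash
        print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())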

replies(2): >>42192474 #>>42200482 #
11. ozim ◴[] No.42192426[source]
*It also discourages naive experimentation* - that's the point: if you put up a silly website, no one can easily MitM it while its data is sent across the globe and use a 0-day in the browser on your "fluffy kittens page".

The biggest problem Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED - it wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.

It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.

There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money on it.

You also have to understand that you don't have any control over which networks your "fluffy kitten page" data will pass through - malicious groups have pulled off BGP hijacking multiple times.

So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.

replies(1): >>42192545 #
12. xorcist ◴[] No.42192474{4}[source]
How do you validate any certificate? You'd have to trust the registrar, presumably like you trust any one CA today. The web browsers do a decent job keeping up to date with this, and new top-level domains aren't added on a daily basis anyway.

Utilizing DNS, whois, or a purpose-built protocol directly would alleviate the problem altogether, but should probably be done by way of an updated TLS specification.

Any realistic migration should probably exist alongside the public CA model for a very long time.

13. account42 ◴[] No.42192479[source]
I find the lack of backwards compatibility also concerning - and that is not something that can be fixed, as the deprecation of old SSL/TLS versions and ciphers is intentional.

Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
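That failure mode is easy to observe from a client; in this stdlib Python sketch (example.com is a placeholder), an already-expired cert makes the handshake raise ssl.SSLCertVerificationError before any data flows:

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()
            # validity window of the certificate the site is serving
            print(cert["notBefore"], cert["notAfter"])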

TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.

There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).

In my ideal world:

a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change, and transport security doesn't change the resource you are addressing. A separate protocol doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).

b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE.

c) This mechanism would also securely inform browsers about which protocols and ciphers the server supports, allowing backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.

d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.

14. account42 ◴[] No.42192545[source]
> It also discourages naive experimentation - that's the point: if you put up a silly website, no one can easily MitM it while its data is sent across the globe and use a 0-day in the browser on your "fluffy kittens page".

Transport security doesn't make 0-days any less of a concern.

> It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.

That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.

> There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money on it.

The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more security that your food wasn't tampered with during transport but somehow you live with that. Similarly, the security of physical mail is a 100% legislative construct.

> You also have to understand that you don't have any control over which networks your "fluffy kitten page" data will pass through - malicious groups have pulled off BGP hijacking multiple times.

I don't, but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.

> So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.

Not at all - unless you are also expecting them to have their fluffy kitten postcards checked for anthrax. In general, it is security people who often need to touch grass, because the security model they are working with is entirely divorced from reality.

replies(3): >>42192804 #>>42193262 #>>42193846 #
15. account42 ◴[] No.42192633{3}[source]
Providing initial trust via hyperlinks could be interesting.
16. account42 ◴[] No.42192699{3}[source]
You could try uploading them to modding sites (preferably not ones with a login requirement for downloading) if you don't want to host them yourself. That can be either general modding archives or game-specific community sites - the latter are smaller but more likely to be interested in older mods. Make sure that whatever host you use can be crawled by the Internet Archive.

Interest is probably going to be low but not zero - I often play games long after they have been released and sometimes intentionally using older versions that are no longer supported by current mods.

replies(1): >>42193253 #
17. ozim ◴[] No.42192804{3}[source]
All I got from your explanation is:

I am going to cross the street in front of that speeding car because the driver will be held liable when I get hit and die.

If there is not even a possibility to hijack the traffic, a whole range of things just won't happen. And holding someone liable is not the solution.

replies(2): >>42192863 #>>42192864 #
18. wizzwizz4 ◴[] No.42192863{4}[source]
Technological measures don't make things impossible: they make them harder. And they rarely solve all the consequences of a problem: only the ones that have been explicitly identified.
19. account42 ◴[] No.42192864{4}[source]
The situation is more akin to demanding that pedestrians be prevented from crossing the road at all cost because a malicious driver could ignore all red lights. And of course banning pedestrians isn't enough. After all, motorcycles are also pretty unsafe, so we ban those too. But you see, someone could also be pointing a bazooka at the road, so then we require all cars to have sufficient armor plating in order to be allowed on the road. That is, before realizing that portable nukes exist and you never know who has one. We don't do that. Instead we develop specific solutions (e.g. an over/underpass for high-risk intersections, walls for highways) where they are actually needed, without losing sight of the unreasonable cost (not just monetary) that demanding zero risk would impose.
replies(2): >>42194300 #>>42198633 #
20. dspillett ◴[] No.42193243[source]
> It is also a very centralised model

I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.

> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?

Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet, which means trusting all the people - and trusting all the people is a bad idea.

Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!), so they need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.

> It also discourages naive experimentation

When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.

replies(1): >>42194201 #
21. Pannoniae ◴[] No.42193253{4}[source]
You are entirely right - although I'd have to be careful with what I upload and where, because on Steam Workshop there are assholes who threaten to DMCA you without basis, and there are similar problems on other sites too. But I'll look around :)
22. hehehheh ◴[] No.42193262{3}[source]
Counterpoints:

> Transport security doesn't make 0-days any less of a concern.

It does. Each layer of security doesn't eliminate the problem but does make the attack harder.

Mail and food are different in that there are not limitless scalable attacks that can originate anywhere around the globe.

23. OkayPhysicist ◴[] No.42193846{3}[source]
> transport security doesn't make 0-days any less of a concern.

It does make the actual execution of said attacks significantly harder. To actually hit someone's browser, they need to receive your payload. In the naive case, you can stick it on a webserver you control, but how many people are going to randomly visit your website? Most people visit only a handful of domains on a regular basis, and you've got tops a couple of days before your exploit is going to be patched.

So you need to get your payload into the responses from those few domains people are actually making requests from. If you can pwn one of them, fantastic. Serve up your 0-day. But those websites are big, and are constantly under attack. That means you're not going to find any low-hanging fruit vulnerability-wise. Your best bet is trying to get one of them to willingly serve your payload, maybe in the guise of an ad or something. Tricky, but not impossible.

But before universal https, you have another option: target the delivery chain. If they connect to a network you control? Pwned. If they use a router with bad security defaults that you find a vulnerability in? Pwned. If they use a small municipal ISP that turns out to have skimped on security? Pwned. Hell, you open up a whole attack vector via controlling an intermediate router at the ISP level. That's not to mention targeting DNS servers.

HTTPS dramatically shrinks the attack surface for the mass distribution of unwanted payloads down to basically the high-traffic domains and the CA chain. That's a massive reduction.

> The only people who can realistically MITM your connection are network operators and governments.

Literally anyone can be a network operator. It takes minimal hardware. Coffee shop with wifi? Network operator. Dude popping up a wifi hotspot off his phone? Network operator. Sketchy dude in a black hoodie with a raspberry pi bridging the "Starbucks_guest" as "Starbucks Complimentary Wifi"? Network operator. Putting the security of every packet of web traffic onto "network operators" means drastically reducing internet access.

> You have no more security that your food wasn't tampered with during transport but somehow you live with that.

I've yet to hear of a case where some dude in a basement poisoned a Sysco truck without even having to put on pants. Routers get hacked plenty.

HTTPS is an easy, trivial-cost solution that completely eliminates multiple types of threats, several of which either do major damage to their target or risk mass exposure, or both. Universal HTTPS is like your car beeping at you when you start moving without your seat belt on: kinda annoying when you're doing a small thing in a tightly controlled environment, but it has an outstanding risk reduction, and can be ignored with a little headache if you really want to.

24. chaxor ◴[] No.42194201[source]
I agree that centralization is bad, and it's one of the worst parts of HTTPS (the other being that things like ed25519, ChaCha20, Poly1305, and sntrup are generally viewed as better modern alternatives to AES, so post-quantum systems like rosenpass https://github.com/rosenpass/rosenpass are preferable).

However, I think there is no reason at all that a decentralized system cannot be far, _far_ simpler for a user to instantiate (not to mention far more secure and private). Crypto gets a lot of hate on HN, but it seems that is mostly due to people's dislike of anything dealing with 'currency' or financial systems that touch it. This is a despised opinion here, but I am still actually excited for crypto systems that solve real-world problems like TLS certs, DNS, et al.

Iroh seems like a _fantastic_, phenomenal system to showcase this idea. It allows for a very fast decentralized web experience on modern cryptography such as Blake3, QUIC, and so on, but doesn't really touch any financial stuff at all. It's simply a good system.

I hope we can slowly move to a system that uses the decentralized consensus algorithms created in the crypto space to remove the trust in (typically big, corporate, and likely backdoored) centralized entities that our system today _requires_, without any alternative.

25. ozim ◴[] No.42194300{5}[source]
For me, TLS is an overpass - yeah, it costs more to build, and pedestrians have to climb the stairs to get to the other side, but it is worth it. Then hopefully we have Let's Encrypt, which can be the elevator/lift so pedestrians don't have to climb the stairs.

But that analogy of course runs dry rather quickly, because you can look both ways when crossing the street - on the internet, as I mentioned, you cannot control where data flows, and bad actors have already proven that they are doing so.

This is why it is not like an overpass that you can build where the need is - because for internet traffic, the need is everywhere.

26. lcnPylGDnU4H9OF ◴[] No.42198633{5}[source]
> The situation is more akin to demanding that pedestrians should be prevented from crossing the road at all cost because a malicious driver could ignore all red lights.

Only if you are talking about actual events in which this is happening as a matter of course. Because that's what it is when ISPs inject ads into plain-text HTTP traffic: a matter of course. It's a bit more like saying that we don't have a way to effectively enforce our laws against maliciously reckless driving so we install a series of speed bumps on the road (it's still not quite the same thing because it doesn't make the reckless driving impossible but it does increase the cost).

But it's not like we're talking about agreeable activity here, anyway. This particular case against TLS sounds like a case that favors criticizing an imperfect solution to widespread negative behavior over criticizing the negative behavior. It seems reasonable to look at the speed bumps (which one may or may not find distasteful) and curse the reckless behavior of those who incentivized their construction.

27. tptacek ◴[] No.42200482{4}[source]
A recent thread going into details of why (only a tiny fraction of zones are signed, in North America that count has gone sharply down over recent intervals, and browsers don't support it):

https://news.ycombinator.com/item?id=41916478

28. bmicraft ◴[] No.42203762[source]
> It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost when websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore... if my drives fail, that knowledge will be lost forever.

If the website really isn't maintained, then it's only a matter of time until the server is part of a botnet. Setting up LE for a simple site takes half an hour once.