569 points todsacerdoti | 4 comments
imoreno ◴[] No.42599386[source]
I agree with most of this. If every website followed these, the web would be heaven (again)...

But why this one?

>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.

What is wrong with redirecting 80 to 443 in today's world?

Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.

These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
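
For what it's worth, the redirect itself is tiny. A minimal sketch in Go's standard library (the cert/key paths and served directory are placeholders):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Port 80: answer every request with a redirect to the HTTPS origin.
        go func() {
            log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(
                func(w http.ResponseWriter, r *http.Request) {
                    target := "https://" + r.Host + r.URL.RequestURI()
                    // 301 = permanent; use 302/307 while still testing.
                    http.Redirect(w, r, target, http.StatusMovedPermanently)
                })))
        }()

        // Port 443: serve the actual site over TLS.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem",
            http.FileServer(http.Dir("./public"))))
    }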

replies(10): >>42599423 #>>42599448 #>>42599461 #>>42599916 #>>42600279 #>>42601148 #>>42605479 #>>42605998 #>>42609172 #>>42627972 #
nayuki ◴[] No.42599461[source]
When you force TLS/HTTPS, you are committing both yourself (server) and the reader (client) to a perpetual treadmill of upgrades (a.k.a. churn). This isn't a value judgement, it is a fact; it is a positive statement, not a normative statement. Roughly speaking, the server and client software need to be within, say, 5 years of each other, maybe 10 years at maximum - or else they are not compatible.

For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.

For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old ones expire. In rare cases you might also need to deal with ordering a revocation.
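
To make the treadmill concrete, here is a sketch in Go of the knobs both sides have to keep agreeing on; tightening any of them silently drops older clients (the cert/key paths are placeholders):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        cfg := &tls.Config{
            // Raising this cuts off any client that only speaks TLS 1.0/1.1.
            MinVersion: tls.VersionTLS12,
            // Trimming this list (it applies to TLS 1.2 and below in Go)
            // drops clients that only know older suites.
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            },
        }
        srv := &http.Server{Addr: ":443", TLSConfig: cfg}
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }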

I can think of a few reasons for supporting unsecured HTTP: people using old browsers on old computers/phones (say, Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, and simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
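
If you do want to serve those clients, dual HTTP/HTTPS is mechanically simple: serve the same handler on both ports and skip the redirect and HSTS. A sketch (cert/key paths are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "same content over HTTP or HTTPS")
        })

        // No redirect and no Strict-Transport-Security header, so clients
        // that can't complete a modern TLS handshake still get the page.
        go func() { log.Fatal(http.ListenAndServe(":80", mux)) }()
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
    }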

I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.

For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.

replies(2): >>42599484 #>>42606664 #
1. imoreno ◴[] No.42599484[source]
>the server and client softwares need to be within say, 5 years of each other, maybe 10 years at maximum

That's a fair point. HTTP changes more slowly. Makes sense for sites where you're aiming for longevity.

replies(1): >>42600064 #
2. lstamour ◴[] No.42600064[source]
Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow, and in which configurations, is between you and your... I dunno, PCI-DSS auditor or something.

I'm not saying SSL isn't complicated - it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably, though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise for the reader.

... that said, is weak SSL better than no SSL? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. And enabling SSL by default doesn't necessarily exclude older clients - as long as the client's clock is set correctly, of course.
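
One way to see the tradeoff from the client side is to handshake the way an old device would. A sketch in Go that caps the handshake at TLS 1.0 (the host is a placeholder; most modern servers will refuse the connection, which is rather the point):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Impersonate a legacy client: both bounds pinned to TLS 1.0,
        // since Go's default client minimum is higher than that.
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
            MinVersion: tls.VersionTLS10,
            MaxVersion: tls.VersionTLS10,
        })
        if err != nil {
            log.Fatalf("handshake as a legacy client failed: %v", err)
        }
        defer conn.Close()

        state := conn.ConnectionState()
        fmt.Printf("negotiated version 0x%04x, cipher suite 0x%04x\n",
            state.Version, state.CipherSuite)
    }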

I've intentionally not mentioned expiring root CAs, as that's an inherent problem in the design of SSL and requires system or browser patching to fix. Likewise, https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
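
The renewal half of that administration problem can at least be automated, e.g. with ACME via golang.org/x/crypto/acme/autocert. A sketch (the domain and cache directory are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        // The manager obtains certificates from Let's Encrypt and re-issues
        // them before expiry, so nobody has to watch the calendar.
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"),
            Cache:      autocert.DirCache("/var/lib/autocert"),
        }
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(),
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello")
            }),
        }
        // Empty cert/key paths: certificates come from the manager.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }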

As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated list of SSL certificates.

My problem with older devices tends to be poor IPv6 compatibility (on XP SP2/SP3 it's an add-on, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, the question of what displays when responsive layouts aren't available (how big is the font?), etc.

Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works, or run the Qualys SSL checker, for example. Browsers maintain backwards compatibility, but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)

So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example - I still remember the Rails snowman trick to get IE to behave correctly.

replies(1): >>42607635 #
3. marcosdumay ◴[] No.42607635[source]
> Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html

Oh, if only TLS was that simple!

People fork TLS libraries and make changes that are supposed to be transparent (well, they should be), and suddenly they aren't compatible anymore. Any table with the actually relevant data would be huge.

replies(1): >>42660512 #
4. lstamour ◴[] No.42660512{3}[source]
Well, there is this: https://clienttest.ssllabs.com:8443/ssltest/viewMyClient.htm... But you’d have to test your own clients.

One imagines, though, that with enough clients connecting to your site you’ll end up seeing every type of incompatible client eventually.

The point I was trying to make is that removing SSL doesn’t make your site compatible either, and the number of incompatible clients is small compared to the number of compatible ones. Arguably, compatibility alone is not a reason to avoid SSL. The list of incompatibilities doesn’t stop at SSL - there’s still DNS, IPv6 and so on.

SSL is usually compatible for most people - enough that it has basically become the de facto default for the web at large. There are still issues, though: CMOS batteries dying and clients having the wrong time is the one that comes to mind first, and certificate chain issues too. SSL is complex, no doubt, especially for a server-side implementation trying to remain compatible with clients. That’s why tools like Qualys’ exist in the first place!
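
That bad-clock failure mode is easy to reproduce: verify a live chain as if the client's clock were years behind. A sketch in Go (the host is a placeholder; expect a "certificate has expired or is not yet valid" style error):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Re-verify the presented chain with the clock wound back 5 years,
        // which is roughly what a dead CMOS battery does to a client.
        certs := conn.ConnectionState().PeerCertificates
        inter := x509.NewCertPool()
        for _, c := range certs[1:] {
            inter.AddCert(c)
        }
        _, err = certs[0].Verify(x509.VerifyOptions{
            Intermediates: inter,
            CurrentTime:   time.Now().AddDate(-5, 0, 0),
        })
        fmt.Println("verification with a slow clock:", err)
    }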