563 points joncfoo | 24 comments
8organicbits ◴[] No.41205729[source]
My biggest frustration with .internal is that it requires a private certificate authority. Lots of organizations struggle to fully set up trust for the private CA on all internal systems. When you add BYOD or contractor systems, it's a mess.

Using a publicly valid domain offers a number of benefits, like being able to use a free public CA such as Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.

Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS-01 certificate validation, and keep all private name resolution on your own internal DNS server.

[1] https://www.getlocalcert.net/
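
As a rough illustration (not getlocalcert's actual API), the DNS-01 flow boils down to publishing a TXT record at _acme-challenge.<name> and letting the CA look it up. A minimal sketch using dnspython, with placeholder hostname and token:

```python
# Minimal sketch, not getlocalcert's API: check that the ACME DNS-01
# challenge record is visible in DNS before asking the CA to validate.
# Hostname and token below are placeholders.
import dns.resolver  # dnspython 2.x


def challenge_record_visible(domain, expected_token):
    """Return True if _acme-challenge.<domain> publishes the expected TXT value."""
    name = f"_acme-challenge.{domain}"
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(expected_token in rdata.to_text() for rdata in answers)


if __name__ == "__main__":
    # A real ACME client (certbot, lego, ...) supplies the actual token value.
    print(challenge_record_visible("app.corp.example.com", "placeholder-token"))
```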

replies(12): >>41206030 #>>41206106 #>>41206231 #>>41206513 #>>41206719 #>>41206776 #>>41206828 #>>41207112 #>>41208240 #>>41208353 #>>41208964 #>>41210736 #
1. prussian ◴[] No.41206828[source]
Just be mindful that any certs you issue this way will be public information [1], so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well, and I can still see them renewing certs, including an unfortunate wildcard cert which wasn't issued by me.

[1] https://crt.sh/
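
For the curious, crt.sh also exposes this data as JSON. A small sketch (assuming the q=%.<domain>&output=json endpoint still behaves this way; it may be rate limited) that lists every name logged for a domain:

```python
# Sketch of pulling CT-logged names for a domain from crt.sh's JSON output.
import json
import urllib.parse
import urllib.request


def ct_logged_names(domain):
    """Return the set of hostnames appearing in CT-logged certs under <domain>."""
    url = "https://crt.sh/?q=" + urllib.parse.quote(f"%.{domain}") + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names = set()
    for entry in entries:
        names.update(entry.get("name_value", "").splitlines())
    return names


if __name__ == "__main__":
    for name in sorted(ct_logged_names("example.com")):
        print(name)  # every one of these is public information
```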

replies(3): >>41206969 #>>41208715 #>>41208926 #
2. Helmut10001 ◴[] No.41206969[source]
Just use wildcard certs and internal subdomains remain internal information.
replies(2): >>41207432 #>>41210084 #
3. qmarchi ◴[] No.41207432[source]
There's a larger risk: if someone breaches a system holding a wildcard cert, they can impersonate _every_ part of your domain, not just the one application.
replies(3): >>41207849 #>>41208648 #>>41209814 #
4. politelemon ◴[] No.41207849{3}[source]
It's the opposite - there is a risk, but not a larger one. Environment traversal is easier through a certificate transparency log; there is almost zero work to do. With a wildcard compromise, the environment is not immediately visible. It's much safer to use wildcard certs for internal use.
replies(1): >>41208276 #
5. ixfo ◴[] No.41208276{4}[source]
Environment visibility is easy to get. If you pwn a box which has foo.internal, you can now impersonate foo.internal. If you pwn a box which has *.internal, you can now impersonate super-secret.internal and everything else, and now you're a DNS change away from MITM across an entire estate.

Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...

6. eru ◴[] No.41208648{3}[source]
Can't you have a limited wildcard?

Something like *.for-testing-only.company.com?

replies(1): >>41208724 #
7. zikduruqe ◴[] No.41208715[source]
I use https://github.com/FiloSottile/mkcert for my internal stuff.
8. kevincox ◴[] No.41208724{4}[source]
Yes, but then you are putting more information into the publicly logged certificate. So it is a tradeoff between certificate scope and data leakage.

I guess you can use a pattern like {human name}.{random}.internal, but then you lose memorability.
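
One reading of that pattern, sketched below with placeholder names: request a wildcard for *.<random>.<zone>, so only the opaque label is publicly logged while the memorable part stays off the CT logs:

```python
# Sketch of the idea with placeholder names: only the random label (covered by
# a wildcard cert for *.<zone>) becomes public; the memorable label does not.
import secrets


def private_zone(parent="example.com"):
    """Opaque zone label; a wildcard cert for *.<returned zone> is CT-logged."""
    return f"{secrets.token_hex(6)}.{parent}"


zone = private_zone()   # e.g. "3f9c1a2b7d4e.example.com" (public via CT)
host = f"mail.{zone}"   # the human-readable part never reaches the CT logs
print(host)
```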

replies(2): >>41208788 #>>41208805 #
9. 8organicbits ◴[] No.41208788{5}[source]
I've considered building tools to manage decoy certificates, e.g. registering mail.example.com even if you don't have a mail server, but I couldn't justify polluting the cert transparency logs.
10. lacerrr ◴[] No.41208805{5}[source]
Made-up problem; that approach is fine.
11. moontear ◴[] No.41208926[source]
I wish there were a way to remove public information such as this, just like historical website ownership records. It may be interesting for research purposes, but there is so much stuff in public records that I don't want everyone to have access to. One should have thought about that before creating public records - but you may not be aware of all the ramifications of e.g. just creating an SSL cert with Let's Encrypt or registering a random domain name without privacy extensions.
12. qwertox ◴[] No.41209814{3}[source]
I issue a wildcard cert for *.something.example.com.

All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.

Everything at *.something.example.com, if it is supposed to be privately accessible on the internet, is resolved by a custom DNS server which does not respond to `ANY`-requests and logs every request. You'd need to know which subdomains exist.

something.example.com has an `NS` record pointing to the name of that custom DNS server (ns.example.com), which in turn resolves to its IP.

The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.
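
A quick way to poke at a setup like this is to query the delegated nameserver directly with dnspython (the nameserver IP and hostnames below are placeholders); since the zone contents aren't published anywhere, only names you already know about return records:

```python
# Sketch: query the delegated nameserver for something.example.com directly.
import dns.exception
import dns.resolver  # dnspython 2.x

ns = dns.resolver.Resolver(configure=False)
ns.nameservers = ["203.0.113.53"]  # placeholder IP for ns.example.com

for name in ["xmpp.something.example.com", "guess.something.example.com"]:
    try:
        answer = ns.resolve(name, "A")
        print(name, "->", answer[0].address)
    except dns.exception.DNSException:
        print(name, "-> no record")
```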

replies(1): >>41210531 #
13. ivankuz ◴[] No.41210084[source]
A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an existing HTTP/2 connection when a hostname resolves to the same IP address. If you have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where traffic gets mixed up between services. To add to that, debugging it becomes kind of wild, as the browser keeps reusing connections across browser windows (and maybe even across different Chromium browsers).

I might be messing up technical details, as it's been a long time since I debugged that gRPC Kubernetes mess. All I wanted to say is that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.

replies(2): >>41210199 #>>41211102 #
14. nightpool ◴[] No.41210199{3}[source]
Sounds like you need to get better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell, and it's just setting yourself up for even more pain in the future.
replies(1): >>41210476 #
15. ivankuz ◴[] No.41210476{4}[source]
It was the latest nginx at the time. I actually found a rather obscure issue on GitHub that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.

replies(1): >>41210829 #
16. brewmarche ◴[] No.41210531{4}[source]
This is the DNS setup I’d have in mind as well.

Regarding the certificates, if you don't want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which, when compromised, can be used to hijack everything under something.example.com).

An intermediate CA with name constraints (one that can only sign certificates for names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. I'm not sure which CA can issue one (Let's Encrypt is probably out) or how well supported it is.
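
For what it's worth, the name-constraints extension itself is easy to experiment with locally. A minimal sketch using Python's cryptography package (self-signed purely for illustration; a real intermediate would be signed by the parent CA, and names, key size and lifetime are placeholders):

```python
# Sketch of a name-constrained CA certificate using the "cryptography" package.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Constrained Intermediate CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # would be the root CA's name in a real chain
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        x509.NameConstraints(
            # only names under something.example.com may be signed
            permitted_subtrees=[x509.DNSName("something.example.com")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())  # would be signed with the root CA's key
)
print(cert.extensions.get_extension_for_class(x509.NameConstraints).value)
```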

replies(1): >>41211498 #
17. ploxiln ◴[] No.41210829{5}[source]
That's a misunderstanding of this ingress controller's "ssl-passthrough" feature.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same IP address and using the same wildcard TLS cert, and Chrome reuses the connection for a different subdomain, nginx needs to handle/parse the HTTP and proxy it to the backends. In this ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it; it can't look at the contents of the traffic. This is a limitation of HTTP/TLS/TCP, not of nginx.

replies(1): >>41212088 #
18. therein ◴[] No.41211102{3}[source]
There is definitely that. There is also some sort of strange bug in Chromium-based browsers where a tab can entirely fail to make a certain connection, without even realizing it is not connecting properly. That tab will be broken for that website until you close it and open a new one to navigate to that page.

If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.

I noticed sometimes it responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40, which brought H2 support. I know Chrome 38 never had this issue.

19. qwertox ◴[] No.41211498{5}[source]
I'm "ok" with that risk. It's less risky than other solutions, and there's also the issue that hijacked.something.example.com needs to be resolved by the internal DNS server.

All of this would most likely need to be an inside job with a fair amount of criminal energy. At that level you'd probably also have other attack vectors to consider.

replies(1): >>41214165 #
20. ivankuz ◴[] No.41212088{6}[source]
Thank you very much for such a clear explanation of what's happening. Yeah, I sensed that it's not a limitation of nginx per se: it was asked not to do SSL termination, so of course it can't extract the header from the encrypted bytes. Since I needed gRPC through ASP.NET, it was a Kestrel requirement to do SSL termination that forced me to use ssl-passthrough, which probably comes from a whole different can of worms.
replies(1): >>41246325 #
21. Helmut10001 ◴[] No.41214165{6}[source]
This is also my thinking... if someone compromises the VM that is responsible for retrieving wildcard certs from Let's Encrypt, then you're probably busted anyway. Such a machine would usually sit at the center of the infrastructure, with limited need to be connected to from other machines.
replies(1): >>41217059 #
22. brewmarche ◴[] No.41217059{7}[source]
Probably most people would deem the risk negligible, but it's still worth mentioning, since you should evaluate it for yourself. Regarding the central machine: the certificate must not only be generated or fetched (which, as you said, will probably happen "at the center") but also deployed to the individual services. If you don't use a central gateway terminating TLS early, the certificate will live on many machines, not just "at the center."
replies(1): >>41231672 #
23. Helmut10001 ◴[] No.41231672{8}[source]
You are absolutely right. And deployment itself, if set up poorly, can open up additional vulnerabilities and holes. But there are also many ways to make the deployment quite robust (e.g. upload via push to a deploy server and distribute from there). ... and just by chance, I've written a small bash script that helps distribute SSL certificates from a centrally managed "deploy" server 8) [1].

[1]: https://github.com/Sieboldianus/ssl_get

24. nightpool ◴[] No.41246325{7}[source]
> it was a Kestrel requirement to do SSL termination

Couldn't you just pass it X-Forwarded-Proto like any other web server? Or use a different self-signed key between nginx and Kestrel instead?