563 points joncfoo | 3 comments
8organicbits No.41205729
My biggest frustration with .internal is that it requires a private certificate authority. Lots of organizations struggle to fully set up trust for the private CA on all internal systems. When you add BYOD or contractor systems, it's a mess.

Using a publicly valid domain offers a number of benefits, like being able to use a free public CA such as Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.

Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS certificate validation, and use your own internal DNS server for all private use.

[1] https://www.getlocalcert.net/
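The ACME DNS-01 validation mentioned above boils down to publishing a TXT record at `_acme-challenge.<domain>` whose value is derived from the ACME key authorization. A minimal sketch of that derivation per RFC 8555 §8.4 (the key-authorization string below is a made-up placeholder):

```python
import base64
import hashlib

def dns01_txt_value(key_authorization: str) -> str:
    """DNS-01 record value: base64url(SHA-256(key authorization)), unpadded."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Published as: _acme-challenge.app.example.com.  TXT  "<value>"
print(dns01_txt_value("placeholder-token.placeholder-account-thumbprint"))
```

The point of doing this over DNS is that the host being certified never has to be reachable from the internet; only the TXT record does.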

prussian No.41206828
Just be mindful that any certs you issue this way become public information [1], so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well, and I can still see them renewing certs in the logs, including an unfortunate wildcard cert that wasn't issued by me.

[1] https://crt.sh/
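Anyone can enumerate those issued names from Certificate Transparency logs; crt.sh, for example, exposes a JSON endpoint. A rough sketch (the `%` wildcard and `output=json` parameter are crt.sh conventions; the `name_value` field is what crt.sh returns today and could change):

```python
import json
import urllib.parse
import urllib.request

def crtsh_query_url(domain: str) -> str:
    # '%' is crt.sh's SQL-style wildcard, matching every logged subdomain
    query = urllib.parse.urlencode({"q": f"%.{domain}", "output": "json"})
    return f"https://crt.sh/?{query}"

def issued_names(domain: str) -> list:
    # name_value holds the certificate's DNS names, one per line
    with urllib.request.urlopen(crtsh_query_url(domain), timeout=30) as resp:
        entries = json.load(resp)
    return sorted({n for e in entries for n in e["name_value"].splitlines()})
```

If `issued_names("yourcompany.com")` returns hostnames like `unreleased-product.yourcompany.com`, that's exactly the leak being warned about.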

Helmut10001 No.41206969
Just use wildcard certs, and internal subdomains remain internal information.
ivankuz No.41210084
A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an HTTP/2 connection to an already-resolved IP address. If you have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where traffic gets mixed up between services. To add to that, debugging this stuff becomes kind of wild, as the browser keeps reusing connections across browser windows (and maybe even across different Chromium-based browsers).

I might be misremembering technical details, as it's been a long time since I debugged that gRPC-on-Kubernetes mess. All I wanted to say is that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.
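The exact-certificate safeguard described above amounts to one server block per internal host, each presenting its own certificate; a connection coalesced onto the wrong host then fails TLS validation instead of being silently misrouted. A hypothetical nginx sketch (hostnames, cert paths, and upstream names are made up):

```nginx
# One server block per internal host, each with its own exact (non-wildcard)
# cert. The browser won't coalesce HTTP/2 connections across these hosts,
# because neither cert is valid for the other origin.
server {
    listen 443 ssl http2;
    server_name api.corp.example;
    ssl_certificate     /etc/nginx/certs/api.corp.example.pem;
    ssl_certificate_key /etc/nginx/certs/api.corp.example.key;
    location / { proxy_pass http://api-backend; }
}

server {
    listen 443 ssl http2;
    server_name grafana.corp.example;
    ssl_certificate     /etc/nginx/certs/grafana.corp.example.pem;
    ssl_certificate_key /etc/nginx/certs/grafana.corp.example.key;
    location / { proxy_pass http://grafana-backend; }
}
```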

nightpool No.41210199
Sounds like you need a better reverse proxy...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell, and it's just setting yourself up for even more pain in the future.
ivankuz No.41210476
It was the latest nginx at the time. I actually found a rather obscure issue on GitHub that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.

ploxiln No.41210829
That's a misunderstanding in how you're using this ingress controller's "ssl-passthrough" feature.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same IP address using the same wildcard TLS cert, and Chrome reuses the connection for a different subdomain, nginx needs to parse the HTTP traffic and proxy it to the right backend. In ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake; after that it can't inspect the contents of the traffic at all. This is a limitation of how HTTP, TLS, and TCP compose, not of nginx.
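What ssl-passthrough does is roughly what nginx's stream module offers with `ssl_preread`: peek at the ClientHello's `server_name` once, pick a backend, then blindly pipe bytes. A hypothetical sketch of that SNI-only routing (hostnames and addresses are made up), which makes the limitation concrete: nothing in this config can react if the client reuses the connection for a different Host later.

```nginx
stream {
    # Route purely on the SNI name read from the initial TLS handshake.
    map $ssl_preread_server_name $backend {
        api.corp.example      api_upstream;
        grafana.corp.example  grafana_upstream;
    }
    upstream api_upstream     { server 10.0.0.10:443; }
    upstream grafana_upstream { server 10.0.0.11:443; }

    server {
        listen 443;
        ssl_preread on;       # parse ClientHello only; no TLS termination
        proxy_pass $backend;  # after this, bytes are forwarded opaquely
    }
}
```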

ivankuz No.41212088
Thank you very much for such a clear explanation of what's happening. Yeah, I sensed it's not a limitation of nginx per se: it was asked not to do SSL termination, so of course it can't extract the Host header from the encrypted bytes. Since I needed gRPC through ASP.NET, it was Kestrel's requirement to do its own SSL termination that forced me into ssl-passthrough, which probably comes from a whole different can of worms.
nightpool No.41246325
> it is a kestrel requirement to do ssl termination

Couldn't you just pass it X-Forwarded-Proto like any other web server? Or use a different self-signed key between nginx and Kestrel instead?
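Either option drops the passthrough: nginx terminates TLS itself and forwards the gRPC traffic with `grpc_pass`, passing the original scheme along. A hypothetical sketch (hostnames, ports, and cert paths are made up; Kestrel would be listening for plaintext HTTP/2 here):

```nginx
server {
    listen 443 ssl http2;
    server_name api.corp.example;
    ssl_certificate     /etc/nginx/certs/api.corp.example.pem;
    ssl_certificate_key /etc/nginx/certs/api.corp.example.key;

    location / {
        # TLS is terminated here; tell Kestrel the client really used https
        grpc_set_header X-Forwarded-Proto $scheme;
        grpc_pass grpc://127.0.0.1:5000;
    }
}
```

For the second option, `grpc_pass grpcs://...` with a self-signed cert on the Kestrel side keeps the nginx-to-backend hop encrypted while still letting nginx see the Host header.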