
563 points joncfoo | 1 comment
jcrites No.41205444
Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

It's nice that this is available, but if I were building a new internal system today, I'd use a regular domain name as the root. There are a number of reasons; one is that it's incredibly nice to have the flexibility to make a name visible on the Internet later, even if the system is completely private and internal.

You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I were pretty sure that a given system would never need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish its records on your private network, retaining the option to publish them publicly later.
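
To illustrate, here's a minimal split-horizon sketch using dnsmasq as the internal resolver (the hostnames, subdomain, and addresses are placeholders, not anything from the comment): the publicly owned domain stays empty on the Internet, while the internal resolver answers for names under it.

```
# /etc/dnsmasq.d/internal.conf -- internal resolver only; nothing here is published publicly
# Answer for internal names under a publicly owned domain
address=/build.corp.example.com/10.20.0.15
address=/wiki.corp.example.com/10.20.0.16
# Forward everything else to a normal upstream resolver
server=1.1.1.1
```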

When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also used regular public TLS certificates by default, for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.
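
One way to get a publicly trusted certificate for a name that only resolves internally is an ACME DNS-01 challenge, since it never requires the host itself to be reachable from the Internet. A hedged sketch with certbot (the domain is a placeholder, and the thread doesn't say what tooling was actually used):

```
# Prove control of the name via a DNS TXT record; the service itself stays private.
# --manual prints the TXT record to add by hand; DNS-plugin variants automate that step.
certbot certonly --manual --preferred-challenges dns \
  -d internal-api.example.com
```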

replies(13): >>41205463 #>>41205469 #>>41205498 #>>41205661 #>>41205688 #>>41205794 #>>41205855 #>>41206117 #>>41206438 #>>41206450 #>>41208973 #>>41209122 #>>41209942 #
briHass No.41209122
I just got burned on my home network by running my own CA (.home) and DNS for connected devices. The Android warning when installing a self-signed CA ('someone may be monitoring this network') is fine for my case, if annoying, but my current blocker is using webhooks from a security camera to Home Assistant.

HA allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint with a self-signed/untrusted root cert.

Sure, I could probably run an HTTP->HTTPS proxy that ignores my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in the annoying scenario where certificates serve two goals, encryption and verification, but internal use really only cares about the former.
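
If you did go the proxy route, a sketch of what it could look like with nginx (hostnames, IPs, and the front port are placeholders; 8123 is Home Assistant's default port): the camera speaks plain HTTP to the proxy, which forwards to HA over HTTPS without verifying the self-signed cert.

```
# nginx: plain-HTTP front for a camera that can't trust a private CA
server {
    listen 8124;                          # camera posts its webhooks here over HTTP
    location / {
        proxy_pass https://ha.home:8123;  # Home Assistant, serving the self-signed cert
        proxy_ssl_verify off;             # don't verify the untrusted root
        proxy_set_header Host $host;
    }
}
```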

Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.
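
For the one-off self-signed route, something like this (the hostname is a placeholder) produces a cert an IoT device can keep for years:

```
# 10-year self-signed cert for an IoT endpoint (-addext needs OpenSSL 1.1.1+)
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout camera.key -out camera.crt \
  -subj "/CN=camera.home" \
  -addext "subjectAltName=DNS:camera.home"
```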

replies(2): >>41209261 #>>41212729 #
xp84 No.41212729
Something you may find helpful: I use a `cloudflared` tunnel to add an SSL endpoint for use outside my home, without opening any holes in the firewall. This way HA doesn't care about it (it still works on 10.x.y.z) and your internal webhooks can still be plain HTTP if you want.
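
A rough sketch of what that tunnel config can look like (the hostname, tunnel ID, and internal IP are placeholders; 8123 is HA's default port) -- Cloudflare terminates HTTPS on the public name and the tunnel carries plain HTTP to HA on the LAN:

```
# ~/.cloudflared/config.yml
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: ha.example.com        # public HTTPS name, terminated at Cloudflare
    service: http://10.0.0.5:8123   # plain HTTP to Home Assistant on the LAN
  - service: http_status:404        # catch-all rule cloudflared requires
```

Then `cloudflared tunnel run <tunnel-name>` keeps the outbound-only connection up, so no inbound firewall ports are needed.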