
563 points joncfoo | 1 comment
jcrites ◴[] No.41205444[source]
Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.

You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.
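
To sketch what that looks like in practice (a minimal illustration, not any particular org's setup): the check below, using the third-party dnspython library, confirms that a name under a public domain resolves on the internal resolver while staying invisible to the public Internet. The hostname and resolver addresses are hypothetical.

```python
import dns.resolver  # third-party: pip install dnspython

NAME = "service.corp.example.com"  # hypothetical internal-only record

def lookup(nameserver: str) -> list[str]:
    """Resolve NAME against a specific nameserver, returning A records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return [rr.to_text() for rr in resolver.resolve(NAME, "A")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

# The internal resolver serves the record; public resolvers never see it.
print("internal:", lookup("10.0.0.2"))  # hypothetical internal resolver
print("public:  ", lookup("8.8.8.8"))   # expect [] if nothing leaked
```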

When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also employed regular public TLS certificates by default, for simplicity's sake. If a service needs to become reachable from a new network, or from the Internet, it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and comfortable with this, though it does have tradeoffs, namely that the presence of names in Certificate Transparency (CT) logs can reveal internal information.
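
To make that CT-log tradeoff concrete: anyone can enumerate the certificate names issued for a domain from public CT logs. A minimal sketch using crt.sh, a public CT search frontend (its JSON endpoint shape is assumed from common usage; `example.com` is a placeholder):

```python
import json
import urllib.request

domain = "example.com"  # placeholder; substitute a domain you control
url = f"https://crt.sh/?q=%25.{domain}&output=json"

with urllib.request.urlopen(url) as resp:
    entries = json.load(resp)

# Each entry's name_value may hold several newline-separated SANs.
names = sorted({n for e in entries for n in e["name_value"].split("\n")})
for name in names[:20]:
    print(name)  # internal hostnames with public certs show up here
```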

replies(13): >>41205463 #>>41205469 #>>41205498 #>>41205661 #>>41205688 #>>41205794 #>>41205855 #>>41206117 #>>41206438 #>>41206450 #>>41208973 #>>41209122 #>>41209942 #
ghshephard ◴[] No.41205855[source]
The number one reason that comes to mind is preventing information leakage: you can't screw up your split-DNS configuration and end up leaking your internal IP space if everything is under .internal.

It's much the same reason why some very large IPv6 deployments put protected services in RFC 4193 ULA space (fc00::/7). Of course you have firewalls. And of course you have all sorts of layers of IDS and air-gaps as appropriate. But if by design you don't want to make this space reachable outside the enterprise, the extra step is a belt-and-suspenders approach.

So, even if I mess up my firewall rules and do leak a critical control point, say fd41:3165:4215:0001:0013:50ff:fe12:3456, you wouldn't be able to route to it from the outside anyway.

Same thing with .internal - that will never be advertised externally.
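
For what it's worth, here's a minimal sketch of how such a ULA prefix gets generated (Python standard library only; RFC 4193 section 3.2.2 actually derives the 40-bit Global ID from a timestamp/EUI-64 hash, so plain randomness here is a simplification):

```python
import secrets
import ipaddress

# fd00::/8 is the locally-assigned half of the fc00::/7 ULA block.
# Append a random 40-bit Global ID to get a /48 for the whole org.
global_id = secrets.randbits(40)
prefix_int = (0xFD << 120) | (global_id << 80)
prefix = ipaddress.IPv6Network((prefix_int, 48))
print(prefix)  # e.g. fd41:3165:4215::/48, as in the address above
```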

replies(2): >>41206065 #>>41206437 #
nox101 ◴[] No.41206437[source]
What about things like cookies, storage, caching, etc.. If my job has `https://testing.internal` and some company I visit also has `https://testing.internal` ...
replies(7): >>41206514 #>>41206772 #>>41206925 #>>41207033 #>>41207498 #>>41208027 #>>41209643 #
fulafel ◴[] No.41207498[source]
Yep, ambiguous addressing doesn't save you; it's the same problem as 10.x IPv4 networks. And one day you'll need to connect, merge, or otherwise coexist with someone else's use of the same namespace if it's a common one (as .internal and 10.x both are)...
replies(1): >>41208740 #
kevincox ◴[] No.41208740[source]
IPv6 solves this: you are strongly recommended to use a random component (the 40-bit Global ID) at the top of the internal reserved space, so the chance of a collision is quite low.
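
Quite low indeed. As a back-of-the-envelope sketch (a birthday-bound approximation, not a figure from the RFC):

```python
# Approximate probability that any two of k independently chosen
# 40-bit ULA Global IDs collide: k*(k-1)/2 pairs over 2^40 values.
def ula_collision_probability(k: int) -> float:
    return k * (k - 1) / 2 / 2**40

for k in (2, 100, 10_000):
    print(f"{k:>6} networks: {ula_collision_probability(k):.2e}")
# 2 -> 9.09e-13, 100 -> 4.50e-09, 10000 -> 4.55e-05
```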
replies(2): >>41209283 #>>41210781 #
pas ◴[] No.41209283[source]
There's some list of ULA ranges allocated to organizations, no?

edit: ah, unfortunately it's not really a standard, just a grassroots registry: https://ungleich.ch/u/projects/ipv6ula/