
563 points by joncfoo | 1 comment
jcrites No.41205444
Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.

You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.
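To make the "publish privately, retain the public option" pattern concrete: one common mechanism is split-horizon DNS with BIND views. This is only a sketch; the network ranges, zone file names, and the `foo` record are made up for illustration, and the tool choice (BIND) is an assumption, not something the comment specifies.

```text
// named.conf sketch (hypothetical networks and file names)
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };
    zone "example.com" {
        type master;
        // internal zone file includes records like foo.example.com
        file "zones/example.com.internal";
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        // public zone file simply omits the internal records
        file "zones/example.com.public";
    };
};
```

Making a private name public later is then just a matter of copying its record into the external zone file, with no renaming and no client-side changes.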

When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also used regular public TLS certificates by default, for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.

replies(13): >>41205463 #>>41205469 #>>41205498 #>>41205661 #>>41205688 #>>41205794 #>>41205855 #>>41206117 #>>41206438 #>>41206450 #>>41208973 #>>41209122 #>>41209942 #
leeter No.41205469
I can't speak for others, but HSTS is a major reason. HSTS preloading requires the includeSubDomains directive, so preloading your public domain commits every subdomain, including internal-only ones, to serving valid HTTPS. Not everybody wants to deal with setting up certs for every single application on a network, but they do want HSTS preload externally. I get why having everything hang off a .com works for AWS, but for a lot of small businesses it's just more than they want to deal with.
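For reference, the hstspreload.org submission requirements mean the apex domain has to serve a header of roughly this shape, and the mandatory includeSubDomains is what drags internal subdomains along:

```text
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

Once that header is preloaded into browsers, every host under the domain, reachable from the Internet or not, must present a certificate the client trusts, which is exactly the per-application cert burden described above.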

Another reason is information leakage. Having DNS records leak could actually provide potential information on things you'd rather not have public. Devs can be remarkably insensitive to the fact they are leaking information through things like domains.

replies(1): >>41205500 #
jcrites No.41205500
> Having DNS records leak could actually provide potential information on things you'd rather not have public.

This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.

For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.

If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.
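One lightweight way to serve such a private-only record is local data on the internal resolver, so the public `example.com` nameservers never see it. A minimal sketch, assuming an Unbound resolver on the private network and a hypothetical address:

```text
# unbound.conf sketch (hypothetical name and address)
server:
    # Answer foo.example.com only for clients of this internal resolver;
    # everything else under example.com still resolves normally upstream.
    local-zone: "foo.example.com." static
    local-data: "foo.example.com. 300 IN A 10.1.2.3"
```

Going public later means adding the record to the real zone and deleting these two lines; the name itself never changes.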

replies(3): >>41205512 #>>41206255 #>>41207896 #
leeter No.41205512
I'm not disagreeing at all. But Hanlon's Razor applies:

> Never attribute to malice what can better be explained by incompetence

You can't leak information if you never give access to that zone in any way. More than once in my time I've run into well-meaning developers who exposed more than they realized. A .internal name inherently documents that something shouldn't be public, whereas foo.example.com does not.