    563 points joncfoo | 50 comments
    1. jcrites ◴[] No.41205444[source]
    Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

    It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.

    You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

    Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.

    When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also employed regular public TLS certificates by default, for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, then it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and was comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.

    replies(13): >>41205463 #>>41205469 #>>41205498 #>>41205661 #>>41205688 #>>41205794 #>>41205855 #>>41206117 #>>41206438 #>>41206450 #>>41208973 #>>41209122 #>>41209942 #
    2. colejohnson66 ◴[] No.41205463[source]
    Why? Remember the .dev debacle?
    3. leeter ◴[] No.41205469[source]
    I can't speak for others, but HSTS is a major reason. Not everybody wants to deal with setting up certs for every single application on a network, but they want HSTS preload externally. I get why the solution of having everything under a .com works for AWS. But for a lot of small businesses it's just more than they want to deal with.

    Another reason is information leakage. Having DNS records leak could actually provide potential information on things you'd rather not have public. Devs can be remarkably insensitive to the fact they are leaking information through things like domains.

    replies(1): >>41205500 #
    4. zzo38computer ◴[] No.41205498[source]
    Sometimes it may be reasonable to use subdomains of other domain names that you have registered, but sometimes that would not be appropriate, such as if you are not using the internet at all and therefore should not need to register a domain name, or for other reasons. If it is not necessary to use internet domain names, then you would likely want to avoid them (or, at least, I would).
    5. jcrites ◴[] No.41205500[source]
    > Having DNS records leak could actually provide potential information on things you'd rather not have public.

    This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.

    For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.

    If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.
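    As a sanity check, you can verify the split-horizon behavior from both sides. A minimal sketch using the dnspython package (the hostname and the internal resolver address are hypothetical):

        import dns.resolver  # pip install dnspython

        def resolves(name, nameserver):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [nameserver]
            try:
                r.resolve(name, "A")
                return True
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                    dns.resolver.NoNameservers):
                return False

        # The name should answer on the internal resolver but not publicly
        assert resolves("foo.example.com", "10.0.0.53")    # internal resolver
        assert not resolves("foo.example.com", "8.8.8.8")  # public resolver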

    replies(3): >>41205512 #>>41206255 #>>41207896 #
    6. leeter ◴[] No.41205512{3}[source]
    I'm not disagreeing at all. But Hanlon's Razor applies:

    > Never attribute to malice what can better be explained by incompetence

    You can't leak information if you never give access to that zone in any way. More than once I've run into well-meaning developers in my time. Having a .internal inherently documents that something shouldn't be public, whereas foo.example.com does not.

    7. quectophoton ◴[] No.41205661[source]
    > Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

    That assumes you are able to pay to rent a domain name and keep paying for it; that you are reasonably sure the company you're renting it from is not going to take it away from you because of a selectively enforced TOS; and that you are reasonably sure both you and your registrar are doing everything possible to avoid getting your account compromised (which would result in your domain being transferred to someone else and probably lost forever unless you can take legal action).

    So it might depend on your threat model.

    Also, a good example, and maybe the main reason for this specific name instead of other proposals, is that big corps are already using it (e.g. DNS search domains in AWS EC2 instances) and don't want someone else to register it.

    replies(1): >>41206496 #
    8. bawolff ◴[] No.41205688[source]
    I think there is a benefit in that it reduces the possibility of misconfiguration. You can't accidentally publish .internal. If you see a .internal name, there is never any possibility of confusion on that point.
    replies(4): >>41205812 #>>41205930 #>>41206864 #>>41206947 #
    9. pid-1 ◴[] No.41205794[source]
    > leading Amazon's strategy for cloud-native AWS usage internally

    I've been on the other end of the business scale for the past decade, mostly working for SMBs like hedge funds.

    That made me a huge private DNS hater. So much trouble for so little security gain.

    Still, the common wisdom seems to be to use private DNS for internal apps, AD and such, LAN hostnames, and the like.

    I've been using public DNS exclusively everywhere I've worked and I always feel like it's one of the best arch decisions I'm bringing to the table.

    replies(1): >>41219613 #
    10. samstave ◴[] No.41205812[source]
    This. And it allows for much easier/trustworthy automated validation of [pipeline] - such as ensuring that something doesn't leak, exfil, or egress inadvertently (even, perhaps, with exclusive/unique routing?).
    11. ghshephard ◴[] No.41205855[source]
    Number one reason that comes to mind is you prevent the possibility of information leakage. You can't screw up your split-dns configuration and end up leaking your internal IP space if everything is .internal.

    It's much the same reason why some very large IPv6 services deploy some protected IPv6 space in RFC 4193 fc00::/7 space. Of course you have firewalls. And of course you have all sorts of layers of IDS and air-gaps as appropriate. But, if by design you don't want to make this space reachable outside the enterprise - the extra steps are a belt-and-suspenders approach.

    So, even if I mess up my firewall rules and do leak a critical control point: FD41:3165:4215:0001:0013:50ff:fe12:3456 - you wouldn't be able to route to it anyways.

    Same thing with .internal - that will never be advertised externally.
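    The non-routability check is easy to automate, too. A quick illustration with Python's standard ipaddress module:

        import ipaddress

        # RFC 4193 unique local addresses: fc00::/7, never globally routed
        ula = ipaddress.ip_network("fc00::/7")
        addr = ipaddress.ip_address("fd41:3165:4215:1:13:50ff:fe12:3456")
        print(addr in ula)     # True
        print(addr.is_global)  # False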

    replies(2): >>41206065 #>>41206437 #
    12. mnahkies ◴[] No.41205930[source]
    Somewhat off topic, but I'm a big fan of fail safe setups.

    One of the (relatively few) things that frustrate me about GKE is the integration between GCP IAP and k8s Gateways - it's a separate resource from the HTTP route, and if you fail to apply it, or apply one with invalid configuration, then it fails open.

    I'd much prefer an interface where I could specify my intention next to the route and have it fail atomically and/or fail closed.

    13. ◴[] No.41206065[source]
    14. TheRealPomax ◴[] No.41206117[source]
    Pretty much "anything that has to use a real network address, resolved via DNS" rather than using the hosts file based loopback device, or the broadcast IP.
    15. nine_k ◴[] No.41206255{3}[source]
    The wisdom goes: "Make invalid states unrepresentable".

    In this case, foo.internal cannot represent a publicly accessible domain, much like 10.x.x.x cannot represent a publicly routable IP address.

    No matter how badly you misconfigure things, you are still protected from exposure. Sometimes it's really valuable.
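    The same property is machine-checkable, e.g. with Python's standard ipaddress module:

        import ipaddress

        # RFC 1918 space can never be a publicly routable address
        print(ipaddress.ip_address("10.20.30.40").is_private)  # True
        print(ipaddress.ip_address("10.20.30.40").is_global)   # False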

    16. nox101 ◴[] No.41206437[source]
    What about things like cookies, storage, caching, etc.? If my job has `https://testing.internal` and some company I visit also has `https://testing.internal` ...
    replies(7): >>41206514 #>>41206772 #>>41206925 #>>41207033 #>>41207498 #>>41208027 #>>41209643 #
    17. slashdave ◴[] No.41206438[source]
    > it's helpful to have that flexibility in the future

    On the contrary, it is helpful to make this impossible. Otherwise you invite leaking private info by a configuration mistake.

    18. johannes1234321 ◴[] No.41206450[source]
    A big area is consumer devices like WiFi routers. They can advertise the .internal name, probably even get TLS certificates for those names, and things may work.

    See for instance the trouble with AVM's fritz.box domain, which was used by their routers by default; then .box was made a TLD and AVM was too late to register it.

    19. justin_oaks ◴[] No.41206496[source]
    If you control the DNS resolution in your company and use an internal certificate authority, technically you don't have to rent a domain name. You can control how it resolves and "hijack" whatever domain name you want. It won't be valid outside your organization/network, but if you're using it only for internal purposes then that doesn't matter.

    Of course, this is a bad idea, but it does allow you to avoid the "rent".

    replies(2): >>41206874 #>>41208830 #
    20. dudus ◴[] No.41206514{3}[source]
    Great question. I think they leak but this happens regardless.
    21. kelnos ◴[] No.41206772{3}[source]
    Presumably you don't trust the CA that signed the certificate on the server at the company you're visiting. As long as you heed the certificate error and don't visit the site, you're fine.
    replies(2): >>41207631 #>>41210535 #
    22. zrm ◴[] No.41206864[source]
    > You can't accidentally publish .internal.

    Well sure you can. You expose your internal DNS servers to the internet, or use the same DNS servers for both and they're on the internet. The root servers are not going to delegate a request for .internal to your nameservers, but anybody can make the request directly to your servers if they're publicly accessible.
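    To make it concrete: the root will never delegate .internal to you, but a direct query still works if the server is reachable. A sketch with the dnspython package (the nameserver IP is a placeholder from a documentation range):

        import dns.resolver  # pip install dnspython

        r = dns.resolver.Resolver(configure=False)
        r.nameservers = ["203.0.113.53"]  # hypothetical exposed internal nameserver
        answer = r.resolve("foo.internal", "A")  # bypasses root delegation entirely
        print([rr.address for rr in answer])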

    23. zrm ◴[] No.41206874{3}[source]
    One of the reasons that it's a bad idea is that whoever does have the domain can get a certificate for any name under it from any public CA, which your devices would generally still trust in addition to your private CA.
    24. thebeardisred ◴[] No.41206925{3}[source]
    May god have mercy on the person using this in their mobile applications.
    25. thebeardisred ◴[] No.41206947[source]
    Additionally, how do you define "publish"?

    When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by their local resolver... correct or malicious.
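    For reference, that anti-pattern is a one-liner. A sketch in Python with the requests library (the URL is the hypothetical one above):

        import requests  # pip install requests

        # verify=False disables certificate validation entirely, so the app
        # trusts whichever host the local resolver returns for test.internal
        resp = requests.get("https://test.internal/api", verify=False)

    Pinning a fingerprint, or shipping an internal CA bundle and passing verify="/path/to/ca.pem", avoids this.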

    replies(1): >>41207909 #
    26. mrkstu ◴[] No.41207033{3}[source]
    I'm assuming you wouldn't import their CA as authoritative just to use their wifi...
    27. fulafel ◴[] No.41207498{3}[source]
    Yep, ambiguous addressing doesn't save you, same as with 10.x IPv4 networks. And one day you'll need to connect, merge, or otherwise coexist with disparate uses if it's a common one (like .internal and 10.x)...
    replies(1): >>41208740 #
    28. hsbauauvhabzb ◴[] No.41207631{4}[source]
    So we’re back to trusting the user?
    replies(1): >>41208583 #
    29. luma ◴[] No.41207896{3}[source]
    It's not DNS that's leaking those names, it's certificate transparency. If you are using publicly issued certs on foo.example.com, that's publicly discoverable via the CT logs. As others have mentioned here, it leaves you with a dilemma: either you have good working certs internally but also expose all of your internal hostnames, or you keep your hostnames private but have cert problems (either dealing with trusting a private CA or dealing with not having certs).
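    Those names aren't just theoretically discoverable; anyone can enumerate them. A sketch in Python against crt.sh's JSON interface (assuming that endpoint is still offered in this form):

        import json
        import urllib.request

        # Every publicly issued cert lands in the CT logs, searchable by anyone
        url = "https://crt.sh/?q=%25.example.com&output=json"  # %25 = encoded '%'
        with urllib.request.urlopen(url) as resp:
            certs = json.load(resp)
        print(sorted({c["name_value"] for c in certs}))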
    30. bawolff ◴[] No.41207909{3}[source]
    That seems kind of beside the point. If you turn off cert validation, it doesn't matter if the domain name is internal or external.
    31. viraptor ◴[] No.41208027{3}[source]
    Ideally, you use "testing.company-name.internal" for that kind of thing. (Especially if you think you'll ever end up interacting at that level.)
    32. 0l ◴[] No.41208583{5}[source]
    Use HSTS; browsers are specifically designed not to let users bypass certificate errors on HSTS hosts.
    replies(1): >>41208816 #
    33. kevincox ◴[] No.41208740{4}[source]
    IPv6 solves this, as you are strongly recommended to use a random component at the top of the internal reserved space, so the chance of a collision is quite low.
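    That recommendation is RFC 4193's: a ULA /48 is the fd00::/8 prefix plus a random 40-bit Global ID. A quick sketch of generating one in Python:

        import ipaddress
        import secrets

        # RFC 4193: 0xfd byte followed by a random 40-bit Global ID -> a /48
        global_id = secrets.randbits(40)
        prefix = ipaddress.IPv6Network(((0xfd << 120) | (global_id << 80), 48))
        print(prefix)  # e.g. fd9c:58ed:7d73::/48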
    replies(2): >>41209283 #>>41210781 #
    34. hsbauauvhabzb ◴[] No.41208816{6}[source]
    HSTS forces encryption; it has no impact on certificate invalidity, at least to my knowledge.
    replies(1): >>41209055 #
    35. OJFord ◴[] No.41208830{3}[source]
    But then you still need a private CA (a public one is going to resolve the domain correctly and find you don't control it), so you may as well have used .internal?
    36. layer8 ◴[] No.41208973[source]
    Read section 2.1 of the linked https://itp.cdn.icann.org/en/files/security-and-stability-ad... for some motivations.
    37. 0l ◴[] No.41209055{7}[source]
    Visit your .internal site -> the website uses a TLS cert signed by a root CA that is preloaded on your device -> this succeeds and the HSTS flag is set.

    Visit another .internal site -> it uses a TLS cert NOT signed by a root CA that is preloaded on your device -> certificate error, which cannot be bypassed due to HSTS.

    38. briHass ◴[] No.41209122[source]
    I just got burned on my home network by running my own CA (.home) and DNS for connected devices. The Android warning when installing a self-signed CA ('someone may be monitoring this network') is fine for my case, if annoying, but my current blocker is using webhooks from a security camera to Home Assistant.

    HA allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint with a self-signed/untrusted root cert.

    Sure, I could probably run an HTTP->HTTPS proxy that would ignore my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.

    Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.
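    If you go the one-off route, here's a minimal sketch with Python's cryptography package (the hostname is a placeholder):

        import datetime
        from cryptography import x509  # pip install cryptography
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import ec

        key = ec.generate_private_key(ec.SECP256R1())
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "camera.home")])
        now = datetime.datetime.now(datetime.timezone.utc)
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)  # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))  # ~10 years
            .add_extension(x509.SubjectAlternativeName(
                [x509.DNSName("camera.home")]), critical=False)
            .sign(key, hashes.SHA256())
        )
        with open("camera.pem", "wb") as f:
            f.write(key.private_bytes(serialization.Encoding.PEM,
                                      serialization.PrivateFormat.PKCS8,
                                      serialization.NoEncryption()))
            f.write(cert.public_bytes(serialization.Encoding.PEM))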

    replies(2): >>41209261 #>>41212729 #
    39. yjftsjthsd-h ◴[] No.41209261[source]
    > Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.

    Depending on your threat model, I'm not sure that's true. Encryption without verification prevents a passive observer from seeing the content of a connection, but does nothing to prevent an active MITM from decrypting it.

    replies(1): >>41210754 #
    40. pas ◴[] No.41209283{5}[source]
    there's some list of ULA ranges allocated to organizations, no?

    edit: ah, unfortunately it's not really standard, just a grassroots effort https://ungleich.ch/u/projects/ipv6ula/

    41. mrighele ◴[] No.41209643{3}[source]
    I would expect ACME to use https://testing.acme.internal, and not just https://testing.internal; that would remove most of the incidental clashes (not the malicious ones, of course).
    42. macromaniac ◴[] No.41209942[source]
    >Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

    These local TLDs should IMO be used on all home routers, it fixes a lot of problems.

    If you've ever plugged in e.g. a Raspberry Pi and been unable to "ping pi", it's because there is no DNS mapping for it. There are kludges that Windows, Linux, and Macs use to get around this fact, but they only work in their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router page or hardcode the IP to reach a device, which is just awful.

    Home routers can simply map pi to e.g. pi.home when doing DHCP. Then you can "ping pi" on all systems. It fixes everything - for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

    Also, p. sure I grew up playing wc3 w you?

    replies(1): >>41210333 #
    43. e28eta ◴[] No.41210333[source]
    > Home routers can simply map pi to e.g. pi.home when doing DHCP. Then you can "ping pi" on all systems. It fixes everything - for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

    dnsmasq has this feature. I think it’s commonly available in alternative router firmware.

    On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network’s DHCP + DNS server, it automatically adds dns entries for dhcp leases that it hands out.

    There are undoubtedly other options, but these are the two I've worked with.
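    For reference, the relevant dnsmasq configuration is only a few lines (a sketch; option names per the dnsmasq manpage, network details are placeholders):

        # /etc/dnsmasq.conf
        domain=home              # qualify DHCP client hostnames as *.home
        expand-hosts             # also qualify bare names from /etc/hosts
        local=/home/             # never forward .home queries upstream
        dhcp-range=192.168.1.100,192.168.1.200,12h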

    replies(1): >>41210394 #
    44. macromaniac ◴[] No.41210394{3}[source]
    Wasn't aware of dnsmasq/Pi-hole; I have BIND9 configured to do it on my network, and yeah, it's much nicer. I saw people get bitten by this all the time in college, and even now I join projects with weird hosts-file usage. Instead of having 3 different systems for Apple/MS/Linux name resolution that don't interop, the problem is better fixed higher up.
    45. thayne ◴[] No.41210535{4}[source]
    Now suppose you are a contractor who did some work for company A, then went to do some work for company B, and still have some cookies set from A's internal site.
    46. briHass ◴[] No.41210754{3}[source]
    I meant more: centralized verification. I'm fine with deploying a self-CA cert to verify in my personal world, but browsers and devices have become increasingly hostile to certs that aren't signed by the standard players.
    47. fulafel ◴[] No.41210781{5}[source]
    There's usually little reason to use reserved space vs internet addresses, unless you just want to relive the pain of NAT+IPv4. The exception is if you lack PI space and can't cope with potential renumbering.
    replies(1): >>41237406 #
    48. xp84 ◴[] No.41212729[source]
    Something you may find helpful: I use a `cloudflared` tunnel to add an ssl endpoint for use outside my home, without opening any holes in the firewall. This way HA doesn’t care about it (it still works on 10.x.y.z) and your internal webhooks can still be plain http if you want.
    49. JackSlateur ◴[] No.41219613[source]
    Exactly

    And the larger the scale, the more benefits you get from avoiding internal-specific resolution.

    50. ghshephard ◴[] No.41237406{6}[source]
    I've deployed/managed over 25 million production elements in RFC 4193 space. These elements (mostly mesh networking nodes for utilities), by definition, should never route to the internet. (According to NERC CIP they shouldn't even route beyond the substation for distribution elements.)

    Non-routability was a design feature.

    I've been out of Enterprise IT for 15 years - but if I was going to do an IPv6 deployment today - I would strongly consider NAT6 prefix replacement - it offers 90% of the benefits of native IPv6 addresses, doesn't conflate "security" and "flexibility" (prefix replacement is just a straight 1:1 passthrough - globally routable) - and who wants to go update all their router configs and DNS every time they change their upstream? Ugh.