There are certain circumstances where private network operators may wish to use their own domain naming scheme that is not intended to be used or accessible by the global domain name system (DNS), such as within closed corporate or home networks.
The "internal" top-level domain is reserved to provide this purpose in the DNS. Such domains will not resolve in the global DNS, but can be configured within closed networks as the network operator sees fit.
This reservation is intended to serve a purpose similar to that of the private-use IP address ranges that are set aside (e.g. [RFC1918]).
If the 'private' TLD you're using suddenly becomes real, you can end up shipping off data (possibly unencrypted) and connection requests to computers you do not control.
I'd like to think people learned from .dev and such. I doubt any scammer will be able to use it.
EDIT: just saw your comment about Google here
Proposed top-level domain string for private use: ".internal"
So you end up with the IETF standardising .local, because Apple was already using it, but ICANN never did much with that standardisation.
I doubt ICANN will actually touch .local, but they could. One could imagine a scheme where .local is globally registered to prevent Windows clients (who don't always support mDNS) from resolving .local domains wrong.
As a result, even if you bought steves-laptop.dev for yourself, you still wouldn't be able to run an HTTP dev environment on it; you'd need to set up HTTPS. I think that was probably a good move by Google, because otherwise it could've taken weeks for most devs to notice.
It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.
You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.
Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.
When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also employed regular public TLS certificates too (by default), for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, then it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and was comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.
Another reason is information leakage. Leaked DNS records can reveal things you'd rather not have public. Devs can be remarkably insensitive to the fact that they are leaking information through things like domain names.
(I also think that a .pseudo TLD should be made up which also cannot be assigned on the internet, but is not for assignment on local networks either. Usually, in the cases where it is necessary to be used, either the operating system or an application program will handle them, although the system administrator can assign them manually on a local system if necessary.)
There's already .example, .invalid, .test and .localhost, which are reserved. What use case do you have that's not covered by one of them?
This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.
For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.
If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.
> Never attribute to malice what can better be explained by incompetence
You can't leak information if you never give access to that zone in any way. More than once in my time I've run into well-meaning developers who exposed things they shouldn't have. Having a .internal inherently documents that something shouldn't be public, whereas foo.example.com does not.
This is that but for domain names. When you need to use a domain name to refer to a host, the safest thing to do is to either use a domain name you own^Ware renting, or to use a domain name nobody will be able to "own" in the foreseeable future.
For an IP address, you might usually choose from 192.168.0.0/16 or similar reserved ranges. Your "192.168.1.1" is not the same as my "192.168.1.1", we both can use it and neither of us can "officially" own it.
For a domain name, you can use ".internal" or other similar (if uglier) reserved TLDs. Your "nas.internal" is not the same as my "nas.internal", we both can use it and neither of us can "officially" own it.
Since you're asking this question you might also be wondering how people can even use custom domains like that, and the answer is by self-hosting a DNS server and pointing your machines at it instead of a public one (so you'd use your self-hosted server instead of, say, "8.8.8.8"). Then you configure your DNS server so that whenever someone requests "google.com" it does "the normal thing", but when someone requests "nas.internal" it returns whatever IP address you want.
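For illustration, here's roughly what that split looks like from a client's point of view; a small sketch using Python's dnspython, where the resolver and NAS addresses are made-up placeholders:

    # Sketch: how a client sees a self-hosted resolver that also answers for .internal.
    # Assumes dnspython is installed; the addresses below are placeholders.
    import dns.resolver

    internal = dns.resolver.Resolver(configure=False)
    internal.nameservers = ["192.168.1.53"]  # your self-hosted DNS server

    # Public names get forwarded upstream and resolve normally...
    print(internal.resolve("google.com", "A")[0])

    # ...while names under .internal get whatever answer you configured.
    print(internal.resolve("nas.internal", "A")[0])  # e.g. 192.168.1.10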
> This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." is link-local, and names within this domain are meaningful only on the link where they originate.
https://datatracker.ietf.org/doc/html/rfc6762
Applications can/will break if you attempt to use .local outside of mDNS (such as systemd-resolved). Don't get upset when this happens.
Interesting fact: RFC 6762 predates Kubernetes (one of the biggest .local violators); they should really change the default domain...
Though that list apparently includes all reserved names, not only those reserved for non-public use.
[1]: https://newgtlds.icann.org/en/applicants/agb
[2]: https://newgtlds.icann.org/sites/default/files/guidebook-ful...
That assumes you are able to pay to rent a domain name and keep paying for it, that you are reasonably sure the company you're renting it from is not going to take it away from you because of a selectively enforced TOS, and that you are reasonably sure both you and your registrar are doing everything possible to avoid getting your account compromised (which would result in your domain being transferred to someone else and probably lost forever unless you can take legal action).
So it might depend on your threat model.
Also, a good example, and maybe the main reason for picking this specific name over other proposals, is that big corps are already using it (e.g. as DNS search domains in AWS EC2 instances) and don't want someone else to register it.
Using a publicly valid domain offers a number of benefits, like being able to use a free public CA like Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.
Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS certificate validation, and use your own internal DNS server for all private use.
.invalid means that a domain name is required but a valid name should not be used; for example, a false email address in a "From:" header in Usenet, to indicate that you cannot send email to the author in this way.
.test is for internal testing use, of DNS and other stuff.
.localhost is for identifying the local computer.
.internal is (presumably) for internal use in your own computer and local network, when you want to assign domain names that are for internal use only.
.pseudo is for other cases that do not fit any of the above, when a pseudo-TLD, which is not used as a usual domain name, is required for a specialized use by an application, operating system, etc. You can then assign subdomains of .pseudo for specific kinds of specialized uses (these assignments will be specific to the application or otherwise). Some programs might treat .pseudo (or some of its subdomains) as a special case, or might be able to be configured to do so.
(One example of .pseudo might be if you want to require a program to use only version 4 internet or only version 6 internet, and where this must be specified in the domain name for some reason; the system or a proxy server can then handle it as a special case. Other examples might include error simulations, non-TCP/IP networks, specialized types of logging or access restrictions, etc. Some of these things do not always need to be specified as a domain name; but in some cases they do, and then it is helpful to be able to do so.)
I've been on the other end of the business scale for the past decade, mostly working for SMBs like hedge funds.
That made me a huge private DNS hater. So much trouble for so little security gain.
Still, the common wisdom seems to be to use private DNS for internal apps, AD and such, LAN hostnames and the like.
I've been using public DNS exclusively everywhere I've worked and I always feel like it's one of the best arch decisions I'm bringing to the table.
It is. See §2.2.1.2.1, "Reserved Names", of ICANN's gTLD Applicant Guidebook:
* https://newgtlds.icann.org/sites/default/files/guidebook-ful...
It's much the same reason why some very large IPv6 services deploy some protected IPv6 space in RFC 4193 fc00::/7 space. Of course you have firewalls. And of course you have all sorts of layers of IDS and air-gaps as appropriate. But, if by design you don't want to make this space reachable outside the enterprise - the extra steps are a belt and suspenders approach.
So, even if I mess up my firewall rules and do leak a critical control point: FD41:3165:4215:0001:0013:50ff:fe12:3456 - you wouldn't be able to route to it anyways.
Same thing with .internal - that will never be advertised externally.
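For what it's worth, you can sanity-check that non-routability with nothing but the Python stdlib, using the fd41 address from the comment above:

    # Sketch: confirming an RFC 4193 unique local address sits inside fc00::/7
    # and is not globally routable, even if a firewall rule slips.
    import ipaddress

    addr = ipaddress.ip_address("fd41:3165:4215:0001:0013:50ff:fe12:3456")
    print(addr in ipaddress.ip_network("fc00::/7"))  # True: unique local address
    print(addr.is_global)                            # False: not internet-routable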
One of the (relatively few) things that frustrate me about GKE is the integration between GCP IAP and k8s gateways - it's a separate resource from the HTTP route, and if you fail to apply it, or apply one with an invalid configuration, then it fails open.
I'd much prefer an interface where I could specify my intention next to the route and have it fail atomically and/or fail closed
Made worse by the fact phone OSes have made it very difficult to install CAs.
In this case, foo.internal cannot represent a publicly accessible domain, much like 10.x.x.x cannot represent a publicly routable IP address.
No matter how badly you misconfigure things, you are still protected from exposure. Sometimes it's really valuable.
I think ccTLDs are restricted to two-letter codes even if the country of Internia were to be founded. The only exceptions I can think of are the localized names (.台湾 and .中国 for countries like Taiwan and China), which are technically encoded as .xn--kprw13d and .xn--fiqs8s. Pakistan's پاکستان. is the first ccTLD I've seen that's more than two visual characters when rendered (with the added bonus of being right-to-left to make URL rendering a tad more complex), so for Internia to claim .intern as a ccTLD, they'd probably need a special script.
The automated setup probably isn't very secure, though. Anyone can register any .local name on the network, so spoofing hostnames becomes very easy once you get access to any device on the network. Send a fax with a bad JPEG and suddenly your office printer becomes xvilo.local, and the ACME server has no way to determine that it's not.
That means you probably need to deal with manual certificate generation, manually renewing your certificates every two years (and, if you're like me, forgetting to before they expire).
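If you do end up renewing by hand, even a tiny check script takes the "forgetting" part out of it; a minimal sketch using only the Python stdlib, where the hostname is a placeholder for whatever internal service you run (and its CA is assumed to be trusted locally):

    # Sketch: warn when an internal service's certificate is close to expiry.
    import socket, ssl, time

    HOST, PORT = "nas.internal", 443  # placeholder internal host

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            not_after = tls.getpeercert()["notAfter"]

    days_left = (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400
    print(f"{HOST}: certificate expires in {days_left:.0f} days")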
See for instance the trouble with AVM's fritz.box domain, which their routers used by default; then .box was made a TLD and AVM was too late to register it.
Of course, this is a bad idea, but it does allow you to avoid the "rent".
https://datatracker.ietf.org/doc/html/rfc6761#section-3
And ICANN is bound by the IETF/ICANN Memorandum of Understanding Concerning the Technical Work of the IANA, which prevents it from usurping that jurisdiction:
Let's start with the obvious: wifi. If you're visiting a company and ask the receptionist for the wifi password you'll likely get it.
Next are ethernet ports. Sitting waiting in a meeting room, plug your laptop into the ethernet port and you're in.
And of course it's not just hardware, any software running on any machine makes the LAN just as vulnerable.
Sure, you can design a LAN to be secure. You can make sure there's no way to get onto it. But the -developer- and -network maintainer- are 2 different guys, or more likely different departments. As a developer, are you convinced the LAN will be as secure in 10 years as it is today? 5 years? 1 year after that new intern arrives and takes over maintenance 6 weeks in?
What starts out as "minimal private VPC" grows, changes, is fluid. Treating it as secure today is one thing. Trusting it to remain secure 10 years from now is another.
In 99.9% of cases your LAN traffic should be secured. This is the message -developers- need to hear. Don't rely on some other department to secure your system. Do it yourself.
For example, you won't be able to run internal videocalls (no access to webcams!), or a web page able to scan QR codes.
Here's the full list:
* https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
A true hassle for internal testing between hosts, to be honest. I just cannot run an in-development video app on my PC and connect from a phone or laptop to do some testing, without first worrying about certs at a point in development where they are superfluous and a waste of time.
No one really bothered until it was revealed that organisations like the NSA were exfiltrating unencrypted internal traffic from companies like Google with programs like PRISM.
[1] https://crt.sh/
This kind of theater actively harms your organization's security, not helps it. Do people not do risk analysis anymore?
Well sure you can. You expose your internal DNS servers to the internet, or use the same DNS servers for both and they're on the internet. The root servers are not going to delegate a request for .internal to your nameservers, but anybody can make the request directly to your servers if they're publicly accessible.
Picking some reasonable best practices like using https everywhere for the sake of maintaining a good security posture doesn't mean that you're "not doing risk analysis".
.dev is great, even if Google's motives were less than altruistic; and *.development should be among the reserved, internal-use-only names.
The abbreviated vs. verbose TLD naming would at least be consistent.
There aren't any folks who appreciate consistency more than the RFC goons.
When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by their local resolver... correct or malicious.
The advantage is that I can run real letsencrypt certs for services in my house, which is nicer than having to agree to self signed cert warnings or otherwise having my browser nag me about plaintext passwords/etc.
If anyone cares about the details, I run an nginx instance on port 80 through an ipv6 address which I allow through my network firewall (no NAT, so I don’t have to burn my only incoming ipv4 port 80 for this, although I block that anyway) and let certbot manage its configs. Wildcard external dns pointing AAAA records to said v6 address. The certbot vhost just renders an empty 404 for all requests except for the ACME challenges, so there’s nothing being “leaked” except generic 404 headers. I get certs dumped to my nginx config dir, then from there I use them for an internal-only reverse proxy listening on my local subnet, for all my internal stuff. The only risk is if I mess up the config and expose the RP to the internet, but so far I haven’t managed to screw it up.
Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
So there are a bunch of cases where we only want the second (simpler, lower-risk) case, but we have to incur all the annoyance and risk and locked-down-ness of the first use-case.
Presumably, ICANN, like any other committee, is not interested in self-castration. Which is what would happen if they challenged Apple.
ICANN could do anything with enough rule changes. And then everyone will ignore them.
I guess my toaster is going to hack my printer someday, but at least it won’t get into my properly-secured laptop that makes no assumptions the local network is “safe”.
This way you can ensure you as the developer have full control over your applications' network communication; by requiring client certificates issued by a CA you control, you can assert there is no MITM even if a sysadmin, user, or malware tries to install a proxy root CA on the system.
Finally, you can add binary obfuscation / anticheat mechanisms used commonly in video games to ensure that even if someone is familiar with the application in question they cannot alter the certificates your application will accept.
Lots of e.g. mobile banking apps, etc. do this for maximal security guarantees.
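At its core, fingerprint pinning is just "hash the server's certificate and compare it against a value baked into the client"; a bare-bones sketch in Python, where the hostname and pinned digest are placeholders:

    # Sketch: manual SHA-256 certificate pinning. Replace PINNED_SHA256 with the
    # digest of the certificate you actually control.
    import hashlib, socket, ssl

    HOST, PORT = "api.example.internal", 443
    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    # CA validation is skipped only because the exact certificate is checked below;
    # never do one without the other.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)

    if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
        raise ssl.SSLError("certificate does not match the pinned fingerprint")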
Running a certificate authority is one of those "a minute to learn, a lifetime to master" scenarios.
You are often trading a “people can sniff my network” scenario for a “compromise the CA someone set up 10 years ago that we don’t touch” scenario.
Don't believe the hype. Remember the smiley from "SSL added and removed here"
https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...
The anticheat/obfuscation mechanisms used by lots of mobile apps are also trivial to bypass using instrumentation - i.e. Frida codeshare. I know you aren't implying that people should use client-side controls to protect an app running on a device and an environment that they control, but in my experience even some technical folk will try to do this.
I have my domain's DNS on Cloudflare, so I can use DNS verification with Let's Encrypt to get myself a proper certificate that works on all of my devices. Then I just have Cloudflare DNS set up with a bunch of CNAME records to .internal addresses.
For example, if I needed to set up a local mail server, I'd set mail.cottagecheese.download to have a CNAME record pointing to localserver.internal and then have my router resolve localserver.internal to my actual home server's IP address. So if I punch in https://mail.cottagecheese.download in my browser, the browser resolves that to localserver.internal and then my router resolves that to 10.x.x.x/32, sending me to my internal home server that greets me with a proper Let's Encrypt certificate without any need to expose my internal IP addresses.
Windows doesn't seem to like my CNAME-based setup though. Every time I try to use them, it's a diceroll if it actually works.
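To make the indirection concrete, here's a rough dnspython sketch of the two lookups involved (the router IP and the final 10.x address are made up):

    # Sketch: public DNS only holds a CNAME pointing at a .internal name;
    # the home router's resolver turns that name into a private address.
    import dns.resolver

    public = dns.resolver.Resolver(configure=False)
    public.nameservers = ["1.1.1.1"]
    target = public.resolve("mail.cottagecheese.download", "CNAME")[0].target
    print(target)  # localserver.internal.

    router = dns.resolver.Resolver(configure=False)
    router.nameservers = ["192.168.1.1"]  # the home router's DNS
    print(router.resolve(str(target), "A")[0])  # e.g. 10.0.0.5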
So it's not a rock-hard guarantee that traffic to localhost never leaves your system. It would be unconventional and uncommon for it to, though, except for the likes of us who like to ssh-tunnel all kinds of things on our loopback interfaces :-)
The sweet spot of security vs convenience, in the case of browsers and awarding "secure origin status" for .internal, could perhaps be on a dynamic case by case basis at connect time:
- check if it's using a self-signed cert
- offer TOFU procedure if so
- if not, verify as usual
Maaaaybe check whether the connection is to an RFC1918 private range address as well. Maybe. It would break proxying and tunneling. But perhaps that'd be a good thing.
This would just be for browsers, for the single purpose of enabling things like serviceworkers and other "secure origin"-only features, on this new .internal domain.
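Roughly, TOFU just means "remember the certificate fingerprint on first connect, and complain if it ever changes"; a toy sketch, where the pin store is just a local JSON file and the hostname is a placeholder:

    # Toy sketch of trust-on-first-use for self-signed certs.
    import hashlib, json, os, socket, ssl

    PIN_FILE = os.path.expanduser("~/.tofu_pins.json")

    def fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # the pin is the trust anchor, not a CA
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()

    def check(host):
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp  # first use: trust and remember
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
        elif pins[host] != fp:
            raise ssl.SSLError(f"{host}: certificate changed since first use")

    check("router.internal")  # placeholder .internal host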
All our apps had to auto-disable pinning less than a year after the build date, because if the user hadn't updated the app by the time we had to renew all our certs... they'd be locked out.
Also dealt with the fallout from a lovely little internet-of-things device that baked cert pinning into the firmware, but after a year on store shelves the clock battery ran out, so they booted up in 1970 and decided the pinned certs wouldn't become valid for ~50 years :D
The SSAC's recommendation is to only use .INTERNAL if using a publicly registered domain name is not an option. See Section 4.2.
https://itp.cdn.icann.org/en/files/security-and-stability-ad...
You will also increase the risk that your already understaffed ops-team messes up and creates even worse exposure or outages, while they are trying to figure out what ssl-keygen does.
Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...
That's because the purpose of certificate pinning is to protect software from the user. Letting you supply your own certificates would defeat the purpose of having them.
If they're so worried about users getting duped to activate the insecure mode, they could at least make it a compiler option and provide an entirely separate download in a separate place.
Also, don't get me started on HSTS and HSTS preloading making it impossible to inspect your own traffic with entities like Google. It's shameful that Firefox is even more strict about this idiocy than Chrome.
As mentioned, some browser features are HTTPS-only. You get security warnings on HTTP. Many tools now default to HTTPS - like newer SQL Server drivers. The dev env must resemble prod very closely, so having HTTP in dev and HTTPS in prod is asking for pain and trouble. It forces you to have some kind of expiration registry/monitoring and renewal procedures. And you go through the dev env first and gain confidence before prod.
Then there are systems where client certificate is mandatory and you want to familiarize yourself already in dev/test env.
Some systems even need additional configuration to allow OAuth via HTTP, and that makes me feel dirty, thus I'd rather not do it. Why do it if prod won't have HTTP? And if you didn't know such configuration must be done, you'd be stuck troubleshooting that system and figuring out why it doesn't work with your simple setup.
Yeah, we have an internal CA set up, so issuing certs is pretty easy and mostly automated, and once you go all in on HTTPS, you learn why/how things work and why they may not, and get more experience troubleshooting HTTPS stuff. You have no choice actually - the world has moved to TLS-secured protocols and there is no way around getting yourself familiar with security certificates.
https://www.eff.org/pages/upstream-prism
These kinds of risks are obvious, real, and extensively documented. I can't imagine why anyone serious about improving security for everyone would want to downplay and ridicule them.
As a contractor, I'll create a per-client VM for each contract and install any client network CAs only within that VM.
I guess you can use a pattern like {human name}.{random}.internal, but then you lose memorability.
Software that isn't like that is in a minority, and most of it is only used to build software that is like that.
Seriously, your statement is demonstrably wrong. That's exactly the sort of traffic the NSA actively seeks to exploit.
It's extremely user-hostile since Android has a separate user store for self-signed CAs, but apps are free to ignore the user store and only accept the system store. I think by default only like, Chrome accepts the user store?
Visit other .internal site -> uses TLS cert NOT signed by root CA that is preloaded on your device -> certificate error, and cannot be bypassed due to HSTS.
My preferred reading is .com for commonlymisinterpretedbypeoplewhodonotreadrfcsbutitdoesnotmatterintheslightest, which is a Welsh word meaning "oddly shaped sheep".
HA allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint with a self-signed/untrusted root cert.
Sure, I could probably run an HTTP->HTTPS proxy that would ignore my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.
Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.
and the recommendation is to simply do "*.internal.example.com" with LetsEncrypt (using DNS-01 validation), so every client gets the correct CA cert "for free"
...
obviously if you want mTLS, then this doesn't help much. (but still, it's true that using a public domain has many advantages, as does having an airgapped network)
Depending on your threat model, I'm not sure that's true. Encryption without verification prevents a passive observer from seeing the content of a connection, but does nothing to prevent an active MITM from decrypting it.
edit: ah, unfortunately it's not really standard, just a grassroots effort https://ungleich.ch/u/projects/ipv6ula/
However, the DNS challenge allows you to map an internal name to an IP address. The only real information that leaks is the subnet address of my LAN. And given the choice of that or unencrypted traffic I'll take that all day long.
Their original application for .dev was written to "ensure its reserved use for internal projects - since it is a common internal TLD for development" - then once granted a few years later they started selling domains with it.
** WITH HSTS PRELOADING ** ensuring that all those internal dev sites they were aware of would break.
.internal is just admitting there's only so many times we can repeat the same mistake before we start to look silly.
All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.
Everything at *.something.example.com, if it is supposed to be privately accessible on the internet, is resolved by a custom DNS server which does not respond to `ANY`-requests and logs every request. You'd need to know which subdomains exist.
something.example.com has an `NS`-record entry with the domain name which points to the IP of that custom DNS server (ns.example.com).
The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.
These local TLDs should IMO be used on all home routers; they fix a lot of problems.
If you've ever plugged in e.g. a Raspberry Pi and been unable to "ping pi", it's because there is no DNS mapping to it. There are kludges that Windows, Linux, and Macs use to get around this fact, but they only work in their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour, and you end up having to look in the router page or hardcode the IP to reach a device, which is just awful.
Home routers can simply assign pi a name like pi.home when doing DHCP. Then you can "ping pi" on all systems. It fixes everything - for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.
Also, p. sure I grew up playing wc3 w you?
Whose computer is this? I guess the machine I purchased doesn't belong to me, but instead belongs to the developer of the browser, who has absolutely no idea what I'm trying to do, what my background is and qualifications and what my needs are? It seems absurd to give that person the ultimate say over me on my system, especially if they're going to give me some BS about protecting me from myself for my own good or something like that. Yet, that is clearly the direction things are headed.
The browser will gladly reuse an http2 connection with a resolved IP address. If you happen to have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where the traffic will get messed up between services. To add to that - debugging that stuff becomes kind of wild, as it will keep reusing connections between browser windows (and maybe even different Chromium browsers)
I might be messing up technical details, as it's been a long time since I've debugged some grpc Kubernetes mess. All I wanted to say is, that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.
You have:
- employees at ISPs
- employees at the hosting company
- accidental network misconfigurations
- one of your own compromised machines now part of a ransomware group
- the port you thought was “just for internal” that a dev now opens for some quick testing from a dev box
Putting anything in open comms is one of the dumbest things you can do as an engineer. Do your job and clean that shit up.
It’s funny you mention risk analysis; plaintext traffic is one of the easiest things to compromise.
dnsmasq has this feature. I think it’s commonly available in alternative router firmware.
On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network’s DHCP + DNS server, it automatically adds dns entries for dhcp leases that it hands out.
There are undoubtably other options, but these are the two I’ve worked with.
You can't just add the CA to system trust stores on each device, because some applications, notably browsers and Java, use their own trust stores, which you have to add it to as well.
You also can't scope the CA to just .internal, which means in a BYOD environment, you have to require your employees to trust you not to sign certs for other domains.
And then there is running the CA itself, which is more difficult than using Let's Encrypt.
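Python is another example of the "own trust store" problem: requests ships its own CA bundle (certifi), so you typically end up handing it the internal CA explicitly. A sketch, with placeholder paths and URLs:

    # Sketch: pointing an application with its own trust store at an internal CA.
    import requests

    resp = requests.get(
        "https://wiki.corp.internal/",
        verify="/etc/ssl/internal-ca.pem",  # internal root CA, not the system store
    )
    print(resp.status_code)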
Trusted computing is similar, too. It's a huge win for the user in terms of security, as long as the user owns the master key and can upload their own signatures. If not, then it suddenly becomes a very powerful form of control.
The more fundamental issue is the distinction between "user" and "owner" of a computer - or its component, or a piece of software - as they're often not the same people. Security technologies assert and enforce control of the owner; whether that ends up empowering or abusive depends on who the owners are, and why.
https://github.com/kubernetes/ingress-nginx/issues/1681#issu...
> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.
That was 5-ish years ago though. I hope there are better ways than the cert hack now.
The secure context spec [1] addresses this-- localhost should only be considered potentially trustworthy if the agent complies with specific name resolution rules to guarantee that it never resolves to anything except the host's loopback interface.
[1] https://w3c.github.io/webappsec-secure-contexts/#localhost
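The rule boils down to "only treat the name as trustworthy if every address it resolves to is a loopback address"; a rough sketch of that check in Python:

    # Sketch: the secure-context idea for "localhost": trust the name only if it
    # can never resolve to anything but the loopback interface.
    import ipaddress, socket

    def resolves_only_to_loopback(name):
        infos = socket.getaddrinfo(name, None)
        return all(ipaddress.ip_address(info[4][0]).is_loopback for info in infos)

    print(resolves_only_to_loopback("localhost"))    # True on a sane system
    print(resolves_only_to_loopback("example.com"))  # False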
Regarding the certificates, if you don’t want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which when compromised can be used to hijack everything under something.example.com).
An intermediate CA with name constraints (can only sign certificates with names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. Not sure which CA can issue it (letsencrypt is probably out) and how well supported it is
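For reference, the name-constraint extension itself is easy to express if you're rolling your own internal CA; a sketch with Python's cryptography package (names and lifetimes are illustrative, and real CA handling needs far more care than this):

    # Sketch: an intermediate CA that may only sign names under something.example.com.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    root_key = ec.generate_private_key(ec.SECP256R1())  # stands in for your real root key
    ca_key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.utcnow()

    intermediate = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root")]))
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # The interesting part: per RFC 5280 this constraint covers the name
        # and everything under it.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("something.example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(private_key=root_key, algorithm=hashes.SHA256())
    )
    print(intermediate.extensions.get_extension_for_class(x509.NameConstraints))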
X.509 trust just doesn't work if multiple entities can get a cert for the same CN under the same root-of-trust, as then one of the issuees can impersonate the other.
If public issuers would sign .internal certs, then presuming you have access to a random org's intranet, you could MITM any machine in that org by first setting up your own intranet with its own DNS, creating .internal records in it, getting a public issuer to issue certs for those domains, and then using those certs to impersonate the .internal servers in the org-intranet you're trying to attack.
> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation
So if you want multiple subdomains handled by the same ip address and using the same wildcard TLS cert, and chrome re-uses the connection for a different subdomain, nginx needs to handle/parse the http, and http-proxy to the backends. In this ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it, it can't look at the contents of the traffic. This is a limitation of http/tls/tcp, not of nginx.
If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.
I noticed sometimes it responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.
I believe this regression came with Chrome 40 which brought H2 support. I know Chrome 38 never had this issue.
EDIT: Looks like I misunderstood what Google having .dev meant in the above discussion; domains using it are available to purchase through their registrar (or more precisely resellers since I guess they don't sell directly anymore)
All of this would most likely need to be an inside job with a fair amount of criminal energy. At that level you'd probably also have other attack vectors to consider.
Anecdotally, I've seen name constraints kick in for both Firefox and Chrome on a Linux distro, but I can't comment more broadly.
> Commercial, any commercial related domains meeting the second level requirements.
Often? Only really in the case of a corporate computer. But Android locks these things down for everyone. In fact corporate owners can do things normal users can't.
For example I've heard (not confirmed) that with a Knox license you can add root CAs on Samsung. I don't think it's still possible with other MDMs or other vendors.
That is very much not true. Most corporate networks I've ever been on trust the internal network. Whether or not you think they should, they do.
Encrypting all network traffic between endpoints does nothing to actively harm security.
On the contrary, that's the more common case. It's the case with any computer at work (unless you're IT dept), in any work - there's hardly a job now that doesn't have one interacting with computers in some form or fashion, and those computers are very much not employee-owned. Same is the case in school setting, and so on. About the only time you can expect to own a computer is when you bought it yourself, with your own cash. The problem is, even when you do, everything is set up these days to deny you your ownership rights.
Although, ideally, it would be possible to limit the scope of a CA when adding it to the trust store, and not have to rely on the creator of the CA setting the right parameters.
And the larger the scale, the more benefits you get from avoiding internal-specific resolution.
I have been using .l personally for a couple of years and it works fine, except Chrome won't recognize it as a TLD and starts a Google search instead. Once it has been visited a couple of times, it autocompletes as a webpage, so it's quite usable after all.
Non routability was a design feature.
I've been out of Enterprise IT for 15 years - but if I was going to do an IPv6 deployment today, I would strongly consider NAT6 prefix replacement. It offers 90% of the benefits of native IPv6 addresses, doesn't conflate "security" and "flexibility" (prefix replacement is just a straight 1:1 passthrough - globally routable), and who wants to go update all their router configs and DNS every time they change their upstream? Ugh.