
563 points joncfoo | 124 comments
1. 8organicbits ◴[] No.41205729[source]
My biggest frustration with .internal is that it requires a private certificate authority. Lots of organizations struggle to fully set up trust for the private CA on all internal systems. When you add BYOD or contractor systems, it's a mess.

Using a publicly valid domain offers a number of benefits, like being able to use a free public CA like Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.

Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS certificate validation, and use your own internal DNS server for all private use.

[1] https://www.getlocalcert.net/

replies(12): >>41206030 #>>41206106 #>>41206231 #>>41206513 #>>41206719 #>>41206776 #>>41206828 #>>41207112 #>>41208240 #>>41208353 #>>41208964 #>>41210736 #
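The ACME side of this approach is small enough to sketch. Per RFC 8555 §8.4, the TXT record a DNS-01 validation expects is derived from the challenge token and the ACME account key thumbprint; the token and thumbprint below are placeholder values, not real ones:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    """Value to publish at _acme-challenge.<name> for ACME DNS-01."""
    # Key authorization = "<token>.<thumbprint>"; the TXT record holds the
    # unpadded base64url-encoded SHA-256 digest of that string (RFC 8555 §8.4).
    key_authorization = f"{token}.{account_key_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs, just to show the shape of the published record:
print(dns01_txt_value("example-token", "example-thumbprint"))
```

Because validation only reads that TXT record from public DNS, the A/AAAA records for the name itself can stay on a purely internal DNS server, which is what makes the "public domain, private hosts" setup work.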
2. xer0x ◴[] No.41206030[source]
Oh neat, thanks for sharing this idea
3. TheRealPomax ◴[] No.41206106[source]
I'm pretty sure that if letsencrypt localhost certs work, they'll work fine with .internal too?
replies(1): >>41206143 #
4. merb ◴[] No.41206143[source]
Let's Encrypt does not support certs for localhost.
5. mschuster91 ◴[] No.41206231[source]
> Lots of organizations struggle to fully set up trust for the private CA on all internal systems.

Made worse by the fact phone OSes have made it very difficult to install CAs.

replies(1): >>41206466 #
6. booi ◴[] No.41206466[source]
And on some platforms and configurations, impossible.

Same with the .dev domain

replies(2): >>41206630 #>>41206687 #
7. yjftsjthsd-h ◴[] No.41206513[source]
Do you mean to say that your biggest frustration with HTTPS on .internal is that it requires a private certificate authority? Because I'm running plain HTTP to .internal sites and it works fine.
replies(6): >>41206577 #>>41206657 #>>41206669 #>>41208198 #>>41208358 #>>41210486 #
8. lysace ◴[] No.41206577[source]
There's some "every packet shall be encrypted, even in minimal private VPCs" lore going on. I'm blaming PCI-DSS.
replies(5): >>41206652 #>>41206686 #>>41206797 #>>41207668 #>>41207971 #
9. jhardy54 ◴[] No.41206630{3}[source]
.dev isn’t a TLD for internal use though, do you have the same problem when you use .test?
replies(1): >>41209682 #
10. bruce511 ◴[] No.41206652{3}[source]
The big problem with running unencrypted HTTP on a LAN is that it's terribly easy for (most) LANs to be compromised.

Let's start with the obvious; wifi. If you're visiting a company and ask the receptionist for the wifi password you'll likely get it.

Next are ethernet ports. Sitting waiting in a meeting room, plug your laptop into the ethernet port and you're in.

And of course it's not just hardware, any software running on any machine makes the LAN just as vulnerable.

Sure, you can design a LAN to be secure. You can make sure there's no way to get onto it. But the -developer- and -network maintainer- are 2 different guys, or more likely different departments. As a developer are you convinced the LAN will be as secure in 10 years as it is today? 5 years? 1 year after that new intern arrives and takes over maintenance 6 weeks in?

What starts out as "minimal private VPC" grows, changes, is fluid. Treating it as secure today is one thing. Trusting it to remain secure 10 years from now is another.

In 99.9% of cases your LAN traffic should be secure. This is the message -developers- need to hear. Don't rely on some other department to secure your system. Do it yourself.

replies(4): >>41207245 #>>41207321 #>>41207535 #>>41212678 #
11. j1elo ◴[] No.41206657[source]
Try running anything more complicated than a plain and basic web server! See what happens if you attempt to serve something that browsers deem to require a mandatory "Secure Context", so they will reject running it when using HTTP.

For example, you won't be able to run internal videocalls (no access to webcams!), or a web page able to scan QR codes.

Here's the full list:

* https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

A true hassle for internal testing between hosts, to be honest. I just cannot run an in-development video app on my PC and connect from a phone or laptop to do some testing, without first worrying about certs at a point in development where they are superfluous and a loss of time.

replies(1): >>41206727 #
12. this_user ◴[] No.41206669[source]
A lot of services default to HTTPS. For instance, try setting up an internal Gitlab instance with runners, pipelines, and package/container registries that actually works. It's an absolute nightmare, and some things outright won't work. And if you want to pull images from HTTP registries with Docker, you have to enable that on every instance for each registry separately. You'd be better off registering a real domain, using Let's Encrypt with the DNS challenge, and setting up an internal DNS for your services. That is literally an order of magnitude less work than setting up HTTP.
13. kortilla ◴[] No.41206686{3}[source]
Hoping datacenter to datacenter links are secure is how the NSA popped Google.

Turn on crypto, don’t be lazy

replies(1): >>41206832 #
14. kortilla ◴[] No.41206687{3}[source]
.dev is a real domain
15. wkat4242 ◴[] No.41206719[source]
The problem with internal CAs is also that it's really hard to add them on some OSes now. Especially on android since version 7 IIRC, you can no longer get certs into the system store, and every app is free to ignore the user store (I think it was even the default to ignore it). So a lot of apps will not work with it.
replies(2): >>41207082 #>>41208303 #
16. akira2501 ◴[] No.41206727{3}[source]
localhost is a secure context. So presumably we're just waiting for .internal to be added to the whitelist.
replies(4): >>41206781 #>>41208009 #>>41208879 #>>41208887 #
17. jacooper ◴[] No.41206776[source]
This is why I'm using an FQDN for my home lab. I'm not going to set up a private CA for this; I can just use acme-dns and get a cert that will work everywhere, for free!
18. JonathonW ◴[] No.41206781{4}[source]
Unlikely. Localhost can be a secure context because localhost traffic doesn't leave your local machine; .internal names have no guarantees about where they go (not inconceivable that some particularly "creative" admin might have .internal names that resolve to something on the public internet).
replies(1): >>41208203 #
19. yarg ◴[] No.41206797{3}[source]
Blame leaked documents from the intelligence services.

No one really bothered until it was revealed that organisations like the NSA were exfiltrating unencrypted internal traffic from companies like Google with programs like PRISM.

replies(1): >>41208794 #
20. prussian ◴[] No.41206828[source]
Just be mindful that any certs you issue in this way will be public information[1] so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well and I can still see them renewing them, including an unfortunate wildcard cert which wasn't me.

[1] https://crt.sh/

replies(3): >>41206969 #>>41208715 #>>41208926 #
21. otabdeveloper4 ◴[] No.41206832{4}[source]
Pretty sure state-level actors sniffing datacenter traffic is literally the very last of your security issues.

This kind of theater actively harms your organization's security, not helps it. Do people not do risk analysis anymore?

replies(5): >>41206889 #>>41208432 #>>41208868 #>>41210156 #>>41213945 #
22. shawnz ◴[] No.41206889{5}[source]
Taking defense in depth measures like using https on the local network is "theatre" that "actively harms your organization's security"? That seems like an extreme opinion to me.

Picking some reasonable best practices like using https everywhere for the sake of maintaining a good security posture doesn't mean that you're "not doing risk analysis".

replies(1): >>41208731 #
23. Helmut10001 ◴[] No.41206969[source]
Just use wildcard certs and internal subdomains remain internal information.
replies(2): >>41207432 #>>41210084 #
24. Terr_ ◴[] No.41207082[source]
Speculating a bit out of my depth here, but I'm under the impression that most of those sometimes-configurable OS-level CA lists are treated as "trust anything consistent with this data", as opposed to "only trust this CA record for these specific domain-patterns because that's the narrow purpose I chose to install it for."

So there are a bunch of cases where we only want the second (simpler, lower-risk) case, but we have to incur all the annoyance and risk and locked-down-ness of the first use-case.

replies(1): >>41208549 #
25. seb1204 ◴[] No.41207112[source]
https://letsencrypt.org/ does not work?
replies(1): >>41207166 #
26. francislavoie ◴[] No.41207166[source]
No, that's a public CA. No registrar will be allowed to sell .internal domains, so no public DNS server will resolve .internal, and that's a requirement for Let's Encrypt to validate that you control the domain. So you must use a private CA (one that you create yourself, with something like Smallstep, Caddy, or OpenSSL commands) and you'll need to install that CA's root certificate on any devices you want to be able to connect to your server(s) that use .internal.
27. gorgoiler ◴[] No.41207245{4}[source]
Well said. I used to be of the mindset that if I ran VLANs I could at least segregate the good guys from the evil AliExpress wifi connected toasters. Now everything feels like it could become hostile at any moment and so, on that basis, we all share the same network with shields up as if it were the plain, scary Internet. It feels a lot safer.

I guess my toaster is going to hack my printer someday, but at least it won’t get into my properly-secured laptop that makes no assumptions the local network is “safe”.

28. slimsag ◴[] No.41207321{4}[source]
Also, make sure your TLS certificates are hard-coded/pinned in your application binary. Just like the network, you really cannot trust what is happening on the user's system.

This way you can ensure you as the developer have full control over your applications' network communication; by requiring client certificates issued by a CA you control, you can assert there is no MITM even if a sysadmin, user, or malware tries to install a proxy root CA on the system.

Finally, you can add binary obfuscation / anticheat mechanisms used commonly in video games to ensure that even if someone is familiar with the application in question they cannot alter the certificates your application will accept.

Lots of e.g. mobile banking apps, etc. do this for maximal security guarantees.

replies(4): >>41207549 #>>41207722 #>>41208226 #>>41208246 #
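Whatever one makes of the tradeoffs the replies raise, the pin check itself is mechanically small. A hedged sketch, hashing the whole DER-encoded certificate (real deployments more often pin the SubjectPublicKeyInfo so the cert can be rotated under the same key):

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Hypothetical pin check: compare the SHA-256 of the server's
    # DER-encoded certificate against a value embedded in the application.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

# In practice der_cert would come from e.g.
# ssl.SSLSocket.getpeercert(binary_form=True) after the handshake,
# and the connection would be aborted when the check fails.
```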
29. qmarchi ◴[] No.41207432{3}[source]
There's a larger risk that if someone breaches a system with a wildcard cert, then you can end up with them being able to impersonate _every_ part of your domain, not just the one application.
replies(3): >>41207849 #>>41208648 #>>41209814 #
30. Spooky23 ◴[] No.41207535{4}[source]
The big issue with encrypted HTTP on the local LAN is that you’re stuck running a certificate authority, ignoring TLS validation, or exposing parts of your network in the name of transparency.

Running a certificate authority is one of those “a minute to learn, a lifetime to master” scenarios.

You are often trading a “people can sniff my network” scenario for a “compromise the CA someone set up 10 years ago that we don’t touch” scenario.

replies(1): >>41209487 #
31. PokestarFan ◴[] No.41207549{5}[source]
At some point you have to wonder if your app even matters that much.
replies(1): >>41209507 #
32. laz ◴[] No.41207668{3}[source]
Exactly what an NSA puppet account would say!

Don't believe the hype. Remember the smiley from "SSL added and removed here"

https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...

replies(1): >>41219942 #
33. frogsRnice ◴[] No.41207722{5}[source]
Pinning is very complex, there is always the chance that you forget to update the pins and perform a denial of service against your own users. At the point where the device itself is compromised, you can’t really assert to anything. Furthermore, there is always the risk that your developers implement pinning incorrectly and introduce a chain validation failure.

The anticheat/obfuscation mechanisms used by mobile apps are also trivial to bypass using instrumentation - i.e. Frida codeshare. I know you aren’t implying that people should use client-side controls to protect an app running on a device and an environment that they control, but in my experience even some technical folk will try to do this.

34. politelemon ◴[] No.41207849{4}[source]
It's the opposite - there is a risk, but not a larger risk. Environment traversal is easier through a certificate transparency log, there is almost zero work to do. Through a wildcard compromise, the environment is not immediately visible. It's much safer to do wildcard for certs for internal use.
replies(1): >>41208276 #
35. unethical_ban ◴[] No.41207971{3}[source]
That's some "it's okay to keep my finger on the trigger when the gun is unloaded" energy.
36. Too ◴[] No.41208009{4}[source]
No. The concept of a DMZ died decades ago. You could still be MITM within your company intranet. Any system designed these days should follow zero-trust principles.
replies(2): >>41208347 #>>41213871 #
37. IshKebab ◴[] No.41208198[source]
A lot of modern web features now require HTTPS.
38. Wicher ◴[] No.41208203{5}[source]
One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).

So it's not a rock-hard guarantee that traffic to localhost never leaves your system. It would be unconventional and uncommon for it to, though, except for the likes of us who like to ssh-tunnel all kinds of things on our loopback interfaces :-)

The sweet spot of security vs convenience, in the case of browsers and awarding "secure origin status" for .internal, could perhaps be on a dynamic case by case basis at connect time:

- check if it's using a self-signed cert
- offer a TOFU procedure if so
- if not, verify as usual

Maaaaybe check whether the connection is to an RFC1918 private range address as well. Maybe. It would break proxying and tunneling. But perhaps that'd be a good thing.

This would just be for browsers, for the single purpose of enabling things like serviceworkers and other "secure origin"-only features, on this new .internal domain.

replies(3): >>41208352 #>>41210465 #>>41210508 #
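The "localhost doesn't necessarily stay local" point is easy to check against the system resolver; a small sketch (output depends on the machine's configuration):

```python
import socket

# Ask the system resolver what "localhost" actually maps to. On a normal
# machine this comes from /etc/hosts (or the OS equivalent) and yields
# loopback addresses, but as noted above that's convention, not a guarantee.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", None)}
print(addrs)  # typically loopback: 127.0.0.1 and/or ::1
print(all(a.startswith("127.") or a == "::1" for a in addrs))
```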
39. swiftcoder ◴[] No.41208226{5}[source]
In practice pinning tends to be very "best effort", if not outright disadvantageous.

All our apps had to auto-disable pinning less than a year after the build date, because if the user hadn't updated the app by the time we had to renew all our certs... they'd be locked out.

Also dealt with the fallout from a lovely little internet-of-things device that baked cert pinning into the firmware, but after a year on store shelves the clock battery ran out, so they booted up in 1970 and decided the pinned certs wouldn't become valid for ~50 years :D

40. AndyMcConachie ◴[] No.41208240[source]
If you read the document that originally lead the ICANN Board to reserve .INTERNAL (SAC113) you will find this exact sentiment.

The SSAC's recommendation is to only use .INTERNAL if using a publicly registered domain name is not an option. See Section 4.2.

https://itp.cdn.icann.org/en/files/security-and-stability-ad...

41. Too ◴[] No.41208246{5}[source]
This is way overkill, unless you are making a nuclear rocket launch application. If you can not trust the system root CA, the whole internet breaks down.

You will also increase the risk that your already understaffed ops-team messes up and creates even worse exposure or outages, while they are trying to figure out what ssl-keygen does.

42. ixfo ◴[] No.41208276{5}[source]
Environment visibility is easy to get. If you pwn a box which has foo.internal, you can now impersonate foo.internal. If you pwn a box which has *.internal, you can now impersonate super-secret.internal and everything else, and now you're a DNS change away from MITM across an entire estate.

Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...

43. thaumasiotes ◴[] No.41208303[source]
> The problem with internal CAs is also that it's really hard to add them on some OSes now. Especially on android since version 7 IIRC

That's because the purpose of certificate pinning is to protect software from the user. Letting you supply your own certificates would defeat the purpose of having them.

replies(3): >>41208737 #>>41208743 #>>41210474 #
44. tsimionescu ◴[] No.41208347{5}[source]
Sure, but people still need to test things, and HTTPS greatly complicates things. Browsers' refusal to make it possible to run anything unencrypted when you know what you're doing is extremely annoying, and has caused significant losses of productivity throughout the industry.

If they're so worried about users getting duped to activate the insecure mode, they could at least make it a compiler option and provide an entirely separate download in a separate place.

Also, don't get me started on HSTS and HSTS preloading making it impossible to inspect your own traffic with entities like Google. It's shameful that Firefox is even more strict about this idiocy than Chrome.

replies(2): >>41208716 #>>41210024 #
45. im3w1l ◴[] No.41208352{6}[source]
localhost is pretty special in that it's like the only domain typically defined in a default /etc/hosts.
46. 7bit ◴[] No.41208353[source]
> My biggest frustration with .internal is that it requires a private certificate authority

So don't use it?

47. jve ◴[] No.41208358[source]
I consider HTTPS to be easier to run - you get less trouble in the end.

As mentioned, some browser features are HTTPS-only. You get security warnings on HTTP. Many tools now default to HTTPS - like newer SQL Server drivers. Dev env must resemble prod very closely, so having HTTP in dev and HTTPS in prod is asking for pain and trouble. It forces you to have some kind of expiration registry/monitoring and renewal procedures. And you go through the dev env first, gain confidence, and then prod.

Then there are systems where client certificate is mandatory and you want to familiarize yourself already in dev/test env.

Some systems even need additional configuration to allow OAuth via HTTP, and that makes me feel dirty, so I'd rather not do it. Why do it if PROD won't have HTTP? And if you didn't know such configuration was needed, you'd be stuck troubleshooting that system, trying to figure out why it doesn't work with your simple setup.

Yeah, we have an internal CA set up, so issuing certs is pretty easy and mostly automated, and once you go all-in on HTTPS you learn why/how things work and why they may not, and gain more experience troubleshooting HTTPS stuff. You have no choice actually - the world has moved to TLS-secured protocols and there is no way around getting familiar with security certificates.

replies(1): >>41208609 #
48. soraminazuki ◴[] No.41208432{5}[source]
NSA sniffs all traffic through various internet choke points in what's known as upstream surveillance. It's not just data center traffic.

https://www.eff.org/pages/upstream-prism

These kinds of risks are obvious, real, and extensively documented. I can't imagine why anyone serious about improving security for everyone would want to downplay and ridicule them.

49. 8organicbits ◴[] No.41208549{3}[source]
Yes! Context-specific CA trust would be great, but AFAIK isn't possible yet. Even name constraints, which are domain name limitations a CA or intermediate cert places on itself, are only slowly gaining support in relevant software [1].

As a contractor, I'll create a per-client VM for each contract and install any client network CAs only within that VM.

[1] https://alexsci.com/blog/name-non-constraint/

50. 8organicbits ◴[] No.41208609{3}[source]
At my first job out of college we built an API and a couple official clients for it. The testing endpoint used self-signed certs so we had to selectively configure clients to support it. Right before product launch we caught that one of our apps was ignoring certificate verification in production too due to a bug. Ever since then I've tried to run publicly valid certificates on all endpoints to eliminate those classes of bugs. I still run into accidentally disabled cert validation doing security audits, it's a common mistake.
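That bug class is easy to picture in Python's stdlib terms: the gap between a verifying client context and the "make the warning go away" variant is two attribute assignments, which is exactly why it slips into production unnoticed.

```python
import ssl

# Default client-side context: full chain verification plus hostname check.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED and strict.check_hostname

# The workaround that self-signed test endpoints tempt people into, and
# that audits keep finding in production: accepts any certificate at all.
insecure = ssl.create_default_context()
insecure.check_hostname = False  # must be disabled before verify_mode
insecure.verify_mode = ssl.CERT_NONE
```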
51. eru ◴[] No.41208648{4}[source]
Can't you have a limited wildcard?

Something like *.for-testing-only.company.com?

replies(1): >>41208724 #
52. zikduruqe ◴[] No.41208715[source]
I use https://github.com/FiloSottile/mkcert for my internal stuff.
53. the8472 ◴[] No.41208716{6}[source]
To inspect your own traffic you can use SSLKEYLOGFILE and then load it into wireshark.
replies(1): >>41208907 #
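Python's ssl module is one stack that does expose this hook (CPython 3.8+ built against OpenSSL 1.1.1+). It does not read the SSLKEYLOGFILE environment variable on its own, so a context has to be pointed at the log file explicitly; a minimal sketch:

```python
import os
import ssl
import tempfile

# Point the context at a key log file; TLS session secrets for connections
# made with this context are appended there in the NSS key log format that
# Wireshark understands. Path here is just an illustration.
keylog_path = os.path.join(tempfile.gettempdir(), "tls-keys.log")
ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path
```

Tools like curl and the major browsers honor SSLKEYLOGFILE directly, but as the reply notes, support is per-application, not universal.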
54. kevincox ◴[] No.41208724{5}[source]
Yes, but then you are putting more information into the publicly logged certificate. So it is a tradeoff between scope of certificate and data leak.

I guess you can use a pattern like {human name}.{random}.internal but then you lose memorability.

replies(2): >>41208788 #>>41208805 #
55. the8472 ◴[] No.41208731{6}[source]
I have seen people disabling all cert validation in an application because SSL was simultaneously required and no proper CA was provided for internal things. The net effect was thus that even the traffic going to the internet was no longer validated.
56. okanat ◴[] No.41208737{3}[source]
Protect the software from the user? Why are you giving them the software then?
replies(3): >>41208848 #>>41208936 #>>41208942 #
57. Arch-TK ◴[] No.41208743{3}[source]
Certificate pinning and restricting adding custom certificates to your OS except if you're using MDM are two completely unrelated things. Overriding system trust doesn't affect certificate pinning and certificate pinning is no longer recommended anyway.
replies(1): >>41209950 #
58. 8organicbits ◴[] No.41208788{6}[source]
I've considered building tools to manage decoy certificates, like it would register mail.example.com if you didn't have a mail server, but I couldn't justify polluting the cert transparency logs.
59. baq ◴[] No.41208794{4}[source]
Echelon was known about before Google was even a thing. I remember people adding Usenet headers with certain keywords. Wasn’t much, but it was honest work.
60. lacerrr ◴[] No.41208805{6}[source]
Made up problem, that approach is fine.
61. TeMPOraL ◴[] No.41208848{4}[source]
Most software is tools of control and exploitation, and remains in an adversarial relationship with its users. You give software to users to make them make money for you; you protect the software from users so they don't cut you out, or use software to do something you'd rather they don't do.

Software that isn't like that is in a minority, and most of it is only used to build software that is like that.

replies(2): >>41209321 #>>41213802 #
62. TimTheTinker ◴[] No.41208868{5}[source]
Found the NSA goon.

Seriously, your statement is demonstrably wrong. That's exactly the sort of traffic the NSA actively seeks to exploit.

63. miah_ ◴[] No.41208879{4}[source]
Years back I ran into an issue at work because somebody named their computer "localhost" on a network with automatic DNS registration. Because of DNS search path configuration it would resolve. So, "localhost" ended up resolving to something other than an address on 127.0.0.0/8! It was a fun discovery and fixed soon after I reported it.
64. TeMPOraL ◴[] No.41208887{4}[source]
Doesn't matter for mixed content, like e.g. when you run a client-side only app that happens to be loaded from a public domain over HTTPS, and want it to call out to an API endpoint running locally. HTTP won't fly. And good luck reverse-proxying it without a public CA cert either.
65. tsimionescu ◴[] No.41208907{7}[source]
Most apps don't support SSLKEYLOGFILE. OpenSSL, the most popular TLS library, doesn't support it.
replies(1): >>41211234 #
66. moontear ◴[] No.41208926[source]
I wish there was a way to remove public information such as this. Just like historical website ownership records. Maybe interesting for research purposes, but there is so much stuff in public records I don't want everyone to have access to. Should have thought about that before creating public records - but one may not be aware of all the ramifications of e.g. just creating an SSL cert with letsencrypt or registering a random domain name without privacy extensions.
67. evandrofisico ◴[] No.41208936{4}[source]
For example, to make it harder to reverse engineer the protocol between the app and the server.
68. noirscape ◴[] No.41208942{4}[source]
A lot of mobile software is just a UI around an external web API. The main reason Android makes it difficult to get the OS to accept an external certificate (you need root for it) is that without that restriction, you could just do a hosts hack through a VPN/DNS to redirect the app to your own version of that API. App manufacturers want to prevent this because it's a really easy way to snoop on which endpoints an app is calling, and to, say, build your own API clone of that app (which is desirable if you're selfhosting an open source server clone of said software, but all the official applications are owned by the corporate branch and don't let you self-configure the domain, or degrade the experience when you point them at a selfhosted domain).

It's extremely user-hostile since Android has a separate user store for self-signed CAs, but apps are free to ignore the user store and only accept the system store. I think by default only like, Chrome accepts the user store?

replies(1): >>41214844 #
69. layer8 ◴[] No.41208964[source]
I don’t understand the frustration. The use of .internal is explicitly for when you don’t want a publicly valid domain. Nobody is forcing anyone to use .internal otherwise.
replies(2): >>41209199 #>>41210387 #
70. pas ◴[] No.41209199[source]
the frustration comes when non-corporate-provisioned clients get on the .internal network and have trouble using the services because of TLS errors (or the problem is lack of TLS)

and the recommendation is to simply do "*.internal.example.com" with LetsEncrypt (using DNS-01 validation), so every client gets the correct CA cert "for free"

...

obviously if you want mTLS, then this doesn't help much. (but still, it's true that using a public domain has many advantages, as having an airgapped network too)

replies(2): >>41209307 #>>41209508 #
71. layer8 ◴[] No.41209307{3}[source]
You’re basically saying that .internal can cause frustration when it is used without good reason. Fair enough, but also not surprising. When it is used for the intended reasons though, then there’s just no other solution. It’s a trade-off between conflicting goals. “Simply do X instead” doesn’t remove the trade-off.
replies(1): >>41210211 #
72. cobbal ◴[] No.41209321{5}[source]
It's interesting that cert pinning cuts both ways though. It can also be a tool to give users power against the IT department (typically indistinguishable from malware)
replies(1): >>41210412 #
73. bruce511 ◴[] No.41209487{5}[source]
I agree that setting up a self-signed CA is hard, and harder to keep going.

However, the DNS challenge allows you to map an internal name to an IP address. The only real information that leaks is the subnet address of my LAN. And given the choice of that or unencrypted traffic I'll take that all day long.

74. bruce511 ◴[] No.41209507{6}[source]
The App probably not. The server maybe, the data probably.
75. 8organicbits ◴[] No.41209508{3}[source]
I'll add that anyone using VMs or containers will also run into trust issues without extra configuration. I've seen lots of contractors resort to just ignoring certificate warnings instead of installing the corporate certs for each client they work with.
76. dijit ◴[] No.41209682{4}[source]
gonna go ahead and cast shade at Google because of how they handled that.

Their original application for .dev was written to "ensure its reserved use for internal projects - since it is a common internal TLD for development" - then once granted a few years later they started selling domains with it.

** WITH HSTS PRELOADING ** ensuring that all those internal dev sites they were aware of would break.

77. qwertox ◴[] No.41209814{4}[source]
I issue a wildcard cert for *.something.example.com.

All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.

Everything at *.something.example.com, if it is supposed to be privately accessible on the internet, is resolved by a custom DNS server which does not respond to `ANY`-requests and logs every request. You'd need to know which subdomains exist.

something.example.com has an `NS`-record entry with the domain name which points to the IP of that custom DNS server (ns.example.com).

The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.

replies(1): >>41210531 #
78. freedomben ◴[] No.41209950{4}[source]
They are certainly different things, but they're not unrelated. The inability of the user to change the system trust store is part of why certificate pinning is no longer (broadly) recommended.
replies(1): >>41213279 #
79. freedomben ◴[] No.41210024{6}[source]
Indeed. Nothing enrages me more as a user when my browser refuses to load a page and doesn't give me any way to override it.

Whose computer is this? I guess the machine I purchased doesn't belong to me, but instead belongs to the developer of the browser, who has absolutely no idea what I'm trying to do, what my background is and qualifications and what my needs are? It seems absurd to give that person the ultimate say over me on my system, especially if they're going to give me some BS about protecting me from myself for my own good or something like that. Yet, that is clearly the direction things are headed.

80. ivankuz ◴[] No.41210084{3}[source]
A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an http2 connection with a resolved IP address. If you happen to have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where the traffic will get messed up between services. To add to that - debugging that stuff becomes kind of wild, as it will keep reusing connections between browser windows (and maybe even different Chromium browsers)

I might be messing up technical details, as it's been a long time since I've debugged some grpc Kubernetes mess. All I wanted to say is, that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.

replies(2): >>41210199 #>>41211102 #
81. kortilla ◴[] No.41210156{5}[source]
It’s not theatre, it’s real security. And state level actors are absolutely not the only one capable of man in the middle attacks.

You have:

- employees at ISPs

- employees at the hosting company

- accidental network misconfigurations

- one of your own compromised machines now part of a ransomware group

- the port you thought was “just for internal” that a dev now opens for some quick testing from a dev box

Putting anything in open comms is one of the dumbest things you can do as an engineer. Do your job and clean that shit up.

It’s funny you mention risk analysis: plaintext traffic is one of the easiest things to compromise.

82. nightpool ◴[] No.41210199{4}[source]
Sounds like you need to get better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell and it's just setting yourself up for even more pain in the future
replies(1): >>41210476 #
83. nightpool ◴[] No.41210211{4}[source]
What do you see as the intended reasons with no other solutions?
replies(3): >>41210534 #>>41210948 #>>41211076 #
84. thayne ◴[] No.41210387[source]
My frustration is because using a private CA is more difficult than it should be.

You can't just add the CA to the system trust store on each device, because some applications, notably browsers and Java, use their own trust stores that you have to add it to separately.

You also can't scope the CA to just .internal, which means in a BYOD environment, you have to require your employees to trust you not to sign certs for other domains.

And then there is running the CA itself, which is more difficult than using Let's Encrypt.
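To make the first point concrete, here is a sketch of the per-store dance (the CA name is made up, and the tool names and paths are assumptions about a Debian-style Linux box; Firefox, Java, and the OS each need their own step):

```shell
# Generate a throwaway private CA just for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Example Internal CA" -keyout ca.key -out ca.crt

# System store -- covers most CLI tools, but NOT Firefox or Java:
#   sudo cp ca.crt /usr/local/share/ca-certificates/internal-ca.crt
#   sudo update-ca-certificates

# Java keeps its own store (Java 9+ '-cacerts' shortcut):
#   keytool -importcert -cacerts -alias internal-ca -file ca.crt -noprompt

# Firefox/Chromium on Linux use per-profile NSS databases:
#   certutil -A -n "Internal CA" -t "C,," -i ca.crt -d sql:"$HOME/.pki/nssdb"

# Confirm what we generated.
openssl x509 -in ca.crt -noout -subject
```

Each commented step has to be repeated (and kept up to date) on every device, which is exactly the toil being described.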

replies(1): >>41210658 #
85. TeMPOraL ◴[] No.41210412{6}[source]
Cert pinning often annoyingly works against both - software devs are a third party to both the organizational users and their IT dept overlords.

Trusted computing is similar, too. It's a huge win for the user in terms of security, as long as the user owns the master key and can upload their own signatures. If not, then it suddenly becomes a very powerful form of control.

The more fundamental issue is the distinction between "user" and "owner" of a computer - or its component, or a piece of software - as they're often not the same people. Security technologies assert and enforce control of the owner; whether that ends up empowering or abusive depends on who the owners are, and why.

replies(1): >>41213787 #
86. thayne ◴[] No.41210465{6}[source]
No, you can't. Besides the /etc/hosts point mentioned in the sibling, localhost is often hard-coded to use 127.0.0.1 without doing an actual DNS lookup.
87. kbolino ◴[] No.41210474{3}[source]
Isn't certificate pinning on the way out? e.g. https://blog.cloudflare.com/why-certificate-pinning-is-outda...
88. ivankuz ◴[] No.41210476{5}[source]
It was the latest nginx at the time. I actually found a rather obscure issue on GitHub that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.

replies(1): >>41210829 #
89. 8organicbits ◴[] No.41210486[source]
If you're on a laptop or phone that switches between WiFi networks then you are potentially spilling session cookies and other data unencrypted onto other networks that also happen to resolve .internal. HTTPS encrypts connections, but it also authenticates servers. The latter is important too.
90. JonathonW ◴[] No.41210508{6}[source]
> One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).

The secure context spec [1] addresses this-- localhost should only be considered potentially trustworthy if the agent complies with specific name resolution rules to guarantee that it never resolves to anything except the host's loopback interface.

[1] https://w3c.github.io/webappsec-secure-contexts/#localhost

91. brewmarche ◴[] No.41210531{5}[source]
This is the DNS setup I’d have in mind as well.

Regarding the certificates, if you don’t want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which when compromised can be used to hijack everything under something.example.com).

An intermediate CA with name constraints (can only sign certificates with names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. Not sure which CA can issue it (letsencrypt is probably out) and how well supported it is
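A minimal sketch of such a name-constrained CA using the openssl CLI (the domain and file names are placeholders, and this assumes OpenSSL 1.1+; whether clients actually enforce the constraint is the support question raised above):

```shell
# Config for a CA that may only sign names under .something.example.com.
cat > constraints.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = ca_ext
prompt = no
[dn]
CN = Constrained Internal CA
[ca_ext]
basicConstraints = critical,CA:TRUE
keyUsage = critical,keyCertSign,cRLSign
nameConstraints = critical,permitted;DNS:.something.example.com
EOF

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -config constraints.cnf -keyout nc-ca.key -out nc-ca.crt

# Show the constraint in the issued root.
openssl x509 -in nc-ca.crt -noout -text | grep -A1 "Name Constraints"
```

Marking the extension critical means a conforming client that doesn't understand name constraints should reject the chain rather than silently ignore the limit.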

replies(1): >>41211498 #
92. 8organicbits ◴[] No.41210534{5}[source]
The biggest benefit of .internal IMO is that it is free to use. Free domains used to be a thing, but after the fall of Freenom you're stuck with free subdomains.
replies(1): >>41211663 #
93. fleminra ◴[] No.41210658{3}[source]
The Name Constraints extension can limit the applicability of a CA cert to certain subdomains or IP addresses.
replies(1): >>41210934 #
94. derefr ◴[] No.41210736[source]
It would be impossible for .internal domains to be publicly CAed, because they're non-unique; the whole point of .internal domains is that, just like private-use IP space, anyone can reuse the same .internal DNS names within their own organization.

X.509 trust just doesn't work if multiple entities can get a cert for the same CN under the same root-of-trust, as then one of the issuees can impersonate the other.

If public issuers would sign .internal certs, then presuming you have access to a random org's intranet, you could MITM any machine in that org by first setting up your own intranet with its own DNS, creating .internal records in it, getting a public issuer to issue certs for those domains, and then using those certs to impersonate the .internal servers in the org-intranet you're trying to attack.

95. ploxiln ◴[] No.41210829{6}[source]
That's a misunderstanding in your use of this ingress-controller "ssl-passthrough" feature.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same ip address and using the same wildcard TLS cert, and chrome re-uses the connection for a different subdomain, nginx needs to handle/parse the http, and http-proxy to the backends. In this ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it, it can't look at the contents of the traffic. This is a limitation of http/tls/tcp, not of nginx.
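For what it's worth, plain nginx has the same shape of limitation in its stream module: with `ssl_preread`, routing is decided once from the ClientHello's SNI and never revisited for the life of the connection (hostnames and backend addresses below are made up):

```nginx
stream {
  # Route purely on the SNI from the initial ClientHello; nothing after
  # the handshake is ever re-inspected.
  map $ssl_preread_server_name $backend {
    a.internal.example.com  10.0.0.10:443;
    b.internal.example.com  10.0.0.11:443;
  }
  server {
    listen 443;
    ssl_preread on;
    proxy_pass $backend;
  }
}
```

So a reused HTTP/2 connection whose later requests carry a different :authority will still land on whichever backend the first SNI selected, which is exactly the misrouting described upthread.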

replies(1): >>41212088 #
96. thayne ◴[] No.41210934{4}[source]
How well supported is that?
replies(2): >>41212204 #>>41215579 #
97. layer8 ◴[] No.41210948{5}[source]
The reasons are explained in https://itp.cdn.icann.org/en/files/security-and-stability-ad....
98. NegativeK ◴[] No.41211076{5}[source]
As a side point, there _needs_ to be something equivalent. People were doing all sorts of bad ideas before, and they had all the problems of .internal as well as the additional problems the hacks were causing -- like using .dev and then dealing with the fallout when the TLD was registered.
99. therein ◴[] No.41211102{4}[source]
There is definitely that. There is also some sort of strange bug with Chromium-based browsers where you can get a tab to entirely fail making a certain connection. It will not even realize it is not connecting properly. That tab will be broken for that website until you close that tab and open a new one to navigate to that page.

If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.

I noticed sometimes it responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40 which brought H2 support. I know Chrome 38 never had this issue.

100. haradion ◴[] No.41211234{8}[source]
OpenSSL does provide a callback mechanism to allow for key logging, but the application does have to opt in. IIRC, at least Curl does support it by default.
replies(1): >>41212039 #
101. qwertox ◴[] No.41211498{6}[source]
I'm "ok" with that risk. It's less risky than other solutions, and there's also the issue that hijacked.something.example.com needs to be resolved by the internal DNS server.

All of this would most likely need to be an inside job requiring considerable criminal effort. At that level you'd probably also have other attack vectors to consider.

replies(1): >>41214165 #
102. powersnail ◴[] No.41211663{6}[source]
If `.internal` is for private-use only, they must be resolved by some sort of private or internal DNS. In that case, all domains are free for private-use anyway.
replies(1): >>41212267 #
103. tsimionescu ◴[] No.41212039{9}[source]
Yes, there are ways to do keylogging with OpenSSL. Even if the app doesn't support it, you can do it with LD_PRELOAD and external libraries that call those callbacks. But it's still a whole lot more work than just an env var, and then just not having all these problems in the first place, by avoiding unnecessary encryption. And it probably won't work on mobile.
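As a self-contained sketch of the non-env-var path: openssl's own client exposes the same keylog hook as the `-keylogfile` flag (assumes OpenSSL 1.1.1+; the port and file names are arbitrary):

```shell
# Throwaway self-signed cert for a local TLS listener.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -keyout s.key -out s.crt 2>/dev/null

# Local TLS server in the background.
openssl s_server -accept 14433 -cert s.crt -key s.key -quiet &
server_pid=$!
sleep 1

# Client connection; 'Q' closes the s_client session after the handshake,
# and the session secrets land in keys.log.
printf 'Q\n' | openssl s_client -connect 127.0.0.1:14433 \
  -keylogfile keys.log -quiet 2>/dev/null

kill "$server_pid"

# Wireshark can consume keys.log via the TLS "(Pre)-Master-Secret log" setting.
grep -c "SECRET\|CLIENT_RANDOM" keys.log
```

The point stands, though: this only works when you control the client; an app that never invokes the callback gives you nothing without LD_PRELOAD-style tricks.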
104. ivankuz ◴[] No.41212088{7}[source]
Thank you very much for such a clear explanation of what's happening. Yeah, I sensed that it wasn't a limitation of nginx per se: it was asked not to do TLS termination, so of course it can't extract the header from the encrypted bytes. As I needed it to do gRPC through ASP.NET, it was Kestrel's requirement to terminate TLS itself that forced me into ssl-passthrough, which probably comes from a whole different can of worms.
replies(1): >>41246325 #
105. 8organicbits ◴[] No.41212204{5}[source]
It's hard to say, but I'm super interested if anyone has statistics. Netflix built https://bettertls.com/ to answer these sorts of questions, but somehow forgot to validate constraints set at the root: https://github.com/Netflix/bettertls/issues/19

Anecdotally, I've seen name constraints kick in for both Firefox and Chrome on a Linux distro, but I can't comment more broadly.

106. 8organicbits ◴[] No.41212267{7}[source]
Unfortunately, that's not true in general. Google proved this with their handling of the .dev TLD. Security settings like the HSTS preload list can impact your internal network if you "squat" on a domain you don't own. Google added all of .dev to the HSTS preload list and now, if you use any domain under it, your browser will force you to use HTTPS.
107. xp84 ◴[] No.41212678{4}[source]
For most purposes, when wishing for non-HTTPS, we are talking about development or maybe a staging server of some sort. Maybe if we had state secrets people would be trying to plug into the LAN to snoop the traffic, but for 99.99% of developers the traffic between a testing instance and them is the most worthless thing ever. Worst case, you might find out what features we will release to the app in 2 weeks. The conflation of “SSL” with “cybersecurity” is unfortunate.
108. Arch-TK ◴[] No.41213279{5}[source]
Certificate pinning is mainly an obstacle to using an intercepting proxy to inspect and modify the traffic of an application. If you're doing that kind of stuff you already know how to bypass the annoying OS level certificate store restrictions or how to modify an application to disable certificate pinning. The reason certificate pinning is no longer broadly recommended is because of how it makes it more difficult to rotate certificates in the case of necessity, and has nothing to do with the restrictions certain operating systems place on easy installation of your own certificates.
109. wkat4242 ◴[] No.41213787{7}[source]
> The more fundamental issue is the distinction between "user" and "owner" of a computer - or its component, or a piece of software - as they're often not the same people.

Often? Only really in the case of a corporate computer. But Android locks these things down for everyone. In fact corporate owners can do things normal users can't.

For example I've heard (not confirmed) that with a Knox license you can add root CAs on Samsung. I don't think it's still possible with other MDMs or other vendors.

replies(1): >>41214231 #
110. wkat4242 ◴[] No.41213802{5}[source]
True. It's almost never to the benefit of the user. The same with "attestation" technologies.
111. bigstrat2003 ◴[] No.41213871{5}[source]
> The concept of a DMZ died decades ago.

That is very much not true. Most corporate networks I've ever been on trust the internal network. Whether or not you think they should, they do.

112. unethical_ban ◴[] No.41213945{5}[source]
Caring excessively about certain metrics while neglecting real security is harmful.

Encrypting all network traffic between endpoints does nothing to actively harm security.

113. Helmut10001 ◴[] No.41214165{7}[source]
This is also my thinking: if someone compromises the VM that is responsible for retrieving wildcard certs from Let's Encrypt, then you're probably busted anyway. Such a machine would usually sit at the center of the infrastructure, with limited need to be connected to from other machines.
replies(1): >>41217059 #
114. TeMPOraL ◴[] No.41214231{8}[source]
> Often? Only really in the case of a corporate computer.

On the contrary, that's the more common case. It's the case with any computer at work (unless you're IT dept), in any work - there's hardly a job now that doesn't have one interacting with computers in some form or fashion, and those computers are very much not employee-owned. Same is the case in school setting, and so on. About the only time you can expect to own a computer is when you bought it yourself, with your own cash. The problem is, even when you do, everything is set up these days to deny you your ownership rights.

115. Arch-TK ◴[] No.41214844{5}[source]
Android locking the system certificate store has nothing to do with preventing people from intercepting app traffic for the purpose of inspecting an application, and everything to do with preventing people from accidentally installing a malicious certificate which allows part or all of their traffic to be MITM-ed.
replies(1): >>41215815 #
116. layer8 ◴[] No.41215579{5}[source]
It's required by RFC 5280 (and predecessor), so it’s fairly well supported.
replies(2): >>41216496 #>>41219473 #
117. thaumasiotes ◴[] No.41215815{6}[source]
Those are literally the same thing.
replies(1): >>41221384 #
118. 8organicbits ◴[] No.41216496{6}[source]
Do you have any references for that? There are lots of RFCs that are weakly adopted or even ignored. When I tested Chrome they didn't support name constraints, but have since added support. I suspect other software is still lagging.
119. brewmarche ◴[] No.41217059{8}[source]
Probably most people would deem the risk negligible, but it’s still worth to mention it, since you should evaluate for yourself. Regarding the central machine: the certificate must not only be generated or fetched (which as you said probably will happen “at the center”) but also deployed to the individual services. If you don’t use a central gateway terminating TLS early the certificate will live on many machines, not just “at the center.”
replies(1): >>41231672 #
120. thayne ◴[] No.41219473{6}[source]
From the Chrome issue tracking support for this, it sounds like RFC 5280 requires it for intermediate CAs, but is ambiguous on whether it is required for root CAs (which, in this case, is where you want it). So Chrome didn't support it on root CAs until recently, at least on Linux.

Although, ideally, it would be possible to limit the scope of a CA when adding it to the trust store, and not have to rely on the creator of the CA setting the right parameters.

121. lysace ◴[] No.41219942{4}[source]
This "NSA puppet" is all for encrypting traffic between networks.

;-)

122. Arch-TK ◴[] No.41221384{7}[source]
No, there are legitimate reasons to install a certificate to intercept traffic as the owner of a device. But the same tools can be abused by malware and by malicious actors to intercept traffic. It's the same in a strictly technical sense, but not the same in intent. The intent is to prevent malicious abuse of the feature, not justified non-malicious use. It does make it harder to intercept application traffic, but that is an unintended consequence of the restriction, not its purpose.
123. Helmut10001 ◴[] No.41231672{9}[source]
You are absolutely right. And deployment, if set up carelessly, can open up additional vulnerabilities and holes. But there are also many ways to make the deployment quite robust (e.g. upload via push to a deploy server, distribute from there). ... and just by chance, I've written a small bash script that helps to distribute SSL certificates from a centrally managed "deploy" server 8) [1].

[1]: https://github.com/Sieboldianus/ssl_get

124. nightpool ◴[] No.41246325{8}[source]
> it is a kestrel requirement to do ssl termination

Couldn't you just pass it x-forwarded-proto like any other web server? or use a different self signed key between nginx and kestrel instead?
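A sketch of that alternative (server name, cert paths, and backend port are placeholders): terminate TLS at nginx and pass the original scheme downstream in the conventional header:

```nginx
server {
  listen 443 ssl;
  server_name app.internal.example.com;
  ssl_certificate     /etc/nginx/certs/app.crt;
  ssl_certificate_key /etc/nginx/certs/app.key;

  location / {
    proxy_set_header Host $host;
    # ASP.NET's ForwardedHeaders middleware can restore the scheme from this.
    proxy_set_header X-Forwarded-Proto $scheme;
    # Plain HTTP to Kestrel on loopback.
    proxy_pass http://127.0.0.1:5000;
  }
}
```

For gRPC specifically you'd swap `proxy_pass` for `grpc_pass` and enable HTTP/2 on the listener, since gRPC requires an end-to-end HTTP/2 hop.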