However, the alternatives suck as far as I know. I don't want to install my own CA certificate on all the various devices in the home, for instance, and keep it up to date.
With browsers making self-signing a PITA, what choices do I have?
But having an internal (even ACME-API-supporting) CA is no walk in the park either. If you can swallow the trade-off and design with publicly-known hostnames, I would highly recommend it.
There's always some annoying device/software/framework requiring its own little config dance to insert the root cert. Like outbound-proxy configuration, but almost worse.
I don’t even want to imagine what would happen if/when the root key needs to be rotated due to some catastrophic HSM problem.
Does Let's Encrypt support Subject Alt Names on the wildcard certs?
My experience suggests that wildcard certs work, but require a SAN entry for each "real" host because browsers don't trust the CN field anymore. e.g., my *.apps.blah cert doesn't work unless I include all of the things I use it on - homeassistant.apps.blah, nodered.apps.blah, etc.
Do Let's Encrypt certificates have something special that negates this requirement? Or am I completely wrong about the SAN requirement?
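For what it's worth, you can check exactly which names a given cert actually carries with something like this (the hostname is just a placeholder):

openssl s_client -connect homeassistant.apps.blah:443 -servername homeassistant.apps.blah \
  </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'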
Uhm, or you use split horizon DNS? Who in their right mind would leak all their internal DNS names into a public DNS zone?
I write iOS apps, and iOS requires that all internet communications be done with HTTPS.
It is possible to use self-signed certs, but you need to do a bit of work on the software to validate and approve them. I don't like doing that, as I consider it a potential security vector (you are constantly reading about development code that gets compiled into a release product and is subsequently leveraged by crooks).
I am working on a full-stack system. I can run the backend on my laptop, but the app won't connect to it, unless I do the self-signed workaround.
It's easier for me to just leave the backend on the hosted server. I hardly ever need to work on that part.
Eh, even in large organisations of expert IT users, the internal CA ends up training users to ignore certificate warnings.
Sure, maybe the certificate is set up right on officially issued laptops - but the moment someone starts a container, or launches a virtual machine, or uses some weird tool with its own certificate store, or has a project that needs a Raspberry Pi, or the boss gets himself an iPad? They'll start seeing certificate errors.
IMHO the risks created by users learning to ignore warnings are much greater than the risks from some outsider knowing that nexus.example.com exists.
Also ngrok.com works really well if you need to give other people access to your dev environment.
Plus all my services go through Tailscale, so although I am leaking internal hostnames via DNS, all those records point to is 100.* addresses
The kinds of hosts I have are an OPNsense router, Traefik servers, a UniFi controller, etc.
I use Let's Encrypt wildcard certs quite extensively, both in production use at $dayjob and on my home network, and have never encountered anything like this. The only "trick" to wildcard certs is that one for *.apps.blah won't be valid for apps.blah. The normal way to handle this is to request one with SANs *.apps.blah and apps.blah.
Similarly, it won't work for sub1.sub2.apps.blah. I don't run setups like this myself, but if you need it I'd recommend using a separate *.sub2.apps.blah cert for that, mainly due to the potential for DNS issues when LE is validating. Same thing with multiple top-level domains. The reason is that when renewing, if one of N validations fails, your certificate gets re-issued without the failed domain, which then means broken SSL. If you have completely separate certificates and validation of one fails, the old (working) version stays in place. With normal renewals happening 30 days before expiry, this means you have 29 days for the failure to resolve on its own, be fixed manually, etc., and LE even emails you a few days before expiry if a certificate hasn't been renewed.
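For reference, requesting it with certbot looks something like this (wildcards need a DNS-01 challenge; plugin/hook details for your DNS provider are omitted, and the domain is obviously a placeholder):

certbot certonly --manual --preferred-challenges dns -d 'apps.blah' -d '*.apps.blah'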
So if I want to encrypt traffic to "service1.example.com", "service2.example.com" and "service3.example.com" that all run on server A, I'll make three CNAME records that all point to "server-a.internal", and I'll just resolve "server-a.internal" in my local network. Obviously, anyone can query what "service1.example.com" points to, but they won't figure out anything beyond "server A".
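Roughly what that looks like from outside vs inside (the resolver address and answers are made up for illustration):

# Public DNS: anyone can see the CNAME, but it only points at a generic name.
dig +short service1.example.com CNAME
# -> server-a.internal.

# Internal resolver only (split horizon / local zone): the target actually resolves.
dig +short server-a.internal @10.0.0.53
# -> 10.0.0.10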
0. Implicit reliance on a working internet connection means any loss of ACME connectivity to the Let's Encrypt CA makes renewal of the cert or OCSP problematic. If the internet goes down, so does much of the intranet that doesn't otherwise rely on it.
1. Wildcard certs make setting up an attack on the network easier. You no longer need an issued cert for your malicious service, you just need to find a way to get/use the wildcard. You should know your services and the SANs for the certs, and these should be periodically audited.
Then you could give each server a different wildcard cert without exposing the full name to the certificate log, e.g. exchange.banana.example.com, log4j.grapefruit.com.
Ugly, but functional.
Alternatively, should the certificate transparency log rules be changed to not include the subdomain? Maybe what matters is that you know a certificate has been issued for a domain, and when, and that you have a fingerprint to blacklist or revoke. Knowing which actual subdomain a certificate is for is very convenient, but is it proportionate?
2. If you can't secure a wildcard cert, how does the same problem not apply to a root CA cert, which could also then do things like sign google.com certs that your internal users trust? That feels strictly worse. (I know there are cert extensions that allow restricting certs to a subdomain, but they're not universally supported and are still scoped as wide as a wildcard cert.)
A root CA cert is stored in a Gemalto or other boutique special HSM. It has an overwhelming security framework to protect it (if it's ever online): security officers to reset PINs with separate PINs, and an attestation framework to access its functions through 2 or more known agents with privileges separated. Even the keyboard connected to the device is cryptographically authenticated against the hardware to which it connects.
Commonly your root is even offline and unavailable (locked in a vault), and only comes out for new issuing CAs.
What if the app is on the same network as the server?
I've got a Denon A/V receiver that has an HTTP interface and the Denon iOS app is able to talk to it. I've watched this via a packet sniffer and it definitely is using plain HTTP.
Is there a tool that solves (some of) this that I just don't know about?
I've seen big companies do it manually, but it's a full time job, sometimes multiple full time jobs, and the result still has more steady-state problems (e.g. people leaving and certs expiring without notification) than letsencrypt.
I would consider keeping unique keys to be a best practice and an SOP, as it discourages bad behaviors like leaving private keys accessible on file servers or even in email.
The OS might have one. Each browser might have its own. For a developer, each language they use might need separate configuration to get its libraries to use the certificate.
The article mentions BYOD (bring your own device) but we don't allow personal devices to connect to internal services, so this isn't an issue for us.
You can use something like EasyRSA to set up an internal certificate authority and generate server certificates signed by that certificate authority. I started out using plain old OpenSSL for generating certificates, which EasyRSA uses under the hood, but I wish I had used EasyRSA from the start.
By the way, EasyRSA still isn't that easy, but it's better than using OpenSSL directly.
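For a rough idea of the happy path (EasyRSA 3.x; the hostname is just a placeholder):

./easyrsa init-pki
./easyrsa build-ca nopass                        # creates the CA key and self-signed root cert
./easyrsa build-server-full nas.home.lan nopass  # key + signed server cert in one step
# distribute pki/ca.crt to clients; install pki/issued/nas.home.lan.crt
# and pki/private/nas.home.lan.key on the server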
That's interesting. I wonder why Apple let that go by. I've had apps rejected, because they wouldn't use HTTPS. Maybe it's OK for a local WiFi connection. Even then, Apple has been fairly strict.
That said, I think that there are ways to register for exceptions.
Heck, even with most other certificate issuers I can get a cert in similar ways when controlling DNS.
I wrote about it a few years ago: https://blog.heckel.io/2018/08/05/issuing-lets-encrypt-certi...
This looks really interesting. Thanks! I'll see if I can get away with it.
The result is security of issuance - that is, near-complete confidence that certificates will only be used for domains you control (not what you need if you want to MITM, of course).
Also, ACME is generally easier and more reliable than other certificate rollover processes I've seen. I'm not sure if there are in-house PKI tools supporting it?
Depends on your organisation size though. Maybe your in-house PKI is fine, but it's not for everyone!
[Note 0] Revocation is of course a mess. Let's Encrypt isn't without fault either, particularly when used internally, since OCSP responders will need to be accessible from client devices.
I understand some startups are a bit more "go get your own computer". I think if they paid for it, it's still their device, but once you pay for it out of your own cash, yeah, MDM or root certs are a no-go.
I don't like distributing wildcard certs, as you then have a bigger problem if the cert is leaked.
When the cert is host specific you immediately know where the leak comes from and the scope of the leak is restricted.
Let a business pay $100/year for 10 internal hostnames.
Each other machine regularly picks up the current outputs from there via SFTP weekly and restarts whatever services. I'm not running anything that I need near-perfect availability on ATM, so it is no more complex than that. If you want to avoid unnecessary service restarts, check for changes and only do that part if needed, and/or use services that can be told to reload certs without a restart.
This does mean I'm using the same key on every host. If you want to be (or are required to be) more paranoid than that then this method won't work for you unmodified and perhaps you want per-name keys and certs instead of a wildcard anyway. For extra carefulness you might even separate the DNS service and certificate store onto different hosts.
Not sure how you'd do it with UniFi kit; my hosts are all things I can run shell scripts from cron on, running services like nginx, Apache, Zimbra, … that I can configure and restart via script.
[1] “manual” because each host has its own script doing the job, “ish” because once configured I don't need to do anything further myself
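For a rough idea, the per-host weekly job can be as small as this (a minimal sketch; the central host name, paths and the nginx reload are illustrative, not a description of my exact setup):

#!/bin/sh
set -e
cd /etc/ssl/private
# pull the current wildcard cert + key from the central box
sftp -b - certs@certhost:/srv/certs <<'EOF'
get fullchain.pem fullchain.pem.new
get privkey.pem privkey.pem.new
EOF
# only swap files and reload if something actually changed
if ! cmp -s fullchain.pem fullchain.pem.new; then
    mv fullchain.pem.new fullchain.pem
    mv privkey.pem.new privkey.pem
    systemctl reload nginx    # or restart services that can't reload certs in place
fi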
You need as much security on your CA as on the accounts in your org that have the authority to replace certs via your provisioning tools.
Random BYO devices I can understand, but in your cloud/datacenter it's so easy, just because you control everything.
One challenge with this is that some software doesn't use the operating system's CA chain by default. A lot of browsers use their own internal one and ignore what the OS does (by default).
If you're going to run a serious internal network, you'll need the basic things like NTP, DNS, a CA server, and, yes, some kind of MDM to distribute internal CA certificates to your people. The real PITA is when you don't have these in place.
For those curious about this extension, see RFC 5280 § 4.2.1.10:
There are many organisations not large enough to justify this setup, for which Lets Encrypt is clearly safer than a custom root CA.
In other words, if they do this they will be untrusted in browsers. They could offer this service on a secondary untrusted root if they wanted.
The trouble with EasyRSA (and similar tools) is that they make decisions for you and restrict what's possible and how. For example, I would always use name constraints with private roots, for extra security. But you're right about OpenSSL; to use it directly requires a significant time investment to understand enough about PKI.
I tried to address this problem with documentation and templates. Here's a step by step guide for creating a private CA using OpenSSL, including intermediate certificates (enabling the root to be kept offline), revocation, and so on: https://www.feistyduck.com/library/openssl-cookbook/online/c... Every aspect is configurable, and here are the configuration templates: https://github.com/ivanr/bulletproof-tls/tree/master/private...
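To give a flavour of the starting point (a minimal sketch only, not the guide's full root-plus-intermediate setup; subject name and lifetime are arbitrary):

# self-signed root, whose key you would then keep offline
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -subj "/CN=Example Internal Root CA" \
  -keyout root.key -out root.crt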
Doing something like this by hand is a fantastic way to learn more about PKI. I know I enjoyed it very much. It's much easier to handle because you're not starting from scratch.
Others in this thread have mentioned SmallStep's STEP-CA, which comes with ACME support: https://smallstep.com/docs/step-ca/getting-started That's definitely worth considering as well.
EDIT: The last time I checked, Google's CA-as-a-service was quite affordable: https://cloud.google.com/certificate-authority-service AWS has one too, but there's a high minimum monthly fee. Personally, if the budget allows for it, I would go with multiple roots from both AWS and GCP for redundancy.
Another shell-based ACME client I like is dehydrated. But for sending certs to remote systems from one central area, perhaps the shell-based GetSSL:
> Obtain SSL certificates from the letsencrypt.org ACME server. Suitable for automating the process on remote servers.
* https://github.com/srvrco/getssl
In general, what you may want to do is configure Ansible/Puppet/etc, and have your ACME client drop the new cert in a particular area and have your configuration management system push things out from there.
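As a rough sketch of the "drop the new cert in a particular area" step with certbot (RENEWED_LINEAGE is set by certbot for deploy hooks; the staging directory, and config management pushing from it, are just example assumptions):

#!/bin/sh
# e.g. saved as /etc/letsencrypt/renewal-hooks/deploy/stage-certs.sh
cp -L "$RENEWED_LINEAGE/fullchain.pem" /srv/cert-staging/
cp -L "$RENEWED_LINEAGE/privkey.pem" /srv/cert-staging/
# an Ansible/Puppet run (or a cron'd pull) then distributes /srv/cert-staging/ to the fleet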
Firefox was a challenge. But my understanding is that now, on Windows, it will import enterprise root certificates from the system store automatically.
https://bugzilla.mozilla.org/show_bug.cgi?id=1265113
https://support.mozilla.org/en-US/kb/how-disable-enterprise-...
To address the article: a recent related discussion, "Analyzing the public hostnames of Tailscale users" [1], indicates in its title one reason you might not want to use LE for internal hostnames. There was a discussion about intermediate CAs there as well [2], with some more details.
[0]: http://pkiglobe.org/name_constraints.html
[1] https://letsencrypt.org/docs/faq/#what-ip-addresses-does-let...
UPDATE: Apparently there is a DNS based solution that I wasn't aware of.
From there, it’s possible to use HTTPS negotiation.
I built out a PKI practice in a large, well-funded organization - even for us, it is difficult to staff PKI skill sets and commercial solutions are expensive. Some network dude running OpenSSL on his laptop is not a credible thing.
Using a public CA is nice as you may be able to focus more on the processes and mechanics adjacent to PKI. You can pay companies like Digicert to run private CAs as well.
The other risks can be controlled in other ways. For example, we set up a protocol where a security incident would be created if a duplicate private key was detected during scans that hit every endpoint at least daily.
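The detection itself doesn't have to be fancy. A rough sketch of the idea (hostnames are placeholders; a real scan would hit every endpoint, not two):

# fingerprint the public key each endpoint presents; the same fingerprint behind
# two different names means a shared private key
for host in service1.example.com service2.example.com; do
  printf '%s ' "$host"
  openssl s_client -connect "$host:443" -servername "$host" </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl dgst -sha256 -r
done | sort -k2
# then flag any fingerprint that appears more than once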
I want to deploy apps that use certs that don't expire. When they should be rotated, I want to do it on my own time. And I want a standard method to automatically replace them when needed, one that isn't dependent on some cron job firing at the correct time or else everything breaks.
Cert expiration is a ticking time bomb blowing up my services just because "security best practice" says an arbitrary, hard expiration time is the best thing. Security is not more important than reliability. For a single external load balancer for a website, we deal with it. But when you have thousands of the little bastards in your backend, it's just ridiculous.
That is, unless you're using some sort of public key pinning, but that's very rare to find today and works only in a custom application or something that supports DNSSEC/DANE.
[1] https://community.letsencrypt.org/t/whitelisting-le-ip-addre... [2] https://community.letsencrypt.org/t/whitelist-hostnames-for-... [3] https://community.letsencrypt.org/t/letsencrypt-ip-addresses...
If somebody gets any access to your local network, there are plenty of ways to enumerate them, and if they can't get access, what's the big deal?
I get that you may want to obfuscate your infrastructure details, but leaking infrastructure details on your server names is quite a red flag. It should really not happen. (Instead, you should care about the many, many ways people can enumerate your infrastructure details without looking at server names.)
Having an internal CA is a lot of work if you want to do it properly and not just for some testing. It is still rather hard to set up HTTPS properly without resorting to running a lot of infrastructure (DNS/VPN or some kind of public server) that you wouldn't need otherwise.
There's a company called Venafi that makes a product that lives in this space. It tries to auto-inventory certs in your environment and facilitates automatic certificate creation and provisioning.
From what I hear, it's not perfect (or at least, it wasn't as of a few years ago); yeah, some apps do wonky things with cert stores, so auto-provisioning doesn't always work, but it was pretty reliable for most major flavors of web server. And discovery was hard to tune properly to get good results. But once you have a working inventory, lifecycle management gets easier.
I think it's just one of those things where, if you're at the point where you're doing this, you have to accept that it will be at least one person's full-time job, and if you can't accept that... well, I hope you can accept random outages due to cert expiration.
I should note that I'm a contractor and I always bring my own tools, which includes the computer. That said, I still prefer to use my own device where I can. It's got the tools I use, configured how I like them, and I'm very familiar with all its quirks which means I have less context switching.
I have worked for clients with tighter regulation controls where I was required to use designated devices for certain tasks but that's been pretty much all of it.
I would rather not have to carry 2 computers around just because an organisation can't trust me to use my own computer, despite having hired me for a substantial amount of money to operate their production infrastructure.
I was under the impression that 'golden images' aren't generally encouraged as a Best Practice™ nowadays. The general momentum seems to me to be: use a vendor-default install image (partitioning however you want), and then go in with a configuration management system once it's on the network.
Basically: you keep your config 'recipes' up-to-date, not your image(s).
AFAIK the Apple bug was fixed in macOS 10.13.3 from what I can find online. [1]
[1]: https://security.stackexchange.com/questions/95600/are-x-509...
If you could install CAs only for a certain domain (defaulting to the name constraints, but actually set in the browser/OS), that would be fine. But installing a CA gives anyone with access to that CA the ability to make pretty much any valid cert, and your potential lack of security raises flags.
Would not recommend to anyone that they use publicly-valid letsencrypt certs for internal hostnames, since certificate issuance transparency logs are public and will expose all of the hostnames of your internal infrastructure.
Split-horizon DNS, I agree, is risky.
http://blog.dijit.sh/please-stop-advocating-wildcard-certifi...
Every single person that connects to any of your networks (very likely the sandboxed mobile one too) can find that name. Basically no place hides it internally. There is very little difference between disclosing it to thousands of the people that care the most about you and disclosing it to everybody in the world.
Also, I found that https://bettertls.com publishes details about which TLS features are supported on different platforms over time, and it appears that the latest test, in Dec 2021, shows most platforms support name constraints.
With that roadblock evaporated, I think this would be the perfect solution to a lot of organization- and homelab-level certificate woes. I'd really like to hear from a domain expert on how feasible it would be to automate for free public certs, ACME-style.
https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1....
https://wiki.mozilla.org/CA:NameConstraints
Although... I have no idea if browsers/applications/openssl/etc actually verify this - but they should.
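Something like this ought to tell you whether your local openssl does (all the names, files and lifetimes below are throwaway test material, not a recommended CA setup):

printf '%s\n' '[ v3_nc_ca ]' \
  'basicConstraints = critical, CA:true, pathlen:0' \
  'keyUsage = critical, keyCertSign, cRLSign' \
  'nameConstraints = critical, permitted;DNS:internal.example.com' > nc.cnf
printf '[leaf]\nsubjectAltName=DNS:google.com\n' > leaf.cnf

# root (most openssl builds mark a "req -x509" self-signed cert as CA:true by default)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=Test Root" \
  -keyout root.key -out root.pem
# name-constrained intermediate, signed by the root
openssl req -newkey rsa:2048 -nodes -subj "/CN=Constrained Intermediate" \
  -keyout int.key -out int.csr
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -days 30 -extfile nc.cnf -extensions v3_nc_ca -out int.pem
# leaf for an out-of-scope name, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -subj "/CN=google.com" -keyout leaf.key -out leaf.csr
openssl x509 -req -in leaf.csr -CA int.pem -CAkey int.key -CAcreateserial \
  -days 30 -extfile leaf.cnf -extensions leaf -out leaf.pem

openssl verify -CAfile root.pem -untrusted int.pem leaf.pem
# a verifier that enforces the constraint should reject this with "permitted subtree violation"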
(Disclaimer: I work at LE)
The former give you known limitations, the latter work fine for a while and you get a great feeling, and then disaster strikes out of the blue.
The same problem plagues IoT solutions and home networking - there are no industry-accepted frameworks to enable encryption on the LAN like we do on the real internet. There is no way to know that I'm connecting to my home router or NAS when I type in its address.
This is an area where we have kind of failed as an industry.
It's just that very specific bit in the middle, where you don't want to expose the internal hostnames but don't need top-tier security where having a private CA is worthwhile (assuming outbound internet connectivity to Let's Encrypt is allowed).
EDIT [0] https://community.letsencrypt.org/t/does-lets-encrypt-offer-...
From the article:
> It means your employees aren't constantly fighting browser warnings when trying to submit stuff internally.
If your employees get into the habit of ignoring certificate warnings, then you have much bigger problems than leaking internal domain names.
It's mostly because of them that DNS is still not reliable. Well, at least this article isn't against certificate transparency, just about how to avoid it.
You're essentially running a public CA at that point, and that isn't easy.
I would love for this to become as widely supported as wildcards so those who choose to use them could do so easily.
This bit me recently. I have a certificate for homelab.myname.com, and, as with any public-facing IP address, I get the expected brute-force SSH login attempts for users 'root', 'git', 'admin', etc...
But I was terrified (until I remembered about the public cert) to find attempts for users 'homelab' and 'myname' -- which, being my actual name, actually corresponds to a user.
It's obviously my fault for not thinking this through, and it's not a terrible issue, but thinking I was under a targeted attack was quite the scare!
Maybe an IP constraint that restricts certs to only be valid in private IP spaces (10.*, 192.168.1.*, etc)?
It's a reasonable mitigation for certain environments, and the leak does expose information that makes structuring attacks easier, but it's certainly not a hard wall of any sort. The main problem for most people is articulating the realistic threat models they are trying to address, and because that rarely resolves well (assuming the conversation is had at all), there is little rational pushback against "everything and the kitchen sink" approaches based on whatever blog the implementer last read.
Personally I tend to advocate assuming your attacker knows everything about you except specific protected secrets (keys, passphrases, unique physical objects) and working back from there, but that's a lot of effort for organizations where security is rarely anything but a headache for a subset of managers.
You'll see similar opinions about things like port-knocking puzzles and consumer ipv4 NAT, which provide almost zero security benefit but do greatly reduce the incidence of spurious noise in logs.
For example, you can register a certificate for local.yourcompany.com and point local.yourcompany.com to 127.0.0.1 to get HTTPS locally. The same could be done for internal network IPs.
It wouldn't work well with Let's Encrypt because their bot would just end up talking to itself in this scenario.
Of course you could also use my side project (expose.sh) to get a https url in one command.
https://gist.github.com/mojzu/b093d79e73e7aa302dde8e335945b2...
Which covers using step-ca with Caddy to get TLS certs via ACME for subdomains, and protecting internal services using client certificates/mTLS.
I then install Tailscale on the host which is running the Docker containers, and configure the firewall so that only other 100.* IP addresses can connect to ports 80/443/444. The combination of VPN+mTLS mitigates most of my worries about exposing internal subdomains on public DNS.
The bigger issue right now is this:
> under current BRs, a name constrained subordinate has to meet all the same requirements an unconstrained subordinate does, which means secured storage and audits
Basically, even a name constrained intermediate CA is subject to all the same regulatory requirements as a trusted root CA. From a regulatory compliance perspective it'd be pretty much equivalent to operating your own globally trusted root CA, with all the auditing and security requirements that go along with that. And if you ever screw up, Let's Encrypt, as the root CA your CA is chained to, would be held responsible for your mistakes as required by the current BRs.
Basically, it's not happening anytime soon without some serious changes to the Baseline Requirements and web PKI infrastructure.
Not w/r/t Chromium.
https://web.archive.org/web/20170611165205if_/https://bugs.c...
https://web.archive.org/web/20171204094735if_/https://bugs.c...
In tests I conducted with Chrome, the CN field could be omitted in self-signed server certs without any problems.
If so, then the decision is more like, whether to use a public or private certificate for an internal service.
That was a big debate in the CA/B Forum when CT was created; the current behavior is a deliberate choice on the part of the browser developers, which they will probably not want to revisit.
But, name constraints are enforced by "relying parties" -- HTTPS/TLS clients & servers that are validating certificates and authenticating remote peers. In practice, there's a risk that a broken/misconfigured relying party would trust a cert for google.com signed by an intermediate that's name constrained / only trusted to issue for `*.example.com`.
But practically I don't see a difference between a name constrained CA with a 90 day life and a wildcard cert with a 90 day life from the perspective of the requirements listed above. There are only benefits, because now you can scope down each service to a cert that is only valid for that service.
Specifically a TXT record for _acme-challenge has to exist for the requested hostname. Or a CNAME of the requested hostname pointing somewhere else that you control:
* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/
* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...
* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...
No A (or AAAA) records needed.
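Illustratively, this is all the CA needs to be able to resolve during a DNS-01 validation (the names below are placeholders):

dig +short _acme-challenge.homelab.example.com TXT
# with the alias mode linked above, that name is just a CNAME into a zone you're
# happy to automate updates for:
dig +short _acme-challenge.homelab.example.com CNAME
# -> _acme-challenge.acme-delegated.example.net.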
But getting back to your parent post, maybe we can see a nontrivial real-world list of a big network to make sure it’s leaking nothing of value?
You know, I feel like more people wouldn't have a problem with actually doing this if it weren't so challenging and full of sometimes unpleasant CLI commands. To me, comparing openssl and similar packages to something friendlier feels like comparing the UX of the tar vs docker CLIs, where the former is nigh unusable, as humorously explained here: https://xkcd.com/1168/
In comparison, have a look at Keystore Explorer: https://keystore-explorer.org/screenshots.html
Technically you can use it to run a CA, I guess, but in my experience it has mostly been invaluable when dealing with all sorts of Java/other keystores and certificates, as well as doing certain operations with them (e.g. importing a certificate/chain into a keystore, or maybe generating new ones, or even signing CSRs and whatnot).
Sure, you can't automate that easily, but for something that you do rarely (which may or may not fit your circumstances), not struggling with the text interface but rather having a rich graphical interface can be really nice, albeit that's probably a subjective opinion.
Edit: on an unrelated note, why don't we have more software that uses CLI commands internally that correspond to doing things in the GUI, but with the option to copy the CLI commands when necessary (say, the last/next queued command being visible in a status bar at the bottom)? E.g. hover over a "generate certificate" button, get a copyable full CLI command in the status bar.
Of course, maybe just using Let's Encrypt (and remembering to use their staging CA for testing) and just grokking DNS-01 is also a good idea, when possible. Or, you know, any other alternatives that one could come up with.
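For example, with certbot you can point test runs at staging before touching production (the domain is a placeholder):

certbot certonly --staging --manual --preferred-challenges dns -d test.example.com
certbot renew --dry-run    # --dry-run always uses the staging environment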
Lots of fun stuff is possible but yeah, it's definitely something you should consider. Let's Encrypt allows wildcard certs, from memory, so you should probably use one of those per subdomain.
We also have a hosted option[3] with a free tier that should work for individuals, homelabs, pre-production, and even small production environments. We've started building out a management UI there, and it does map to the CLI as you've described :).
[1] https://github.com/smallstep/certificates
In theory, a name-constrained intermediate for `*.example.com` has no more authority and poses no greater risk than a wildcard leaf certificate for `*.example.com`. In both cases the private key can be used to authenticate as any subdomain of `example.com`.
But, name constraints are verified by relying parties (the clients and servers that are actually authenticating remote peers using certificates). It's hard to be certain that everything has implemented name constraints properly. This is, ostensibly and as far as I know, the reason CA/Browser forum hasn't allowed name constrained intermediates.
At some point it probably makes sense to just pull the bandaid off.
But SANs are just names. (That's even what it stands for: "Subject Alternative Name". The word "alternative" is there because this is X.509, which is part of the X.500 directory system, in which names are part of the X.500 hierarchy, while these names come from the Internet's naming systems - DNS and IP addresses - which could be seen as an alternative to that hierarchy.)
So in changing both the names, and the keys, you're just getting a completely different certificate, maybe the pricing is different for you than purchasing more certificates, but these certificates aren't in any technical sense related to the other certificate.
It's a problem to use nomenclature that's completely wrong in a technical discussion like this. If you call the even numbers "prime" you shouldn't be surprised at the reaction when you claim "half the natural numbers are prime" in a thread about number theory.
[Edited to fix eTLD to eTLD+1; obviously we can't have people issuing wildcards directly inside an eTLD]
Apart from that, thank you so much for what you've done and provided for the open-source community. The smallstep toolkit is truly fantastic.
Wildcards hide it somewhat, but DigiCert charges per subdomain now, and every user thinks they need their own subdomain for some reason. So LE it is.
The mnemonic I use:
x - extract
z - ze
v - vucking
f - files
Running a CA that issues certificates isn't that hard. There are off-the-shelf solutions and wrappers around openssl as well.
Running an RA is hard. That's the part that has to check who is asking for a certificate and whether they're authorized to get one and what the certificate restrictions etc are.
Then there's the infrastructure issue for the TLS users (clients & servers), which need to have the internally trusted root of the CA installed and need the RA client software to automagically request and install the necessary leaf and chain certificates.
AWS has private CAs for $400/month, but if you want a root and then some signing intermediates, that's $400 for each (effectively the PCA is just a key stored in an AWS HSM and an API for issuing certificates).
A real HSM will cost roughly a year of that service, but the management of that hardware and protecting it and all the rigmarole around it is very expensive.
Every mobile phone and most desktops have a TPM that could be used for this, but a standard API to access it isn't widely available.
zip my-archive.zip my-directory
unzip my-archive.zip
(disclaimer: zip/unzip won't be a reasonable alternative for all of the use cases of tar)

Good software doesn't beg that much explanation. And when it does, then either "--help" or just the command with no parameters, e.g. "zip" or "unzip", should provide what's necessary. I don't believe that tar does that; instead it overwhelms the user, and "tar --usage" is overwhelming too.
Here's another comment of mine which serves a precise example of why tar is problematic in my eyes: https://news.ycombinator.com/item?id=29339018
I don't feel like it follows the UNIX philosophy that well either, though I won't argue that it should be much smaller (because it is powerful, although someone might argue that) - rather that its commands should be grouped better.
That said, maybe things would be more tolerable if we used the full parameters instead of memorizing silly mnemonics. Here's an excerpt from the linked comment:
$ tar --verbose --create --gzip --file=new-archive.tar.gz ./files-i-want-to-archive
The infra itself, keeping up with compliance and root program changes (which happen with more frequency now!), CT logging, running revocation services (not easy at scale). Plus there are things to consider like rotation of the NC'd CA. You'd have to rotate at least once a year, perhaps more often given domain validation validity periods. You'd also likely need to have the chain ('chain' used loosely, we know it's not really a linear chain) be four deep, like root -> CA -> your NC'd CA -> leaf, 'cos the root should be offline, and unless you're only doing these in low volume I assume you'd want to automate issuance rather than gather your quorum of folk to sign from the offline roots. That might not be an issue for many, but it certainly is for some.
(Full disclosure: I've worked for a CA for almost two decades and have pretty intimate knowledge of this area, sadly.)
Then, our standard Ansible playbooks set up on each node a weekly systemd timer which downloads the needed certificates and restarts or reloads the services.
PKI is fairly awful and bad for internal anything, unless you have a full IT team and infrastructure.
A much simpler solution would be URLs with embedded public keys, with an optional discover-and-pair mechanism.
Browsers already have managed profiles. Just set them up with a trusted set of "paired" servers and labels, push the configs with Ansible (it's just like an old-school hosts file!), and don't let them pair with anything new.
If you have a small company of people you trust(probably a bad plan!), or are a home user, just use discovery. No more downloading an app to set up a smart device.
The protocol as I sketched it out (and actually prototyped a version of) provides some extra layers of security: you can't connect unless you already know the URL, or discovery is on and you can see it on the same LAN.
We could finally stop talking to our routers and printers via plaintext on the LAN and encrypt everywhere.
We already use exactly this kind of scheme to secure our keyboards and mice, with discovery in open air not even requiring being on the same LAN.
We type our sensitive info into Google docs shared with an "anyone with this URL" feature.
It seems we already trust opaque random URLs and pairing quite a bit. So why not trust them more than the terrible plaintext LAN services we use now?
I'm not saying no trusted parties is the end goal (though Tor's onion or the GNU Name System work in this area), but maybe giving dozens of corporations/institutes the power to impersonate your server (from a client UX perspective) isn't the best we can do.
To the extent DNS is an attractive attack vector, DNSSEC doesn't actually do much to mitigate those attacks. Most DNS corruption doesn't come from the on-the-wire cache corruption attacks DNSSEC was designed to address, but from attacks directly on registrars. There's nothing DNSSEC can do to mitigate those attacks, but not having CAs tied directly to DNS does mitigate them: it means the resultant misissued certificates are CT-logged.
If there was a huge difference in security from switching to DANE, this would be a different story. But in practice, the differences are marginal, and sometimes they're in the wrong direction.
Two really big things happened in the last decade that influenced the calculus here:
1. WebPKI certs are now reliably available free of charge, because of LetsEncrypt and the market pressures it created.
2. Chrome and Mozilla were unexpectedly successful at cleaning up the WebPKI, to the extent that some of the largest CAs were summarily executed for misissuance. That's not something people would have predicted in 2008! But WebPKI governance is now on its toes, in a way that DNS governance is unlikely ever to be.
(Cards on the table: I'd be a vocal opponent of DANE even if 1 & 2 weren't the case.)
† Not only is there no CT for DANE, but there's unlikely ever to be any --- CT was rolled out in the WebPKI under threat of delisting from Mozilla and Chrome's root cert programs, and that's not a threat you can make of DNS TLD operators.
Since DNS is public data that anyone can archive, isn't it easy to build a CT log from that for a list of domains? I mean, regularly probing for DANE records on your domains can be done fairly easily in a cron job. I'm personally very skeptical about trusting CT logs from CAs in the first place and would much rather welcome a publicly-auditable/reproducible system.
> (Cards on the table: I'd be a vocal opponent of DANE even if 1 & 2 weren't the case.)
Why, and what's the alternative? Is your personal recommendation to use a specific CA you trust over all the others and set up CAA records on your domain? Otherwise I believe DNS remains a single point of failure, and hijacking it would make it easy to obtain a certificate for your server from pretty much any CA, so I don't see any security benefits. I do see the downside that any CA can be compelled (by legal or physical threat) to produce a trusted certificate for a certain domain - which of course could be said of TLD operators as well - but I believe reducing the number of critical operators your security relies on is always a good thing.
If you have a link to a more detailed read on your thoughts on this topic, I'd be happy to read some lengthier arguments.
You can't replicate that clientside by monitoring domains. A malicious authority server can feed different data selectively.
Could you replicate this system in the DNS? Well, it'd be impossible to do it with DNSSEC writ large (because there's no way to deliver SCTs to DNS clients), but you could do it with extensions (that don't exist) to DANE itself, and tie it into the TLS protocol. But that system would require the cooperation of all the TLD operators, and they have no incentive to comply --- just like the commercial CAs didn't, until Mozilla threatened to remove them from the root certificate program unless they did. But Mozilla can't threaten to remove .COM from the DNS.
So, no, the situations aren't comparable, even if you stipulate that DANE advocates could theoretically design something.
I'm hesitant to answer the second question you pose at length, because you have some misconceptions about how CT works, and so we're not on the same page about the level of transparency that exists today.
Yes, it is. In most cases Confidentiality > Integrity > Availability. Systems should fail safe.
There are some scenarios such as medical devices where integrity or availability trump confidentiality. But most information systems should favor going offline to prevent a breach of confidentiality or data integrity.