Entering a password on boot isn't even that much work
It is on Fedora. I want the latest packages and I want to install them with dnf offline upgrade, but now I need to put in the password twice: once for the updates and again for the next boot. If it is a server, I don't want to keep a monitor attached to it just to enter the password. I want the computer to just boot.
There has to be a better way.
Entering a password can be a lot of work if you use a strong password (and if you don't, why bother with a password?). Typos take a toll too, because of all the retry delays involved.
Probably Clevis and Tang: network-bound disk decryption that can only unlock if enough of your servers are online. https://github.com/latchset/clevis https://github.com/latchset/tang
Or network decryption (SSH into initrd). https://github.com/gsauthof/dracut-sshd
Also AFAIK there is no standard way to guess the new PCRs on reboot so you can't pre-update them before rebooting. So you either need to unlock manually or use a network decryption like dracut-sshd.
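For reference, the Tang binding mentioned above is roughly a one-liner with clevis; the device path and URLs below are placeholders, and the sss pin is what gives the "only decrypts if enough servers are online" behaviour:

```
# bind a LUKS volume to a single Tang server (unlock only works while it is reachable)
clevis luks bind -d /dev/nvme0n1p3 tang '{"url": "http://tang1.example.com"}'

# or require any 2 of 3 Tang servers via Shamir secret sharing (sss)
clevis luks bind -d /dev/nvme0n1p3 sss '{"t": 2, "pins": {"tang": [
  {"url": "http://tang1.example.com"},
  {"url": "http://tang2.example.com"},
  {"url": "http://tang3.example.com"}]}}'
```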
As soon as a volume is decrypted, the initrd writes a measurement of the `volume-key` to PCR 15, so any executables that run later can no longer unseal data that the TPM had bound to the earlier PCR state.
Store the root hash of the dm-verity-formatted rootfs in the PCR. If a malicious partition is presented to the initrd, its root hash will not match the trusted one stored in the TPM.
Or, if you need a writable rootfs, use fs-verity and store the signature of init in the PCR. The trusted init signature won't match the signature of a malicious init.
LUKS for encryption and verity for integrity/verification.
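To make the dm-verity part concrete, here's a hedged sketch with veritysetup (device names are placeholders); the root hash printed by `format` is the value you would measure into, or seal against, the TPM:

```
# build the hash tree over the read-only root and print the root hash
veritysetup format /dev/vg0/root /dev/vg0/root-hashes

# at boot, open the verified device; any block that doesn't match the tree makes reads fail
veritysetup open /dev/vg0/root verified-root /dev/vg0/root-hashes <root-hash-from-format>
```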
Consciously encrypting with a password implies an understanding of the risk of permanent loss. Leaving it unencrypted implies an understanding of the risk of disclosure. Having your drive silently encrypted feels like the worst of both worlds. "I never encrypted or locked it, what do you mean my data is encrypted and gone forever!?"
I'm using it on my home server that I'm using for self-hosting. This way, if it's stolen, the thief won't be able to easily get to my data. At the same time, I don't have to physically enter the password if my server reboots.
Android Verified Boot extends the SoC's hardware-based secure boot to the kernel and rootfs. The root of trust is fused into the SoC, and second-stage bootloaders are signed. The second-stage bootloader (e.g. U-Boot or UEFI/edk2) contains a public key that is used to verify a signed AVB partition. This signed partition contains the signed dm-verity metadata for the rootfs and a signed hash of the kernel (+initrd). AVB validates the kernel hash against the expected hash and loads the kernel if it matches, providing the trusted rootfs verity hash to the kernel via the cmdline. Then, when the kernel reads the rootfs, dm-verity calculates hashes and checks that they match the expected ones. If not, the system reboots and the AVB metadata is flagged to indicate tampering/failure of the rootfs.
Edit to add: if the SoC supports hardware-based full-disk encryption, the filesystem can be encrypted as well, with the key stored in Android's secure keystore. Android, though, has moved away from FDE in favor of file-based encryption.
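For a rough idea of how those pieces get glued together, here is a hedged sketch with AOSP's avbtool (arguments abbreviated; sizes and keys are placeholders, see the AVB docs for real usage):

```
# hash-protect the boot image (kernel+initrd) and hashtree-protect (dm-verity) the system image
avbtool add_hash_footer --image boot.img --partition_name boot --partition_size 67108864
avbtool add_hashtree_footer --image system.img --partition_name system --partition_size 2147483648

# tie both descriptors into a signed vbmeta image, which the bootloader
# verifies against the public key anchored in the fused root of trust
avbtool make_vbmeta_image \
  --include_descriptors_from_image boot.img \
  --include_descriptors_from_image system.img \
  --algorithm SHA256_RSA4096 --key rsa4096.pem \
  --output vbmeta.img
```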
Using even a weak PIN/password will allow you to both "pair" and "secure", assuming the TPM is configured to destroy the key after multiple failed attempts.
You should also add a strong (high entropy) LUKS password to allow data recovery in case the TPM chip is lost or the keys are destroyed.
Note that the bits of the encryption keys are present somewhere in the TPM and could in theory be extracted with an exploit or with scanning probe microscopy perhaps.
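As a concrete sketch of the PIN-plus-recovery-key setup described above, with systemd-cryptenroll (the device path is an example; the TPM's anti-hammering/lockout behaviour is configured separately at the TPM level):

```
# bind the LUKS volume to the TPM, requiring a PIN on top of the PCR policy
systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p3

# enroll a high-entropy recovery key as a fallback in case the TPM dies or gets cleared
systemd-cryptenroll --recovery-key /dev/nvme0n1p3
```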
Use a randomly generated key and retrieve it from a USB drive at boot (it happens automagically). The USB drive contains everything needed to boot, which gives you full plausible deniability without it. It means literally everything you need to boot up is on the USB drive, and if you want, you can even use two separate USB drives.
This is for computers you have physical access to, of course. You will need to carry the USB drive if it is a laptop, but you choose: do you want to enter a password (which by itself gives you no plausible deniability, BTW), or do you want plausible deniability and/or to avoid entering a password? And while we are at it, laptops (and even desktops) today have SSDs, and encryption and plausible deniability work differently on an SSD, but again, you choose. Right tool for the job.
https://wiki.archlinux.org/title/Dm-crypt/Encrypting_an_enti...
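A hedged sketch of the crypttab side of that setup (systemd-style syntax; UUIDs are placeholders and details vary by distro/initramfs, see crypttab(5)):

```
# /etc/crypttab: read the key file from the USB stick's filesystem;
# if the stick isn't present within the timeout, fall back to asking for a passphrase
cryptroot  UUID=aaaaaaaa-1111-2222-3333-444444444444  /keys/root.key:UUID=BBBB-CCCC  luks,keyfile-timeout=10s
```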
It's the same technique GRUB uses to forward the FDE password to the initramfs after its own initial decryption (to read the kernel and initramfs). This works for rebooting remote servers with FDE, without needing a VNC or an early-boot sshd.
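A minimal sketch of that keyfile trick (paths and device are examples; the exact way the keyfile gets bundled into the initramfs differs per distro):

```
# generate a keyfile, lock down its permissions, and add it as an extra LUKS key
dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
chmod 600 /crypto_keyfile.bin
cryptsetup luksAddKey /dev/nvme0n1p3 /crypto_keyfile.bin
# the keyfile is then referenced from /etc/crypttab and bundled into the initramfs,
# which itself lives on the /boot that GRUB already decrypted with your passphrase
```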
Auto-update should be able to include the kernel, initrd, and GRUB cmdline from the running system. I have no idea what's holding this back, since evidently the code to do exactly that already exists somewhere.
This feels like one of those half-security measures that make you feel safe but are mostly marketing, making you believe *this* device can be both secure and easy to use.
So if you use this PCR state machine, the problem is that the step before initrd doesn't require the correct password to move the PCR forward? It accepts any password that decrypts the next stage, which didn't have its integrity verified here.
Seems there are multiple ways of solving this, and adding integrity checks is only one. It could also let the TPM verify the disk decryption password (when it's needed.)
You can use it with Systemd.
https://github.com/tpm2-software/tpm2-tools/blob/master/man/...
nope! the trick the article is describing works even if the kernel and initrd are measured. it uses the same kernel, initrd, and command line.
the reason this trick works is that initrds usually fall back to password unlock if the key from the tpm doesn't work. so the hack replaces the encrypted volume, not the kernel, with a compromised one. that is:
1. (temporarily) replace encrypted volume with our own, encrypted with a known password.
2. boot the device.
3. the automated tpm unlock fails, prompting for a password.
4. type in our password. now we're in, using the original kernel and initrd, but it's our special filesystem, not the one we're trying to decrypt.
5. ask the tpm again for the key. since we're still using the original kernel, initrd, and command line, we should now get the key to unlock the original encrypted volume.
the way to fix this is to somehow also measure the encrypted volume itself. the article points to suggestions of deriving a value from the encryption key.
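For illustration, the PCR-extend mechanism itself is a one-liner with tpm2-tools. The digest below is a placeholder; a real scheme derives it from the LUKS volume key, which an attacker's substitute volume cannot reproduce:

```
# after unlocking, extend PCR 15 with a value tied to the opened volume,
# so secrets sealed against the pre-unlock PCR 15 state become unreadable from then on
tpm2_pcrextend 15:sha256="$VOLUME_KEY_DIGEST"   # placeholder digest

# confirm that PCR 15 has moved on
tpm2_pcrread sha256:15
```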
No, that's not an effective mitigation. The signed kernel+initrd would still boot into the impersonated root.
> however it means whenever you update you need to unlock manually.

On Red Hat-based distros this can be done with PCRs 8 and 9, though IIRC this may change on other distros.

> Also AFAIK there is no standard way to guess the new PCRs on reboot so you can't pre-update them before rebooting. So you either need to unlock manually or use a network decryption like dracut-sshd.
With some logic to update the values on kernel updates and re-seal the secret this can be handled transparently. That's the design with sdbootutil in openSUSE (https://en.opensuse.org/Systemd-fde, https://github.com/openSUSE/sdbootutil).
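For reference, the underlying enrollment step looks roughly like this with systemd-cryptenroll (device and PCR choice are placeholders; note that a plain call like this seals against the current PCR values, so the "update the values and re-seal" logic is what tooling like sdbootutil adds on top):

```
# replace the existing TPM2 binding with one sealed against the selected PCRs
systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7+8+9 /dev/nvme0n1p3
```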
There is precisely zero chance that the relevant IT security goons would allow any kind of remote KVM/LTE connection.
The design intent is basically:
1. The TPM is very sensitive, and errs on the side of not unlocking your disk.
Booting into recovery mode to fix a driver? Reinstalled your distro? Added a MOK so you can install the nvidia drivers? Toggled certain options in your BIOS? The expected-computer-state checksums are wrong, better not unlock the disk as it could be an attack.
2. When this happens, you key in the password instead.
You can't rely on the TPM to verify the manually entered password, as the intent of the manually entered password is to recover when the TPM is in a broken state.
So does it mean you do not set up a password/passphrase for your user account?
I personally replace the firmware certificates (PK, KEK, db, dbx, …) with my own and sign every kernel/initrd update. I also unlock my disks with a passphrase anyway, but I'm on the fence as to whether it's more secure than a TPM.
Yes, in theory TPM key extraction is feasible (and even easy if the TPM is a chip separate from your CPU: https://pulsesecurity.co.nz/articles/TPM-sniffing ), but is it harder than filming/watching you type the passphrase, or installing a discreet keylogger?
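For reference, the sign-it-yourself part of that workflow is roughly this with sbsigntools (key/cert names are placeholders matching whatever you enrolled into db):

```
# sign the kernel with your own db key and verify the signature
sbsign --key db.key --cert db.crt --output /boot/vmlinuz-linux.signed /boot/vmlinuz-linux
sbverify --cert db.crt /boot/vmlinuz-linux.signed
```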
Typically I use offline upgrade if I mean to power off, but otherwise I just run `sudo dnf update -y && sudo systemctl reboot` in a terminal if I want a quick update & reboot.
On another laptop I am using silverblue (well bluefin) and the atomic upgrades solve the issue completely.
I recently changed the motherboard on my laptop. I had the BitLocker key; if not, I was told I'd have to reinstall Windows all over again.
Even with the key, one must decrypt and re-encrypt.
If you believe that those Secure Boot private keys were leaked, why not also believe that the Linux kernel signing keys were also leaked and that you are downloading a backdoored kernel?
From their perspective, "Secure Boot" has the word "Secure" right in the name. And they've looked up details about the TPM - Microsoft says the TPM avoids systems being tampered with, and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned devices.
If you don't know the intricacies involved, that stuff all sounds great! So they put a line into the corporate IT policy that TPM use is mandatory.
> 4. type in our password.
In a serious, security-conscious setup, any unexpected boot password prompt should be a big red flag to investigate.
That's not true: the unlock key will be regenerated, but the disk contents will not be re-encrypted, because they're encrypted with another, immutable key.
So password (or pin) encrypts passphrase, passphrase encrypts LUKS and goes to TPM, then you need to reverse the process for your init script (request password, decrypt passphrase, exchange with TPM to decrypt LUKS), but it depends on your appetite for planning that out.
Like: Password ---> PBKDF ---> PIN
And then Password XOR (Key from TPM) -> LUKS
But I guess this kind of logic is not for a boot script, but for tools like systemd-cryptenroll.
Any change the untrusted local staff could make to the server, they could also make to the KVM machine (e.g. turn it into a keylogger).
Now you have the same problem but with a smaller computer.
You cannot turn untrusted systems into trusted systems by adding more untrusted systems.
Of course you cannot unseal the secret from the TPM anymore.
But my work computer requires a PIN to boot and a password to log in that only my YubiKeys know (I bind the static password to the long press). Different policies for different contexts...
Disclosure: I am a co-author of Mandos.
The fact that the initramfs is not signed/verified on any desktop Linux distro means Secure Boot is completely pointless right now on Linux, which is very disappointing.
I know Fedora has been toying with shipping prebuilt initrds, but that raises problems with things like Nvidia, where you need the driver to be in the initramfs to get a proper boot screen. There are also UKIs, which put the kernel + initramfs in the same EFI binary (and thus signed) for booting under Secure Boot, but they can become too big for the small EFI partitions computers ship with.
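For reference, building such a UKI is roughly this with systemd's ukify (paths, cmdline, and keys are placeholders; see ukify(1)):

```
# bundle kernel + initrd + cmdline into one signed EFI binary
ukify build \
  --linux=/boot/vmlinuz-linux \
  --initrd=/boot/initramfs-linux.img \
  --cmdline="root=UUID=<uuid> rw" \
  --secureboot-private-key=db.key \
  --secureboot-certificate=db.crt \
  --output=/efi/EFI/Linux/linux.efi
```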
Disk encryption (AES etc.) is symmetric, so still only brute force would work, and that can be made infeasible with a long enough key.
Also from the diagram it looks like the secret key is stored unencrypted on the server, or do I read it wrong?
Granted, you could disable that, but they have thought of that too: you can only disable it from a recovery OS that is signed the same way. But disabling it there doesn't disable it for the recovery OS itself, so you can't evil-maid the recovery OS later to make it appear as if it is still enabled.
It is not. There are other, very real and very important problems with that fact, and reasons why it should be fixed, but this is not it. The point of Secure Boot is to protect the firmware from userspace. It works very well for that purpose, as evidenced by the fact that exploits to bypass it continuously have to be found.
Only insofar as everybody that I’ve asked over the years has failed to find anything wrong with it. But no formal verification has been done.
> In particular, is it safe against replay attacks by actors listening in on the network traffic?
Yes, it is safe, since we make sure to only use TLS with PFS.
> Also from the diagram it looks like the secret key is stored unencrypted on the server, or do I read it wrong?
No, the secret is stored encrypted on the server, encrypted with a key which only the client ever has.
For more information, see the introduction and FAQ: <https://www.recompile.se/mandos/man/intro.8mandos>
Just typing a passphrase at boot seems like a pretty decent compromise. I've done it for years and it's never caused a problem.
I'm convinced that "measure the kernel" into "measure the initrd" into "show login screen" is all it should take.
Then I read about the implementation details[0], and it's a complex bloody mess with an unending chain of brittle steps and edge cases, begging for a mistake to be made and exploited. So here we are.
[0]: https://0pointer.de/blog/brave-new-trusted-boot-world.html
But all the “passwordless” schemes I’ve seen support at least an additional “master key” which you can type in.
So if you’re ok with the security tradeoffs of passwordless tpm, it’s only an added convenience on top of your approach.
I had switched to a new AM4 mobo a few years back and decided to spring for a pluggable TPM chip (since the CPU I have doesn't come with a TPM onboard). Plugged it in, set everything up pretty seamlessly in Windows, no fuss, no muss, boot drive's encrypted transparently. The lack of a password was a bit jarring at first, but it's a gaming PC, so if things go pear-shaped it's not the end of the world.
Fast forward six months and my PC suddenly refuses to boot; turns out the pluggable TPM thing was defective and stopped working (without any warning that got surfaced to me).
It was just my boot drive, and reinstalling windows isn't a huge hassle, but it definitely cemented my mixed feelings about passwordless FDE. Had that been the drive I use for my photo library, or my software projects, or work-related documents (tax slips, employment contracts, whatever), that would've been devastating.
It's actually made me rethink the strategy I use for my laptop's backups, and I think I'm in a better place about that now.
I believe when using TPM with LUKS the TPM just decrypts the master key and that is handed back to the OS and used in software. So the primary key does end up in RAM.
This hash-the-next-link method is only ever as strong as its weakest link.
Entering a password on boot is a lot of work, because I need to VPN in and run a Java Web Start KVM application (serial over IPMI would work better, but it doesn't work well on the hardware I have).
Encrypted disks are a requirement because I don't trust the facility to wipe disks properly. But I assume I would be able to clear the TPM (if present) when I return the machine. And I could store a recovery key somewhere I think is safe, in case of hardware issues (although, last time I had hardware issues, I simply restored from backup to new-to-me disks).
Couldn't you wipe the disks yourself?
Or are you thinking of cases where the disk breaks, gets replaced, and the removed disk does not get properly destroyed?
I am the author of one of the older guides https://blastrock.github.io/fde-tpm-sb.html .
I was wondering about the solution you propose which seems a bit complicated to me. Here's my idea, please tell me if I'm completely wrong here.
What if I put a file on the root filesystem with some random content (say 32 bytes), let's name it /prehash. I hash this file (sha256, blake2, whatever). Then, in the signed initrd, just after mounting the filesystem, I assert that hash(/prehash) == expected_hash or crash the system otherwise. Do you think it would be enough to fix the issue?
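For concreteness, a minimal sketch of that check as it might run inside the signed initrd, just after mounting the root filesystem (paths and the mount point are assumptions):

```
# the expected hash is baked into the signed (and therefore measured) initrd
expected="<sha256 of /prehash, baked into the initrd>"
actual=$(sha256sum /sysroot/prehash | cut -d' ' -f1)

if [ "$actual" != "$expected" ]; then
    echo "rootfs verification failed, refusing to continue" >&2
    exit 1   # fail closed instead of handing control to an untrusted root
fi
```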
Each decryption is equally valid as long as the key has the same size as the data. What happens, in practice, is that the key is much smaller than the data. Take a look at your filesystem, it should have hundreds or thousands of bytes of fixed information (known plaintext), or an equivalent amount of verifiable information (the filesystem structure has to make sense, and the checksums must match). That is: for a large enough filesystem (where "large enough" is probably on the order of a small floppy disk), decrypting with the wrong key will result in unrecoverable garbage which does not make sense as a filesystem.
To give an illustration: suppose all filesystems have to start with the four bytes "ABCD", and the key has 256 bits (a very common key size). If you choose a key randomly to decrypt a given cyphertext, there's only one chance in 2^32 that the decryption starts with ABCD, and if it doesn't, you know it's the wrong key. Now suppose the next four bytes have to be "EFGH", that means only one in 2^64 keys can decrypt to something which appears to be valid. It's easy to see that, once you add enough fixed bytes (or even bits), only one key, the correct one, will decrypt to something which appears to be valid.
I shut it down every day, so type in the password every day too. Short of a concussion, I'm not going to get locked out.
Ex: the first server I had failed, and they ended up replacing it with a different server with similar specs, but the drives weren't moved. In this case, the failure was gradual (it kept resetting by itself), and as part of debugging it I wiped the drives and installed a new OS, but towards the end the amount of time between resets was very short, and I wouldn't have had a chance to wipe it if I had started later.
Yes, this isn't great service, but it's personal hosting and it's cheap and I get a whole (very old) machine.
Does the default configuration not somehow tangle a user-entered password to authentication against the TPM?
That's still not perfect (i.e. how do you make PIN/password entry non-keyloggable), but anything else, in particular extending the trusted computing base to the entire kernel and the hardware it runs on and hoping that they will both be bug-free and impossible to impersonate, seems like a bad idea.
The TPM is also in a much better position to properly velocity check PIN/password entries than the OS.
If someone steals the NAS, how easily can they get to the data? Assuming volumes are encrypted, but they are automatically mounted on boot?
How to ensure the data is safe in case of theft.