
511 points moonsword | 1 comment
thrdbndndn ◴[] No.42168908[source]
Two questions:

1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?

2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?

Bonus question: my Android phone would ask for my passcode (can't unlock with fingerprint or face) if it thinks it might be left unattended (a few hours without moving etc.), just like after rebooting. Is it different from "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from iPhone's to begin with).

replies(7): >>42168981 #>>42169169 #>>42169203 #>>42169266 #>>42169304 #>>42170569 #>>42171458 #
1. oneplane ◴[] No.42169203[source]
It is very different, as cryptographic systems can only assure a secure state when there is a known root-of-trust path to the state the device is in.

The big issue with most platforms out there (x86, multi-vendor, IBVs, etc.) is that you can't actually trust what your partners deliver. So the guarantees around what's in your TEE/SGX are a lot messier than when you're Apple and you have the SoC, SEP, iBoot stages, and kernel all measured and assured to levels only a vertically integrated manufacturer could know.

Most devices/companies/bundles just assume it kinda sucks and give up (TCG Opal, TPM, BitLocker: looking at you!) and make most actually secure methods optional so the bottom line doesn't get hit.

That means (for Android phones) your baseband and application processor, boot ROM, and bootloader might all be from different vendors with different levels of quality and maturity. And for most product lifecycles and brand reputation/trust/confidence, the device mostly just needs to not get breached in the first year it's on the market and look somewhat good on the surface for the remaining 1 to 2 years while it's supported.

Google is of course trying hard to make the ecosystem hardened, secure, and maintainable (it has become feasible to get a lot of patches in without having to wait for manufacturers or telcos for extended periods of time), including some standards for FDE and in-AOSP security options. But in almost all retail cases it is ultimately up to the individual manufacturer of the SoC and of the integrated device to make it actually secure, and most don't, since there is not a lot of ROI for them. Even Intel's SGX is somewhat of a clown show. Samsung does try to implement their own; I think KNOX is the brand name for both the software side and the hardware side, but I don't remember if that was strictly Exynos-only.

The supply chain for UEFI Secure Boot has similar problems, especially with the PKI and the rather large supply-chain attack surface. But even if that weren't such an issue, we still get "TEST BIOS DO NOT USE" firmware on production mainboards in retail. Security (and cryptography) is hard.

As for the difference between BFU/AFU etc., think of it like this: some cryptographic material is no longer available to the live OS. Instead of hoping it gets cleared from all memory, it is a lot safer to assume it might be messed with by an attacker, drop all keys, and reboot the device to a known disabled state. That way, without a user present, the SEP will not decrypt anything (and it would take a SEPROM exploit to start breaking into the thing; nothing the OS could do about it, nor someone attacking the OS).

There is a compartmentalisation where some keys and keybags are dropped when the device is locked, hard locked, or BFU locked; the main difference between these states is how much is still operational. It would suck if your phone stopped working as soon as you locked it (no more notifications, background tasks like email and messaging, no more music, etc.).

On the other hand, it might be fine if everything that was running at the time of the lock keeps running, but no new crypto is allowed during the locked period. That way everything keeps working, but if an attacker tried to access the container of an app that isn't open, it wouldn't work: not because of some permission check, but because the keys aren't available and the means to get the keys is cryptographically locked.
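To make the keybag idea concrete, here is a toy Python sketch. The class names and eviction rules are illustrative (loosely inspired by Apple's Data Protection classes), not Apple's actual implementation: different key classes survive different lock states, so locking evicts only the most restrictive keys while a reboot to BFU drops almost everything.

```python
from enum import Enum, auto

class KeyClass(Enum):
    # Illustrative names, not Apple's real class identifiers.
    COMPLETE = auto()          # evicted as soon as the device locks
    UNTIL_FIRST_AUTH = auto()  # survives locking, dropped on reboot (BFU)
    ALWAYS = auto()            # available even before first unlock

class Keybag:
    def __init__(self):
        self._keys = {}  # key id -> (key class, key material)

    def add(self, key_id, key_class, material):
        self._keys[key_id] = (key_class, material)

    def get(self, key_id):
        entry = self._keys.get(key_id)
        return entry[1] if entry else None

    def on_lock(self):
        # Locking evicts only the most restrictive class; background
        # tasks holding UNTIL_FIRST_AUTH keys keep working.
        self._evict({KeyClass.COMPLETE})

    def on_reboot(self):
        # Rebooting into BFU drops everything except ALWAYS-class keys.
        self._evict({KeyClass.COMPLETE, KeyClass.UNTIL_FIRST_AUTH})

    def _evict(self, classes):
        for key_id, (kc, _) in list(self._keys.items()):
            if kc in classes:
                del self._keys[key_id]
```

This captures why a locked phone can still play music or receive mail (those keys are still resident) while a freshly rebooted one can do almost nothing until the passcode is entered.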

That is where the main difference lies with more modern security: keys (or mostly KEKs, key encryption keys) are a pretty strong guarantee that someone can only perform an action if they hold the keys to do it. There are no permissions to bypass, no logic bugs to exploit, no 'service mode' that sidesteps security. The bugs that remain would all be HSM-type bugs, but SEP edition (if that makes sense).
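A toy sketch of the KEK idea, with the caveat that the "wrapping" here is a deliberately fake XOR-with-a-hash-stream stand-in (real systems use something like AES key wrap, RFC 3394): the container only ever stores the wrapped file key, so once the KEK is evicted on lock, no amount of permission bypassing recovers the data key.

```python
import hashlib
import secrets

def derive_stream(kek: bytes, n: int) -> bytes:
    # Toy keystream derived from the KEK. NOT real cryptography;
    # stands in for a proper key-wrap primitive.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(kek + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def wrap_key(kek: bytes, file_key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(file_key, derive_stream(kek, len(file_key))))

unwrap_key = wrap_key  # XOR is its own inverse

class AppContainer:
    """Hypothetical per-app container: persists only the wrapped key."""
    def __init__(self, kek: bytes):
        file_key = secrets.token_bytes(32)
        self.wrapped_key = wrap_key(kek, file_key)  # what hits disk
        # file_key itself is never stored; it must be unwrapped on demand.

    def open(self, kek: bytes) -> bytes:
        # Only works while the KEK is resident; after lock/BFU it is gone.
        return unwrap_key(kek, self.wrapped_key)
```

With the wrong (or evicted) KEK, `open` just yields garbage bytes: the failure is cryptographic, not a denied permission check.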

Apple has a flowchart showing the possible states a device and its cryptographic systems can be in, and how the assurance for those states works. I don't have it bookmarked, but IIRC it was presented at Black Hat a year or so ago, and it is published in the Apple Platform Security guide.