Harder to attack, sure, but no outside validation. Apple's not saying "we can't access your data," just "we're making it way harder for bad guys (and rogue employees) to get at it."
I think having a description of Apple's threat model would help.
I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.
"Explain it like I'm a lowly web dev"
Key extraction is difficult but not impossible.
Refer to the never-ending clown show that is Intel's SGX enclave for examples of this.
https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
But essentially it is trying to get to the end result of "if someone commandeers the building with the servers, they still can't compromise the data chain even with physical access."
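Roughly what that looks like from the client's side, as a sketch rather than Apple's actual protocol (the `NodeAttestation` type, the pinned root key, and the measurement set here are all made up for illustration): the device refuses to encrypt a request to any server key that hasn't been signed by a root it already trusts, for a software image it recognises, so a machine someone has physically taken over with its own keys never receives anything it can decrypt.

```swift
import CryptoKit
import Foundation

// Hypothetical attestation a compute node presents before the client sends it anything.
struct NodeAttestation {
    let nodeKey: Curve25519.KeyAgreement.PublicKey  // session key the node offers
    let measurement: Data                            // hash of the software image the node claims to run
    let signature: Data                              // root-of-trust signature over (nodeKey || measurement)
}

// Only encrypt to a key the pinned root has vouched for, and only for a measurement
// the client recognises; anything else is rejected before any data leaves the device.
func sealRequest(_ plaintext: Data,
                 to attestation: NodeAttestation,
                 pinnedRoot: Curve25519.Signing.PublicKey,
                 trustedMeasurements: Set<Data>) throws -> Data {
    let signedPayload = attestation.nodeKey.rawRepresentation + attestation.measurement
    guard pinnedRoot.isValidSignature(attestation.signature, for: signedPayload),
          trustedMeasurements.contains(attestation.measurement) else {
        throw CryptoKitError.authenticationFailure
    }
    // Ephemeral ECDH so only the holder of the attested node key can derive the decryption key.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let secret = try ephemeral.sharedSecretFromKeyAgreement(with: attestation.nodeKey)
    let key = secret.hkdfDerivedSymmetricKey(using: SHA256.self,
                                             salt: Data(),
                                             sharedInfo: Data("pcc-sketch".utf8),
                                             outputByteCount: 32)
    let sealed = try ChaChaPoly.seal(plaintext, using: key)
    // Ship the ephemeral public key alongside the ciphertext so the node can derive the same key.
    return ephemeral.publicKey.rawRepresentation + sealed.combined
}
```

Note the obvious catch the parent comments point at: whoever controls the root that signs those attestations, and the list of "trusted" measurements, controls what counts as a legitimate node in the first place.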
That doesn't make PCC useless, by the way. It clearly establishes that Apple misled customers if there is any intentionality in a breach, or that Apple was negligent if it does not immediately provide remedies on notification of a breach. But that's much more a "raising the cost" kind of thing than a technical exclusion. If Apple, as an organisation, wants to get at your data, and you use an iPhone, they absolutely can.
https://security.apple.com/documentation/private-cloud-compu...