
Let's talk about AI and end-to-end encryption

(blog.cryptographyengineering.com)
174 points | chmaynard | 1 comment
tonygiorgio ◴[] No.42741055[source]
> Although PCC is currently unique to Apple, we can hope that other privacy-focused services will soon crib the idea.

IMHO, Apple's PCC is a step in the right direction given where general AI privacy nightmares are today. It's not a perfect system, since it's not fully transparent and auditable, and I don't like their new opt-out photo scanning feature running on PCC, but there really is a lot to be inspired by in it.

My startup is going down this path ourselves, building on top of AWS Nitro and Nvidia Confidential Compute to provide end-to-end encryption from the AI user to the model running on the enclave side of an H100. It's not widely known that you can do this with H100s, but I really want to see more of it in the next few years.
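
To make that flow concrete, here is a minimal client-side sketch of what "end-to-end encryption to the model" can look like: fetch the enclave's attestation, verify it, extract the enclave's ephemeral public key, and encrypt the prompt so only the attested enclave can read it. Everything here is illustrative -- ENCLAVE_URL and verify_attestation_doc() are hypothetical placeholders, not the commenter's actual stack or a real Nitro/NVIDIA API.

    # Minimal sketch, assuming a hypothetical enclave endpoint.
    # ENCLAVE_URL and verify_attestation_doc() are illustrative
    # placeholders, not a real AWS Nitro or NVIDIA API.
    import os
    import requests
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    ENCLAVE_URL = "https://inference.example.com"  # hypothetical

    def verify_attestation_doc(doc: dict) -> bytes:
        # Placeholder. Real code must validate the signed attestation
        # (Nitro attestations are CBOR/COSE documents chained to the AWS
        # Nitro root cert; the GPU side has its own attestation report),
        # check the measurements, and confirm the public key is bound
        # inside the signed document.
        return bytes.fromhex(doc["public_key_hex"])

    def encrypt_prompt(prompt: str) -> dict:
        att = requests.get(f"{ENCLAVE_URL}/attestation").json()
        enclave_pub = x25519.X25519PublicKey.from_public_bytes(
            verify_attestation_doc(att))

        # Ephemeral ECDH: only the attested enclave can derive the key,
        # so the cloud operator in the middle sees only ciphertext.
        client_priv = x25519.X25519PrivateKey.generate()
        shared = client_priv.exchange(enclave_pub)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"e2e-inference-v1").derive(shared)

        nonce = os.urandom(12)
        ciphertext = ChaCha20Poly1305(key).encrypt(nonce, prompt.encode(), None)
        return {
            "client_pub": client_priv.public_key().public_bytes(
                Encoding.Raw, PublicFormat.Raw).hex(),
            "nonce": nonce.hex(),
            "ciphertext": ciphertext.hex(),
        }

The enclave would do the mirror-image ECDH with its attested private key, decrypt, run inference, and encrypt the response back the same way, so the host and hypervisor only ever handle ciphertext.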

replies(2): >>42741932 #>>42742122 #
blueblimp ◴[] No.42742122[source]
And the most important thing about PCC, in my opinion, is not the technical aspect (though that's nice) but that Apple treats user privacy as something good to be maximized. That differs from the view championed by OpenAI and Anthropic (and by this point adopted by Google and virtually every other major LLM provider) that user interactions must be surveilled for "safety" purposes. The lack of privacy isn't due to a technical limitation--it's intended, and they often brag about it.
replies(2): >>42743482 #>>42745358 #
natch ◴[] No.42743482[source]
Something good to be maximized within the constraints of the systems they have to work with. But at some point, with enough compromises, it becomes maximizing the perception of privacy rather than the reality. Promoting these academic techniques may just be perception management on Apple's part if the keys are not controlled solely by the user.