Honestly, I think this is a disingenuous defense. It's not insane to look at a closed-source project being partially audited by cherry-picked organizations and say "that's not a very secure or trustworthy process". No reasonable accountability is being offered to the community. It's like Ford selecting private safety inspectors to tell customers how safe its cars are while conveniently leaving out the results of its federally-mandated crash tests. Is this really helping customers, or is it just blatant, masturbatory marketing?
Apple has worked to deceive the public before, in ways both small and large. They lied about backdooring push notifications for the US government until they were forced to admit it[0], so it's not hard to imagine the same happening anywhere else in their systems. They're not taking a traditional approach to software transparency, which is suspicious, and by their own admission their "threat model" does not protect against motivated legal requests for identifying information[1].
When the Mechanical Turk's operators set out to fool the commoners watching it work, it was imperative to hide every trace of the human inside. The candle used to see inside the machine was masked by smoke from candles placed around the room, the cabinet was kept locked to prevent accidental opening, and onlookers were told not to touch it because it was supposedly 'expensive and fragile'. It looks like Apple is the ringleader this time around.
> but just saying “well maybe they’re still doing some unspecified nefarious thing” is not contributing to the discussion.
But Apple is saying the opposite: "well, maybe we're doing some unspecified secure thing, ask these people we hired", and you're praising them for it. If calling out obvious, objective logical fallacies isn't contributing to the discussion, then how are we supposed to argue inside the Reality Distortion Field? Do we make-believe that Apple's stated preconditions are true, or can we voice concerns about the outstanding issues? I don't understand how these real-world flaws are somehow off-limits in conversation. If you take security seriously, you're allowed to hold Apple to adversarial levels of scrutiny.
> Especially when none of the alternatives are even trying.
Apple is the largest company in the world, and by many metrics (and points of comparison) it isn't doing even the bare minimum to manage public trust. Whenever you are shown a whitepaper without the means to validate its contents yourself, you are being fed what tech circles call "marketing". You don't have to feel bad about being tricked, though; it's the same thing that fools investors and overly-faithful engineers. Whitepapers are whitepapers, handpicked security audits are handpicked security audits, and code is code. There is no blurring of the lines.
[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
[1] https://www.apple.com/legal/transparency/us.html