Posting things like you did, unprovoked, when nobody is talking about it and it has nothing to do with the post itself, is fucking weird, and I'm tired of seeing it happen with nobody calling out how fucking weird it is. It happens a lot on posts about iCloud or Apple Photos or AI image generation. Why are you posting about child porn scanning and expressing a negative view of it for no reason? Why is that what you're trying to talk about? Why is it on your mind at all? Why do you feel it's OK to post about shit like that as if you're not being a fucking creep by doing so? Why do you feel emboldened enough to think you can say or imply shit and not catch any shit for it?
Since we're calling people out, allow me to call you out:
Wanting your devices to be private and secure, or asking questions about Apple after their f-up, doesn't make you a pedo or a pedo sympathiser. Comments that suggest otherwise can also be a bit "sus" (to use your expression), especially in a place like HN, where users are expected to know a thing or two about tech and to be aware that the "think of the children" excuse - however worthy the cause - is sometimes used to introduce technology that is then misused (eg: the internet filter in the UK that was supposed to protect children and now blocks sex education material, torrents, etc).
I'll assume your intentions are good, but it isn't right to assume or imply that people complaining about this stuff are pedos.
good luck with that
The scanning Apple wanted to do was intrusive, had flaws, and could be abused. That's why security researchers, the EFF, and others spoke out against it. Not long after the announcement, people were sharing "collisions" on GitHub ( https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue... ) showing that false positives would be a problem (any alarm bells?)... which then forced Apple to say that there would be a v2 to fix this (even though they had said that v1 was safe and secure).
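To see why collisions are even possible, here's a minimal sketch using a hypothetical "average hash" - far cruder than Apple's NeuralHash, but the failure mode is analogous: perceptual hashes deliberately throw away detail so that similar images match, which means two clearly different images can land on the same hash. Everything here (the function, the toy 4x4 "images") is made up for illustration:

```python
def average_hash(pixels):
    """Hash a grayscale image (list of rows) by thresholding each
    pixel against the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

# Image A: a bright square on a dark background.
image_a = [
    [10, 10, 10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 10, 10],
]

# Image B: different pixel values everywhere, but the same
# brighter-than-average pattern, so the hash comes out identical.
image_b = [
    [90, 90, 90, 90],
    [90, 140, 140, 90],
    [90, 140, 140, 90],
    [90, 90, 90, 90],
]

assert image_a != image_b
assert average_hash(image_a) == average_hash(image_b)  # a "collision"
```

The GitHub thread showed the same thing against the real NeuralHash: adversarially crafted images that hash to the same value as unrelated ones, i.e. innocent photos that could trip the detector.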
On top of ignoring these issues, you seem to be under the impression that the system could only detect CSAM. It couldn't. The system looked for whatever content it was told to look for... and Apple was going to feed it a CSAM "database" to find that type of content. The problem is that Apple has to follow local laws, and many governments have their own databases of "bad" content to block (and report to the authorities)... and Apple usually complies instead of leaving the market. In China, for example, the state has access to encrypted data because Apple handed over the encryption keys to comply with local law, and censorship-avoidance apps are banned there. Why would this system have been any different?
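The point is that the matching machinery itself is content-agnostic - swap the database and you swap what gets reported. A hypothetical sketch (all names and hash strings here are invented, not Apple's actual format):

```python
def flag_matches(photo_hashes, target_db):
    """Return the user's photo hashes that appear in the target database.
    Nothing about this function knows or cares what the database contains."""
    return [h for h in photo_hashes if h in target_db]

user_photos = ["hash_cat", "hash_meme", "hash_protest_flyer"]

# What Apple said it would use:
csam_db = {"hash_known_abuse_image"}
# What a government could mandate instead:
government_db = {"hash_protest_flyer"}

print(flag_matches(user_photos, csam_db))        # nothing flagged
print(flag_matches(user_photos, government_db))  # the protest flyer is flagged
```

Same code path, same on-device scanner - only the input set changed. That's why "it's only for CSAM" was a policy promise, not a technical property.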
If you want to insist that it was just for CSAM and that people criticising Apple are pedos or are against companies fighting CSAM, then go ahead, but do it with the knowledge that the system wasn't just for CSAM, that it could be tricked (and potentially ruin people's lives), and that it would likely have been abused by governments.