
Let's talk about AI and end-to-end encryption

(blog.cryptographyengineering.com)
269 points by chmaynard | 2 comments
klik99
> You might even convince yourself that these questions are “privacy preserving,” since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

Something I've started to see happen, but rarely see mentioned, is the effect automated detection has on the organizations that use it: as detection becomes more automated (previously hand-authored algorithms, now large AI models), less money is available for individual case workers, and management places more trust in the automated output. False positives then turn into major frustrations, because it's hard to reach a person who can resolve the issue. When it's a business, that's frustrating; as these systems see more use in law enforcement, it could be life-ruining.

For instance: I got flagged for posting illegal reviews on Amazon years ago and spent months trying to make my case to a human. Every year or so I try to raise the issue again so I can leave reviews, but it goes nowhere. Now imagine the same thing happening with a serious criminal issue; given the years-long backlog in some courts, it could ruin someone's life.

More automated detection can work (and honestly, it's inevitable), but the people deploying it have to acknowledge that false positives will happen and allocate enough staff to resolve them. As it stands, these detection systems get built and human case workers immediately get laid off. The assumption is that detection systems REPLACE humans, when really they should augment and focus human case workers so you can do more with less; the human aspect needs to be included in the budgeting (a rough sketch of the math is below).

But the incentives aren't there, and the people making the decisions aren't the ones working the actual cases, so they're never confronted with the problem. For them, the question is: why save $1M when you could save $2M? With large AI models making automated detection easier and more effective to build, I expect this problem to get significantly worse over the coming years.
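A toy back-of-the-envelope sketch of that false-positive math (every number below is invented for illustration): when the behavior being hunted is rare, even a detector with a small false-positive rate buries reviewers in innocent cases.

    # Toy numbers, all invented: a detector with a 1% false-positive
    # rate, applied at platform scale to a rare behavior, still floods
    # the review queue with innocent people.
    users_scanned_per_day = 10_000_000
    prevalence = 0.0001         # 1 in 10,000 users actually violating
    false_positive_rate = 0.01  # 1% of innocent users get flagged
    true_positive_rate = 0.95   # 95% of real violators get caught

    violators = users_scanned_per_day * prevalence
    innocents = users_scanned_per_day - violators

    true_flags = violators * true_positive_rate    # ~950 per day
    false_flags = innocents * false_positive_rate  # ~99,990 per day

    precision = true_flags / (true_flags + false_flags)
    print(f"precision: {precision:.1%}")  # ~0.9%: over 99% of flags are innocent

    # If one case worker can resolve 50 flags a day, keeping up takes:
    reviewers_needed = (true_flags + false_flags) / 50
    print(f"reviewers needed: {reviewers_needed:,.0f}")  # ~2,019

Cut the review team after deploying the detector and that queue only grows, and every person stuck in it is a false positive with nobody to appeal to.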

smallmancontrov
The UK Post Office scandal is bone-chilling: the Horizon accounting software reported phantom shortfalls, and hundreds of subpostmasters were prosecuted, bankrupted, or jailed on the strength of its output.

Update this to a world where every corner of your life is controlled by a platform monopoly that doesn't even provide the most bare-bones customer service, and yeah, this is going to get a lot worse before it gets better.

Vampiero
And that's the early game.

Imagine when AI is monitoring all internet traffic and people are being arrested for thoughtcrime.

What wasn't feasible before is now well within reach, and the consequences are dire.

Though of course it won't happen overnight. First they'll let AI encroach on every available space (backed by enthusiastic techbros). THEN, once it's established, boom: authoritarian police state dystopia times 1000.

And it's not like they need evidence to bin you. They just need inference. People who share your psychological profile will act, speak, and behave similarly to you, so you can be put in the same category; when enough people in that category are tagged as criminals, you will be too (a sketch of that logic follows below).

All because you couldn't be arsed to write some boilerplate.
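A minimal sketch of that inference-by-category logic; everything here (the features, the clustering choice, the threshold) is hypothetical and invented for illustration.

    # Hypothetical sketch: guilt by statistical association.
    # All data, features, and thresholds below are invented.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Behavioral profiles for 1,000 people (writing style, browsing
    # patterns, whatever the model embeds); random stand-ins here.
    profiles = rng.normal(size=(1000, 16))

    # Some people were already tagged as criminals by an earlier process.
    tagged = rng.random(1000) < 0.02

    # Group people with similar profiles into categories.
    clusters = KMeans(n_clusters=20, n_init="auto").fit_predict(profiles)

    # Flag every member of any cluster whose tagged fraction crosses a
    # threshold; no evidence about any individual is ever consulted.
    THRESHOLD = 0.05
    for c in range(20):
        members = clusters == c
        if tagged[members].mean() > THRESHOLD:
            print(f"cluster {c}: all {members.sum()} members flagged")

The point of the sketch: your own actions never enter the decision, only your cluster's statistics do.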

shakna
It's already arresting the wrong people [0].

[0] https://www.theregister.com/2023/08/08/facial_recognition_de...