
Let's talk about AI and end-to-end encryption

(blog.cryptographyengineering.com)
269 points by chmaynard | 2 comments
klik99 No.42744066
> You might even convince yourself that these questions are “privacy preserving,” since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

Something I've started to see happen, but rarely see mentioned, is the effect automated detection has on these systems: as detection becomes more automated (previously hand-authored algorithms, now large AI models), less money goes to individual case workers, and management places more trust in the automatic detection. False positives then turn into major ordeals, because it's so hard to reach a person who can resolve the issue. When it's a business, that's frustrating; as these systems spread into law enforcement, it could be life-ruining.

For instance - I got flagged for posting illegal reviews on Amazon years ago and spent months trying to make my case to a human. Every year or so I raise the issue again so I can leave reviews, and it goes nowhere. Imagine this happening with a serious criminal issue; given the years-long backlog in some courts, it could ruin someone's life.

More automated detection can work (and honestly, it's inevitable), but it has to acknowledge that false positives will happen and allocate enough people to resolve them. As it stands, these detection systems get built and the human case workers are immediately laid off. The assumption is that detection systems REPLACE humans, when really they should augment and focus human case workers so you can do more with less - the human side needs to be included in the budgeting.

But the incentives aren't there, and the people making the decisions aren't the ones working the actual cases, so they never confront the problem. For them the question is: why save $1M when you could save $2M? With large AI models making automated detection easier and more effective to build, I expect this problem to get significantly worse over the next few years. A back-of-the-envelope sketch of the staffing math is below.
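Rough sketch of why the appeals side has to be budgeted explicitly. Every number here is hypothetical, just to show the base-rate effect:

    # Back-of-the-envelope: even an accurate-sounding detector needs a
    # staffed appeals pipeline. All numbers are hypothetical.
    accounts       = 10_000_000  # accounts screened per year
    violation_rate = 0.001       # fraction actually violating policy
    true_positive  = 0.95        # detector catches 95% of violators
    false_positive = 0.01        # and wrongly flags 1% of innocents

    violators = accounts * violation_rate
    innocents = accounts - violators

    flagged_guilty   = violators * true_positive    # ~9,500
    flagged_innocent = innocents * false_positive   # ~99,900

    print(f"correct flags:   {flagged_guilty:,.0f}")
    print(f"false positives: {flagged_innocent:,.0f}")

    # Roughly 10 innocent people flagged for every actual violator.
    cases_per_worker = 500  # appeals one case worker can resolve per year
    print(f"case workers needed just for appeals: "
          f"{flagged_innocent / cases_per_worker:,.0f}")

Even with a detector that sounds accurate on paper, the flagged pool is dominated by innocent people - and that pool is exactly what you're defunding when you lay off the case workers.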

replies(8): >>42744353 #>>42744806 #>>42745195 #>>42745270 #>>42746590 #>>42747578 #>>42747693 #>>42756231 #
drysine No.42746590
>Imagine this happening with a serious criminal issue; given the years-long backlog in some courts, it could ruin someone's life.

It can be much scarier.

There was a case in Russia where a scientist was accused of a murder committed 20 years earlier, based on a 70% face recognition match and a false identification by a criminal who named him as an accomplice. [0] He spent 10 months in jail during the "investigation", despite being incredibly lucky to have an alibi: archival records from the institute where he worked, proving he was on an expedition far from Moscow at the time. He was eventually freed, but I'm afraid the police investigators who used a very weak face recognition match to pad their performance stats are still working in the police.

[0] https://lenta.ru/articles/2024/04/03/scientist/
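For intuition on why a 70% similarity score is nearly meaningless as evidence on its own, a quick sketch (the numbers are illustrative, not from this case):

    # Why a 70% face-recognition "match" is weak evidence by itself.
    # Numbers are illustrative, not from the actual case.
    database_size = 10_000_000  # faces searched against

    # Assume a score this high also fires for 1 in 10,000 random
    # non-matching faces (plausible for a 20-year-old photo).
    false_match_rate = 1e-4

    expected_innocent_hits = database_size * false_match_rate
    print(f"innocent people matching at this score: "
          f"{expected_innocent_hits:,.0f}")  # ~1,000

    # So before any other evidence, the chance that a given hit is
    # the perpetrator is on the order of 1 in 1,000, not 70%.

A similarity score is not a probability of guilt; searched against a large enough database, scores that high are expected for many innocent people.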

replies(4): >>42746952 #>>42747250 #>>42748018 #>>42751683 #
bflesch [dead post] No.42748018
[flagged]
fwn No.42748194
> [...] my conclusion is that you're here to spill russian propaganda. [...]

The case described by the parent is that of someone who was wrongly imprisoned for 10 months on the basis of a bogus application of faulty technology, even though he had a solid alibi. The comment therefore does not reflect well on Russia, the Russian state, or the Russian government at all.

If there is a propaganda dimension to this (which I doubt), it is certainly not an attempt to say something nice about the Russian justice system.

replies(1): >>42748532 #
bflesch No.42748532
It's a subtle form of propaganda, in the same category as the "funny Russian car crashes" or "awesome Chinese acrobat" videos that are on reddit's front page all the time. One might wonder why it's always those two countries, and not others, that get thousands of upvotes.

The comment I criticized falsely implies that there is due process in Russia, and that unfair outcomes for the accused are merely the product of technical faults.

It is a cherry-picked example; the vast majority of Russian court cases are decided without due process, because it is a dictatorship. If you try to get justice after being harmed by corrupt officials or the tsar, you're out of luck. Lawyers get shot in the street as a birthday present for Putin. There are lots of examples. And once you're in prison, they'll send you to the front lines to murder Ukrainians.

replies(1): >>42748948 #
fwn No.42748948
I'm pretty relaxed about all this, but just a thought: Your comments in this thread seem very eager to talk about Russia instead of the actual topic of the thread, which is privacy and AI.

You wrote those comments in a very repetitive, mission-driven way, which does not inspire confidence in the absence of ulterior motives.