
427 points by JumpCrisscross | 8 comments
mrweasel
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm can explain EXACTLY why it flagged this person.

Now, this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.
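
A minimal sketch of the gap (names and data hypothetical, Python just for illustration): a classic matcher can hand over its evidence, while a score-only detector has nothing else to disclose:

    import hashlib

    # Naive classic checker: slides a window over the submission and records
    # every passage that also appears in a known source. The output *is* the
    # explanation: which passage matched which source.
    def rule_based_check(submission: str, corpus: dict[str, str], window: int = 40):
        evidence = []
        for source, text in corpus.items():
            for start in range(len(submission) - window + 1):
                snippet = submission[start:start + window]
                if snippet in text:
                    evidence.append((source, snippet))
        return evidence

    # Stand-in for an opaque model: deterministic but inscrutable. The score
    # is the entire output; there is no evidence to hand back to the student.
    def ai_check(submission: str) -> float:
        return hashlib.sha256(submission.encode()).digest()[0] / 255.0

An "explain EXACTLY why" rule is satisfied by the first function by construction; the second can't satisfy it no matter how accurate it is.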

1. ben_w
> For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm can explain EXACTLY why it flagged this person.

This is a big part of GDPR.

2. ckastner
Indeed. Quoting article 22 [1]:

> The data subject shall have the right not to be subject to a decision based solely on automated processing [...]

[1]: https://gdpr.eu/article-22-automated-individual-decision-mak...

3. mrweasel
I did not know that. Thank you.

Reading the rules quickly, it does seem like you're not entitled to know why the computer flagged you, only that you have the right to "obtain human intervention". That seems a little too soft; I'd like to know exactly which rules I'm being judged under.

4. 2rsf
And, no less importantly, the still-young EU AI Act.
5. auggierose
So if an automated decision happens, and a reviewer looks at it for a second and says "good enough", that will be OK according to the GDPR? I don't see what the GDPR solves here.
6. lucianbr
Well, I guess the theory is that you could go to court, and the court would be reasonable and say "this one-second look does not fulfill the requirement; you need to actually use human judgement and see what was going on there". Lots of discussions regarding FAANG malicious compliance have shown this is how the high courts work in the EU, when there is political will.

But if you're a nobody and can't afford to go to court against, say, Deutsche Bank, of course you're SOL. The EU has some good parts, but it's still a human government.

It's especially problematic since a good chunk of those "flagged" really are doing something nefarious, and both courts and governments will consider a system that "mostly works" a good outcome. One or ten unlucky citizens is just the way the world works, as long as it's not someone with money, power, or fame.

7. auggierose
I don't see that even people with money and power can do anything here. It's like VAR in football: when has it ever happened that the referee goes to the screen and does not follow the VAR recommendation? Never. That is how automated decision-making will work as well, across the board.
8. ckastner
> So if an automated decision happens, and a reviewer looks at it for a second and says "good enough", that will be OK according to the GDPR? I don't see what the GDPR solves here.

The assumption is that a human reviews the conditions that led the automated system to make that decision.

I think it would be trivial to argue in court that rubber-stamping some scalar value that a deep neural net (or whatever) spat out does not pass that bar. It's still the automated system's decision; the human is just parroting it.

Note that it's easier for the FAANGs to argue that such a review happened, because they have massive amounts of heterogeneous data in which there's bound to be something sufficient to argue with (like having posted something that offended someone).

But a single score? I'd say that's almost impossible to argue. One would have to demonstrate that the system is near-perfect and virtually never makes mistakes.
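
To make that concrete, here's a toy sketch (fields invented for illustration) of what the "reviewer" actually has in front of them in each case:

    # Hypothetical review records, invented for illustration.
    faang_case = {
        "score": 0.87,
        "signals": [                    # evidence a human can actually weigh
            "post reported by 3 users",
            "prior warning on this account",
            "matched a known-bad content hash",
        ],
    }

    plagiarism_case = {
        "score": 0.87,                  # the entire record
    }

    def human_review(case: dict) -> str:
        # With signals, a reviewer can exercise judgement; with a bare
        # score, "review" collapses into accepting or rejecting the number.
        return "judge the evidence" if case.get("signals") else "rubber-stamp the score"

Article 22's "human intervention" only means something in the first case.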