427 points JumpCrisscross | 1 comments | | HN request time: 0s | source
mrweasel ◴[] No.41901883[source]
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm be able to explain EXACTLY why it flagged this person.

Now this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.

replies(8): >>41902108 #>>41902131 #>>41902463 #>>41902522 #>>41902919 #>>41905044 #>>41905842 #>>41907688 #
ben_w ◴[] No.41902108[source]
> For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm be able to explain EXACTLY why it flagged this person.

This is a big part of GDPR.

replies(3): >>41902128 #>>41902309 #>>41903067 #
ckastner ◴[] No.41902128[source]
Indeed. Quoting article 22 [1]:

> The data subject shall have the right not to be subject to a decision based solely on automated processing [...]

[1]: https://gdpr.eu/article-22-automated-individual-decision-mak...

replies(1): >>41906475 #
auggierose ◴[] No.41906475[source]
So if an automated decision happens, and the reviewer glances at it for a second and says "good enough," that will be OK according to GDPR. I don't see what GDPR solves here.
replies(2): >>41907059 #>>41908555 #
ckastner ◴[] No.41908555[source]
> So if an automated decision happens, and the reviewer glances at it for a second and says "good enough," that will be OK according to GDPR. I don't see what GDPR solves here.

The assumption is that a human reviews the conditions that led the automated system to make that decision.

I think it would be trivial to argue in court that rubber-stamping some scalar value that a deep neural net (or whatever) spat out does not pass that bar. It's still the automated system's decision; the human is just parroting it.

Note that it's easier for the FAANGs to argue such a review has happened, because they have massive amounts of heterogeneous data where there's bound to be something sufficient to argue with (like having posted something that offended someone).

But a single score? I'd say that's almost impossible to argue. One would have to demonstrate that the system is near-perfect and virtually never makes mistakes.