
427 points JumpCrisscross | 1 comment | source
mrweasel ◴[] No.41901883[source]
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule demanding that the algorithm be able to explain exactly why it flagged this person.

Now, this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.

replies(8): >>41902108 #>>41902131 #>>41902463 #>>41902522 #>>41902919 #>>41905044 #>>41905842 #>>41907688 #
sersi ◴[] No.41902131[source]
It's a similar problem to people being banned from Google (insert big company name here) by an automated fraud-detection system that gives no reason for the ban.

I also think there should be laws requiring a clear explanation whenever that happens.

replies(2): >>41902219 #>>41902596 #
razakel ◴[] No.41902219[source]
What about tipping off? Banks can't tell you that they've closed your account because of fraud or money laundering.
replies(2): >>41902939 #>>41904613 #
acdha ◴[] No.41904613[source]
That doesn’t seem like a good comparison: it’s a far more serious crime, and while the bank won’t tell you that it’s reporting your activity to the authorities, the legal process absolutely will, and in sensible countries you’re guaranteed the opportunity to challenge the evidence.

The problem being discussed here should be similar in that last regard: any time an automated system makes a serious decision, it should be required to come with an explanation and a review process. If the operators don’t have sufficient evidence to back up the claim, they need to collect that evidence before making further accusations.