427 points | JumpCrisscross | 1 comment
mrweasel:
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule demanding that the algorithm explain exactly why it flagged this person.

Now this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.
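The kind of explanation demanded above is routine for interpretable models. As a minimal sketch (the feature names and weights here are made up for illustration, not taken from any real detector), a linear classifier can report exactly which features pushed a document over the flagging threshold:

```python
import math

# Hypothetical feature weights from a trained linear (logistic) classifier.
# Positive weights push toward "flagged"; negative weights push away.
WEIGHTS = {
    "avg_sentence_length": 0.8,
    "rare_word_ratio": -1.2,
    "burstiness": -2.0,
    "repeated_phrases": 1.5,
}
BIAS = -0.5

def explain_flag(features):
    """Score a document and return each feature's contribution to the flag."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    # Rank features by how strongly they pushed toward "flagged".
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return probability, ranked

prob, reasons = explain_flag({
    "avg_sentence_length": 1.2,
    "rare_word_ratio": 0.3,
    "burstiness": 0.1,
    "repeated_phrases": 0.9,
})
print(f"flag probability: {prob:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

A student flagged by a model like this could be shown the ranked contribution list verbatim; large neural detectors offer no comparable per-decision accounting.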

smartmic:
I agree with you, but I would go further and turn the tables: an AI should simply not be allowed to evaluate people, in any context whatsoever, for the simple reason that it has been proven not to work (and never will).

Anyone interested in learning more should read the recent book "AI Snake Oil" by Arvind Narayanan and Sayash Kapoor [1]. It is a critical but nuanced book that helps put the whole AI hype in clearer perspective.

[1] https://press.princeton.edu/books/hardcover/9780691249131/ai....

raincole:
Statistical models (which "AI" is) have been used to evaluate people's outputs since forever.

Examples: Spam detection, copyrighted material detection, etc.
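Spam detection is a good illustration of how old this practice is. A toy naive Bayes filter, the classic statistical approach, fits in a few lines (the training documents here are invented for the example; real filters train on millions of messages):

```python
from collections import Counter
import math

# Tiny, made-up training corpora.
spam_docs = ["win cash now", "free cash prize now", "claim free prize"]
ham_docs = ["meeting agenda attached", "lunch tomorrow", "project status meeting"]

def train(docs):
    counts = Counter(word for doc in docs for word in doc.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(doc, counts, total):
    # Laplace smoothing (+1) so unseen words don't zero out the score.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in doc.split()
    )

def is_spam(doc):
    return (log_likelihood(doc, spam_counts, spam_total)
            > log_likelihood(doc, ham_counts, ham_total))

print(is_spam("free cash"))       # spammy vocabulary -> True
print(is_spam("status meeting"))  # office vocabulary -> False
```

Every email sender's output has been judged by models of exactly this family for decades; the question is whether the same logic transfers acceptably to grading people.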

freilanzer:
But not in cheating or grades, etc. Spam filters are completely different from this.
baby_souffle:
> But not in cheating or grades, etc. Spam filters are completely different from this.

Really? A spammer is trying to ace a test where my attention is the prize. I don't really see a huge difference between a student/diploma and a spammer/my attention.

Education tech companies have been playing with ML and similar "AI-adjacent" tech for decades. If you went to school in the US any time after computers entered the classroom, you probably had some exposure to a machine-generated or machine-scored test. That data was used to tailor lessons to pupil interests, goals, and state curricula. Good software also gave instructors feedback about where each student or cohort was struggling.
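The instructor-feedback feature described here needs nothing fancier than aggregating auto-scored answers per topic. A minimal sketch, assuming a made-up input format of (student, topic, correct?) triples:

```python
from collections import defaultdict

# Hypothetical auto-scored test results: (student, topic, correct?) triples.
responses = [
    ("ana", "fractions", True), ("ana", "fractions", False),
    ("ben", "fractions", False), ("ben", "geometry", True),
    ("ana", "geometry", True), ("ben", "fractions", False),
]

def topic_mastery(responses):
    """Fraction of correct answers per topic, across the whole cohort."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for _, topic, ok in responses:
        totals[topic] += 1
        correct[topic] += ok  # True counts as 1
    return {topic: correct[topic] / totals[topic] for topic in totals}

mastery = topic_mastery(responses)
for topic, rate in sorted(mastery.items(), key=lambda kv: kv[1]):
    print(f"{topic}: {rate:.0%} correct")
```

Sorting weakest-first surfaces where the cohort is struggling, the same report a teacher would read off a dashboard.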

LLMs are just an evolution of tech that has been pretty well integrated into academic life for a while now. Was anything in academia prepared for this evolution? No. But banning it outright isn't going to work.