
427 points | JumpCrisscross | 1 comment
mrweasel ◴[] No.41901883[source]
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm be able to explain EXACTLY why it flagged this person.

Now this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.

replies(8): >>41902108 #>>41902131 #>>41902463 #>>41902522 #>>41902919 #>>41905044 #>>41905842 #>>41907688 #
smartmic ◴[] No.41902522[source]
I agree with you, but I would go further and turn the tables. An AI should simply not be allowed to evaluate people, in any context whatsoever, for the simple reason that it has been proven not to work (and never will).

For anyone interested in learning more, I recommend the recent book "AI Snake Oil" by Arvind Narayanan and Sayash Kapoor [1]. It is a critical but nuanced book and helps one see the whole AI hype a little more clearly.

[1] https://press.princeton.edu/books/hardcover/9780691249131/ai....

replies(2): >>41902634 #>>41903001 #
fullstackchris ◴[] No.41902634[source]
I'm definitely no AI hypester, but saying anything will "never" work over an infinite timeline is a big statement... do you have grounds for claiming that no AI system could ever work at evaluating some metric about a person? It seems we already have reliable systems doing that in some areas (facial recognition at airport boarding, for example).
replies(2): >>41902909 #>>41908929 #
smartmic ◴[] No.41902909[source]
Okay, let me try to be more precise. By "evaluate", I mean using an AI to make predictions about human behavior, either retrospectively (as is the case here, in trying to make an accusation of cheating) or prospectively (e.g., automating criminal justice). Even if you could collect all the parameters (features?) that make up a human being, there is randomness in humans and in nature in general, which simply defeats any ultimate prediction machine. Not to mention the edge cases we wander into. You can try to measure and average a human being, and you will get a certain accuracy well above 50%, but you will never cross the threshold of accuracy that judgments about a human being should be held to, especially in life-deciding questions like career decisions or other social matters.

Reliable systems in some areas? Absolutely, and yes, even facial recognition. I agree that it works very well, but it is a different issue, as it does not reveal or try to guess anything about the inner person. There are other problems that arise from the fact that it works so well (surveillance, etc.), but that is not the part of the equation I meant.

replies(1): >>41903126 #
_heimdall ◴[] No.41903126[source]
This feels like an argument bigger than AI evaluations. All the points you raised could just as well be issues with humans evaluating other humans in an attempt to predict future outcomes.
replies(1): >>41904190 #
smartmic ◴[] No.41904190[source]
They are not wrong. And the art of predicting future outcomes proves difficult and fraught with failure. But human evaluation of other humans is more of a level playing field to me: a human is accountable for what he or she says or predicts about others, and is subject to interrogation or to social or legal consequences. Not so easy with AI, because it steps outside all of these areas - at least, many actors using AI do not seem to stay responsible and own those mistakes.
replies(1): >>41904505 #
_heimdall ◴[] No.41904505[source]
In my experience, we're really bad at holding humans accountable for their predictions too. That may even be a good thing, but I'm not so sure we would hold LLMs any less accountable for their predictions than we hold humans.