
427 points | JumpCrisscross | 1 comment
mrweasel No.41901883
The part that annoys me is that students apparently have no right to be told why the AI flagged their work. For any process where a computer is allowed to judge people, there should be a rule in place demanding that the algorithm be able to explain EXACTLY why it flagged this person.

Now this would effectively kill off the current AI-powered solutions, because they have no way of explaining, or even understanding, why a paper may or may not be plagiarized, but I'm okay with that.

viraptor No.41902463
> kill off the current AI-powered solutions, because they have no way of explaining

That's not correct. Some solutions look at perplexity under specific models, some look at n-gram frequencies, and others take similar approaches. Almost all of those can produce a heatmap of "what looks suspicious". I wouldn't expect any of the detection systems to be black boxes running an LLM over the whole text.
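To make the perplexity approach concrete, here is a minimal sketch: score each token of a submission by how surprising it is to a reference language model, and flag spans that are unusually predictable. Using GPT-2 via Hugging Face transformers is my own choice for illustration; real detectors use their own models, thresholds, and calibration, and none of the names below come from the thread.

  # pip install torch transformers   (assumed setup, not from the thread)
  import torch
  from transformers import GPT2LMHeadModel, GPT2TokenizerFast

  tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2")
  model.eval()

  def token_surprisals(text):
      # Per-token negative log-likelihood under GPT-2.
      ids = tokenizer(text, return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits
      # Position i predicts token i+1, so shift the targets by one.
      logp = torch.log_softmax(logits[0, :-1], dim=-1)
      nll = -logp.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
      tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
      return list(zip(tokens, nll.tolist()))

  # Unusually low surprisal means "very predictable to the model", which
  # is what such detectors mark as suspicious; the per-token scores are
  # exactly the cells of the heatmap described above.
  for tok, s in token_surprisals("The quick brown fox jumps over the lazy dog."):
      print(f"{tok:>12}  {s:5.2f}")

The same per-token scores that drive the classification can be shown to the student, which is the point: these approaches are inspectable, not opaque.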
