
427 points by JumpCrisscross | 1 comment
fuzzy_biscuit (No.41903341)
If AI detection cannot be 100% accurate, I don't believe it's an appropriate tool for judging the futures of millions of students and young people. Time to move on, either from the tech or from the essay format.

In either case, we need to change our standards around mastery of subject matter.
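
To put rough numbers on why "not 100% accurate" matters at this scale, here's a minimal Python sketch; the 2% false-positive rate, cohort size, and essay count are hypothetical illustrations, not figures from the article:

    # Back-of-the-envelope false-accusation math for an AI detector.
    # All numbers are hypothetical, chosen only to show the base-rate effect.
    false_positive_rate = 0.02   # detector wrongly flags 2% of human-written essays
    students = 2_000_000         # size of the student cohort
    essays_per_student = 10      # scanned essays per student per year

    total_essays = students * essays_per_student
    false_flags = total_essays * false_positive_rate

    # Chance an entirely honest student gets flagged at least once in a year.
    p_flagged_once = 1 - (1 - false_positive_rate) ** essays_per_student

    print(f"{false_flags:,.0f} false accusations per year")           # 400,000
    print(f"{p_flagged_once:.0%} of honest students flagged yearly")  # 18%

Even a detector that's right 98% of the time produces hundreds of thousands of wrong accusations when run at population scale.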

high_na_euv (No.41903517)
AI sucks, but on the other hand, judges and police officers aren't 100% accurate either.

alias_neo (No.41903683)
I'd like to think they'd at least look for some evidence, rather than just ask a crystal ball whether the person is innocent or not.

For a supposedly educated, thinking person like a professor: if they don't understand "AI" well enough to reason that it can certainly be wrong, they just shouldn't be allowed to use it.

Threatening people like those in the article with consequences if they're flagged again, after prior false flags, is barbaric; the tool is clearly discriminating against their writing style, and further false flags are likely for them.
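
That intuition is easy to check: if the detector's false-positive rate isn't uniform but is elevated for certain writing styles, repeat flags for the same honest writer become near-certain. A quick sketch, where both rates are assumed purely for illustration:

    from math import comb

    def p_at_least_k_flags(fpr, essays, k):
        # Probability of >= k false flags over `essays` submissions,
        # treating each scan as an independent Bernoulli trial.
        return sum(comb(essays, i) * fpr**i * (1 - fpr)**(essays - i)
                   for i in range(k, essays + 1))

    baseline_fpr = 0.02  # assumed rate for the average honest writer
    biased_fpr = 0.30    # assumed rate for a style the detector dislikes

    print(f"{p_at_least_k_flags(baseline_fpr, 10, 2):.1%}")  # ~1.6%: rare
    print(f"{p_at_least_k_flags(biased_fpr, 10, 2):.1%}")    # ~85%: routine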

I can't imagine what a programming-heavy course would be like these days. At university, I was once accused of plagiarism alongside colleagues (people I'd never spoken to in my life) because our code assignments were scanned by some tool (before AI) that found double-digit-percentage similarity. But there are only so many ways to achieve the simple tasks they were setting; I'm not surprised a handful out of a hundred projects solving the same problem looked similar.
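
For what it's worth, that kind of score is easy to reproduce. The toy comparison below uses Python's stdlib difflib as a stand-in for whatever scanner the university actually ran (the comment doesn't name it); two independently written solutions to a trivial assignment still come out highly similar:

    from difflib import SequenceMatcher

    # Two independent solutions to "sum the even numbers in a list".
    student_a = """
    def sum_evens(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n
        return total
    """

    student_b = """
    def sum_even_numbers(values):
        result = 0
        for v in values:
            if v % 2 == 0:
                result += v
        return result
    """

    ratio = SequenceMatcher(None, student_a, student_b).ratio()
    print(f"similarity: {ratio:.0%}")  # comfortably double digits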