In either case, we need to change our standards around mastery of subject matter.
Judges and police officers aren't 100% accurate either.
For a supposedly educated, thinking person like a professor: if they don't understand "AI" well enough to reason that it can most certainly be wrong, they just shouldn't be allowed to use it.
Threatening someone like the people in the article with consequences if they're flagged again, after false flags have already occurred, is barbaric; the tool is clearly discriminating against their writing style, and further false flags for that person are likely.
I can't imagine what a programming-heavy course would be like these days. At university, before AI, I was once accused of plagiarism alongside colleagues I'd never spoken to in my life, because our code assignments were scanned by some tool that found a double-digit percentage similarity. But there are only so many ways to achieve the simple tasks they were setting; I'm not surprised a handful out of a hundred projects solving the same problem looked similar.
My comment from a few days ago.
The origin was a conversation with a girl who said she'd been pulled into a professor's office and told she was going to be reported to whatever her university's equivalent of Student Conduct and Academic Integrity is over using AI - a matter of academic honesty.
The professor made it clear in the syllabus that "no AI" was allowed to be used, spent the first few days of class repeating it, and yet, this student had been assessed by software to have used it to write a paper.
She had used Grammarly, not ChatGPT, she contended. They were her words and ideas, reshaped, not the sole product of a large language model.
In a world where style suggestion services are built into everything from email to keyboards, what constitutes our own words? Why have ghostwritten novels topped the NYT Best Sellers for decades while we rejected the fitness of a young presidential hopeful over a plagiarized speech?
Integrity doesn't exist without honesty. Ghostwriting is when one person shapes another person's truth into something coherent and gives them credit. A plagiarized speech is when someone falsely takes another person's truth as their own. What lines distinguish the two in tools built to combat the latter, and how do we communicate and enforce what is and isn't appropriate?
Institutions have to enforce rules around these things; if they don't, within 10 years their degrees will be pointless.
It's what happens when you believe someone to have cheated that matters. If it's not blatant cheating, then you cannot punish them for it. These tools exist to catch only the worst offenders.
Writing essays isn't just about your ideas. It's also a tool to teach communication skills. The goal of an essay isn't to produce a readable paper, until you start your PhD at least; it's to teach a variety of skills.
I don't really care about the AI generated spam that fills the corporate world because corporate reports are write-only anyway, but you can't apply what may be tolerated in the professional world to the world of education.
People keep using these "gotcha" examples and never actually look at the stats for it. I get it, there are some terrible detectors out there, and of course they are the free ones :)
https://edintegrity.biomedcentral.com/articles/10.1007/s4097...
GPTZero was correct in most scenarios where they used basic prompts, and only had one false positive.
We compared GPTZero's output against 3,000 hand-reviewed 9th-12th grade assignments and found that it holds up really well.
In the same way that plagiarism detectors need a process for review, your educational institution needs the same for AI detection. Students shouldn't be immediately punished, but instead it should be reviewed, and then an appropriate decision made by a person.
It's still not cheating. English assignments aren't about the practice of writing English; you stop doing that in primary school. They're about the analysis of English texts, for which people have been using spelling and grammar checkers since their inception. It's not even cheating to have someone proofread and edit your paper; it's usually encouraged, and Grammarly is just a worse-than-human editor.
> GPTZero was correct in most scenarios where they used basic prompts, and only had one false positive.
One false positive out of only "five human-written samples", unless I'm misreading.
Say 50 papers are checked, with 5 being generated by AI. By the rates GPTZero showed in the paper, 3 AI-generated papers would be correctly flagged and 9 human-written papers would be incorrectly flagged, meaning a flagged paper is only 25% likely to actually be AI-generated.
Realistically, the sample size in the paper is far too small to draw any real conclusion one way or the other, but I think people fail to appreciate the difference between a false positive rate and a false discovery rate.
Saying "AI" becomes meaningless if we're all using it to mean different things. If I use computer vision to perform cell counts, or if an ESL student uses deepl to help translate a difficult to express idea, would we be in breach of student conduct?
The real answer is "ask your professor first", but with how second nature many of these tools have become in P-12 education, it may not occur to students that asking is necessary.
But I think it's actually not all that different, particularly in the context of "essays teach writing." It used to be human work to analyze sentences for passive voice, remember the difference between there/their/they're, and understand how commas work, but now the computer handles it.
(Relevant sidenote: Am I using commas correctly here? IDK! I've never fully internalized the rules!)