    427 points JumpCrisscross | 20 comments
    1. fuzzy_biscuit ◴[] No.41903341[source]
    If AI detection cannot be 100% accurate, I do not believe it is an appropriate solution for judging the futures of millions of students and young people. Time to move on. Either from the tech or from the essay format.

    In either case, we need to change our standards around mastery of subject matter.

    replies(4): >>41903517 #>>41903857 #>>41903861 #>>41904189 #
    2. high_na_euv ◴[] No.41903517[source]
    AI sucks, but on the other hand

    Judges and police officers aren't 100% accurate either

    replies(2): >>41903683 #>>41903751 #
    3. alias_neo ◴[] No.41903683[source]
    I'd like to think they'd at least look for some evidence, rather than just ask a crystal ball whether the person is innocent or not.

    For a supposedly educated and thinking person like a professor, if they don't understand "AI" and can't reason that it can most certainly be wrong, they just shouldn't be allowed to use it.

    Threatening someone like the people in the article with consequences if they're flagged again, after false flags have already occurred, is barbaric; clearly the tool is discriminating against their writing style, and further false flags are likely for that person.

    I can't imagine what a programming-heavy course would be like these days. At university, I was once accused of plagiarism alongside colleagues of mine (people I'd never spoken to in my life) because our code assignments were being scanned by something (before AI) and it found some double-digit percentage similarity. But there are only so many ways to achieve the simple tasks they were setting; I'm not surprised a handful out of a hundred code projects solving the same problem looked similar.

    4. spacebanana7 ◴[] No.41903751[source]
    Our judicial processes, at least in theory, have defined processes for appeals and correcting mistakes.
    5. washadjeffmad ◴[] No.41903857[source]
    https://news.ycombinator.com/item?id=41882421

    My comment from a few days ago.

    The origin was a conversation with a girl who said she'd been pulled into a professor's office and told she was going to be reported to whatever her university's equivalent of Student Conduct and Academic Integrity is over using AI - a matter of academic honesty.

    The professor had made the "no AI" policy clear in the syllabus, spent the first few days of class repeating it, and yet this student had been assessed by software to have used it to write a paper.

    She had used Grammarly, not ChatGPT, she contended. They were her words and ideas, reshaped, not the sole product of a large language model.

    In a world where style suggestion services are built into everything from email to keyboards, what constitutes our own words? Why have ghostwritten novels topped the NYT Best Sellers for decades while we rejected the fitness of a young presidential hopeful over a plagiarized speech?

    Integrity doesn't exist without honesty. Ghostwriting is when one person shapes another person's truth into something coherent and gives them the credit. A plagiarized speech is when someone takes another person's truth as their own, falsely. Where do tools built to catch the latter draw the line against the former, and how do we communicate and enforce what is and isn't appropriate?

    replies(1): >>41904223 #
    6. bdzr ◴[] No.41903861[source]
    What solutions are 100% accurate?
    replies(2): >>41904025 #>>41904121 #
    7. tgv ◴[] No.41904025[source]
    Letting everyone pass.
    8. max51 ◴[] No.41904121[source]
    The problem is that AI detection accuracy is far closer to 0% than to 100%. It's really bad, and the very nature of this tech makes it impossible to be good.
    replies(1): >>41904234 #
    9. bearjaws ◴[] No.41904189[source]
    Plagiarism detectors aren't 100% accurate either, and we have to use those as well.

    Institutions have to enforce rules around these things; if they don't, within 10 years their degrees will be pointless.

    It's what happens when you believe someone to have cheated that matters. If it's not blatant cheating, then you cannot punish them for it. These tools exist to catch only the worst offenders.

    replies(2): >>41904534 #>>41905462 #
    10. jeroenhd ◴[] No.41904223[source]
    In my opinion, it strongly depends on what Grammarly is being used for. For a physics paper, that's not a huge problem. For an English writing assignment, that's cheating. Banning AI tools like Grammarly for both is probably the best solution as your physics paper now becomes an extra training exercise for your English paper.

    Writing essays isn't just about your ideas. It's also a tool to teach communication skills. The goal of an essay isn't to produce a readable paper, until you start your PhD at least; it's to teach a variety of skills.

    I don't really care about the AI generated spam that fills the corporate world because corporate reports are write-only anyway, but you can't apply what may be tolerated in the professional world to the world of education.

    replies(4): >>41904335 #>>41904981 #>>41905041 #>>41905661 #
    11. bearjaws ◴[] No.41904234{3}[source]
    As someone working in this field, I can tell you it is simply not closer to 0%.

    People keep using these "gotcha" examples and never actually look at the stats for it. I get it, there are some terrible detectors out there, and of course they are the free ones :)

    https://edintegrity.biomedcentral.com/articles/10.1007/s4097...

    GPTZero was correct in most scenarios where they used basic prompts, and only had one false positive.

    We did a comparison of 3,000 hand-reviewed 9th-12th grade assignments and found that GPTZero holds up really well.

    In the same way that plagiarism detectors need a process for review, your educational institution needs the same for AI detection. Students shouldn't be immediately punished, but instead it should be reviewed, and then an appropriate decision made by a person.

    replies(1): >>41904911 #
    12. Spivak ◴[] No.41904335{3}[source]
    > For an English writing assignment, that's cheating

    It's still not cheating. English assignments aren't about the practice of writing English, you stop doing that in primary school. It's analysis of English texts in which people have been using spelling and grammar checkers since their inception. It's not even cheating to have someone proofread and edit your paper, it's usually encouraged, and Grammarly is just a worse-than-human editor.

    13. willy_k ◴[] No.41904534[source]
    Plagiarism checkers are much more interpretable.
    14. Ukv ◴[] No.41904911{4}[source]
    > https://edintegrity.biomedcentral.com/articles/10.1007/s4097...

    > GPTZero was correct in most scenarios where they used basic prompts, and only had one false positive.

    One false positive out of only "five human-written samples", unless I'm misreading.

    Say 50 papers are checked, with 5 being generated by AI. By the rates reported for GPTZero in the paper, 3 AI-generated papers would be correctly flagged and 9 human-written papers would be incorrectly flagged, meaning a flagged paper is only 25% likely to actually be AI-generated.

    Realistically the sample size in the paper is just far too small to make any real conclusion one way or another, but I think people fail to appreciate the difference between false positive rate and false discovery rate.
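    A quick sketch of that base-rate arithmetic, assuming the rates quoted above (3 of 5 AI samples flagged, 1 of 5 human samples flagged; the `flag_counts` helper is illustrative, not from the cited paper):

    ```python
    def flag_counts(total, ai, tpr, fpr):
        """Return (true positives, false positives) as whole papers."""
        human = total - ai
        return round(ai * tpr), round(human * fpr)

    # 50 papers checked, 5 actually AI-generated, with GPTZero's
    # reported 60% true-positive and 20% false-positive rates.
    tp, fp = flag_counts(total=50, ai=5, tpr=0.6, fpr=0.2)

    # Precision (1 - false discovery rate): the share of *flagged*
    # papers that are actually AI-generated.
    precision = tp / (tp + fp)

    print(tp, fp)              # 3 9
    print(f"{precision:.0%}")  # 25%
    ```

    The false positive rate (1 in 5) sounds tolerable, but because honest papers vastly outnumber AI-generated ones, three quarters of the accusations land on innocent students.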

    15. aftbit ◴[] No.41904981{3}[source]
    Say the same thing for automated spell check or the little blue grammar highlight built into Google Docs and I'll buy it.
    16. washadjeffmad ◴[] No.41905041{3}[source]
    I agree, but that needs to be clearly communicated by the faculty in their syllabi, in alignment with college and university understanding. I think it's an under-discussed topic.

    Saying "AI" becomes meaningless if we're all using it to mean different things. If I use computer vision to perform cell counts, or if an ESL student uses DeepL to help translate a difficult-to-express idea, would we be in breach of student conduct?

    The real answer is "ask your professor first", but with how second nature many of these tools have become in P12 education, it may not occur to students that it might be necessary to ask.

    17. gs17 ◴[] No.41905462[source]
    Plagiarism detectors usually tell you what you're accused of ripping off. I remember always seeing it come back telling me how I must have copied my references from other essays on the same subject.
    18. itishappy ◴[] No.41905661{3}[source]
    > For an English writing assignment, that's cheating.

    Whoops, with that little comment I suspect you've invalidated most English papers written in the past 2 decades. Certainly all of mine! Thanks spellcheck.

    replies(1): >>41907029 #
    19. BobaFloutist ◴[] No.41907029{4}[source]
    Grammarly is very different from vanilla spellcheck.
    replies(1): >>41907695 #
    20. itishappy ◴[] No.41907695{5}[source]
    Fair enough. My last exposure to Grammarly was pre-ChatGPT, when it was a lot closer to vanilla spellcheck.

    But I think it's actually not all that different, particularly in the context of "essays teach writing." It used to be human work to analyze sentences for passive voice, remember the difference between there/their/they're, and understand how commas work, but now the computer handles it.

    (Relevant sidenote: Am I using commas correctly here? IDK! I've never fully internalized the rules!)