461 points by JumpCrisscross | 1 comment

greatartiste ◴[] No.41901335[source]
For a human who deals with student work or reads job applications, spotting AI-generated work quickly becomes trivially easy. The text tends to follow the same general framework (although the words are swapped around), and we also see what I call 'word of the week', where whichever 'AI' engine gets hung up on a particular English word, often an unusual one, and uses it at every opportunity. It isn't long before you realise that the adage that this is just autocomplete on steroids is true.

However, programming a computer to do this isn't easy. In a previous job I dealt with plagiarism detectors and soon realised what garbage they were (and also how easily fooled they are, but that is another story). The teaching staff came to the same conclusion, so if a student accused of plagiarism decided to argue back, the accusation would be quietly dropped.

replies(14): >>41901440 #>>41901484 #>>41901662 #>>41901851 #>>41901926 #>>41901937 #>>41902038 #>>41902121 #>>41902132 #>>41902248 #>>41902627 #>>41902658 #>>41903988 #>>41906183 #
acchow ◴[] No.41901484[source]
> For a human who deals with student work or reads job applications, spotting AI-generated work quickly becomes trivially easy. The text tends to follow the same general framework (although the words are swapped around), and we also see what I call 'word of the week'

Easy to catch the people who aren't making the slightest effort to avoid getting caught, right? I could instead feed a corpus of my own writing to ChatGPT and ask it to write in my style.
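
For illustration, a minimal sketch of that approach, assuming the OpenAI Python SDK; the model name, file name, and prompt wording are placeholders, not something I've verified against any detector:

    # Few-shot "write in my style": paste your own writing into the prompt.
    # Assumes the OpenAI Python SDK; model and file names are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("my_writing_samples.txt") as f:   # hypothetical corpus of your own essays
        my_writing = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o",                          # placeholder model name
        messages=[
            {"role": "system",
             "content": "Imitate the writing style of the samples the user provides."},
            {"role": "user",
             "content": f"Samples of my writing:\n{my_writing}\n\n"
                        "Now write a 500-word essay on the assigned topic in the same style."},
        ],
    )
    print(resp.choices[0].message.content)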

replies(1): >>41901583 #
hau ◴[] No.41901583[source]
I don't believe detection is possible at all once any effort is made beyond prompting a chat-like interface to "generate X". Given a hand-crafted corpus of text, even current LLMs could produce perfect style transfer for a generated continuation. If someone believes it's trivially easy to detect, then they absolutely have no idea what they are dealing with.

I assume most people would make the least amount of effort and simply prompt the chat interface to produce some text, and such text is rather detectable. I would like to see some experiments even for this type of detection, though.

replies(1): >>41901673 #
hnlmorg ◴[] No.41901673[source]
Are you then plagiarising if the LLM is just regurgitating stuff you’d personally written?

The point of these detectors is to spot stuff the students didn’t research and write themselves. But if the corpus is your own written material then you’ve already done the work yourself.

replies(2): >>41901696 #>>41901754 #
throwaway290 ◴[] No.41901696[source]
An LLM is just regurgitating stuff as a matter of principle. You can request someone else's style. People who are easy to detect simply don't do that. But they will learn quickly.
replies(2): >>41902120 #>>41903123 #
hnlmorg ◴[] No.41903123[source]
I’ve found LLMs to be relatively poor at writing in someone else’s style beyond superficial / comical styles like “pirate” or “Shakespeare”.

To get an LLM to generate content in your own writing style, there’s going to be no substitute for training it on your own corpus. By which point you might as well do the work yourself.

The whole point of cheating is to avoid doing the work. Building your own corpus requires doing that work.

replies(1): >>41903410 #
throwaway290 ◴[] No.41903410[source]
I meant you don't need to feed it your corpus if it's good enough at mimicking styles. Just ask it to mimic someone else. I don't mean a novelty style like pirate or Shakespeare. Mimic "a student with average ability". Then ask it to ramp up the authenticity. Or even use some model or service with this built in, so you don't even need to write any prompts. Zero effort.
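
As a rough illustration of how little effort that is, the whole "technique" fits in a couple of prompt strings (the wording here is made up, purely illustrative):

    # Zero-effort approach: no personal corpus, just a persona prompt plus a follow-up.
    # Prompt wording is illustrative only.
    FIRST_PROMPT = ("Write a 500-word essay on the assigned topic in the voice of a "
                    "student of average ability: plain vocabulary, a few minor slips.")
    FOLLOW_UP = "Make it read more authentic and a bit less polished."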

You're saying it's not good enough at mimicking styles; others are saying it's good enough. I think if it's not good enough today, it'll be good enough tomorrow. Are you betting on it not becoming good enough?

replies(1): >>41903656 #
hnlmorg ◴[] No.41903656[source]
I’m betting on it not becoming good enough at mimicking a specific student’s style without having access to their specific work.

Teachers will notice if a student’s writing style shifts in one piece compared to another.

Nobody disputes that you can get LLMs to mimic other people. However, an LLM cannot mimic a specific style it hasn’t been trained on. And very few people who are going to cheat are going to take the time to train an LLM on their writing style, since the entire point of plagiarism is to avoid doing work.

replies(1): >>41904878 #
throwaway290 ◴[] No.41904878[source]
How would the teacher know what the student's style is if the student always uses the LLM? Also, do you expect that a student's style is fixed forever, or that teachers are all so invested that they can really tell when a student is trying something new vs using an LLM that was trained to output writing in the style of an average student?

Imagine the teacher saying "this is not your style, it's too good" to a student who genuinely tried. That would kill any motivation to do anything but cheat for the rest of their life.

replies(1): >>41905280 #
hnlmorg ◴[] No.41905280[source]
> How would the teacher know what the student's style is if the student always uses the LLM?

If the student always uses LLMs then it would be pretty obvious from the fact that they’re failing the course in all bar the written assessments (i.e. the stuff they can cheat on).

> Also, do you expect that a student's style is fixed forever

Of course not. But people’s styles don’t change dramatically on one paper and reset back afterwards.

> teachers are all so invested that they can really tell when a student is trying something new vs using an LLM that was trained to output writing in the style of an average student?

Depends on the size of the classes. When I was at college, I know that teachers did check for changes in writing style. I know this because one of the kids in my class was questioned about changes in his writing style.

With time, I’m sure anti-cheat software will also check against previous work by the student to look for changes in style.
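
For what it's worth, here's a crude sketch of the kind of stylometric check such software might run, comparing a new submission against a student's earlier work on a few surface features (a toy illustration, not how any real product works):

    # Toy stylometry: compare a new essay to a student's past essays on a few
    # surface features. Real tools would use far richer features and baselines.
    import re
    import statistics

    def features(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        return {
            "avg_sentence_len": len(words) / max(len(sentences), 1),
            "avg_word_len": statistics.mean(len(w) for w in words) if words else 0.0,
            "commas_per_word": text.count(",") / max(len(words), 1),
        }

    def style_shift(new_text, past_texts):
        # Mean absolute difference from the student's own historical averages.
        past = [features(t) for t in past_texts]
        new = features(new_text)
        return sum(
            abs(new[k] - statistics.mean(p[k] for p in past)) for k in new
        ) / len(new)

    # Usage: flag the essay for human review if the shift is unusually large.
    # score = style_shift(this_essay, previous_essays)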

However this was never my point. My point was that cheaters wouldn’t bother training on their own corpus. You keep pushing the conversation away from that.

> Imagine the teacher saying "this is not your style, it's too good" to a student who genuinely tried. That would kill any motivation to do anything but cheat for the rest of their life.

That’s how literally no good teacher would ever approach the subject. Instead they’d talk about how good the paper was and ask about where the inspiration came from.

replies(2): >>41906979 #>>41911940 #
throwaway290 ◴[] No.41911940[source]
> pretty obvious from the fact that they’re failing the course in all bar the written assessments (i.e. the stuff they can cheat on).

performing badly under pressure is not a thing in your world

> My point was that cheaters wouldn’t bother training on their own corpus. You keep pushing the conversation away from that.

My point was cheaters don't need to train on their corpus. That's why it's zero effort. You keep trying to wave that away

> That’s how literally no good teacher would ever approach the subject.

Now we only need to eliminate bad teachers

replies(1): >>41913974 #
hnlmorg ◴[] No.41913974[source]
>performing badly under pressure is not a thing in your world

No need to be rude.

Pressure presents different characteristics. Plus, lecturers would be working with failing students, so they would understand the difference between pressure and cheating.

> My point was cheaters don't need to train on their corpus. That's why it's zero effort. You keep trying to wave that away

My entire point was that most cheats wouldn't bother training an LLM on their own corpus!

With the greatest of respect, have you actually read my comments?

> Now we only need to eliminate bad teachers

Well that's a whole other discussion :)

replies(1): >>41923623 #
throwaway290 ◴[] No.41923623[source]
> My entire point was that most cheats wouldn't bother training an LLM on their own corpus!

Good, because with most normal teachers they don't need a custom corpus to cheat with LLMs.

And if a teacher reduced your grade claiming you used an LLM because your style doesn't match, you just report them for it and say you were trying a new style (the teacher would probably be wrong 50% of the time anyway).

replies(1): >>41929099 #
hnlmorg ◴[] No.41929099[source]
> Good, because with most normal teachers they don't need a custom corpus to cheat with LLMs.

I think you're underestimating the capabilities of normal teachers. And I say this as someone whose family is largely made up of teachers.

Also this topic was about using LLMs to spot LLMs. Not teachers spotting LLMs.

> And if a teacher reduced your grade claiming you used an LLM because your style doesn't match, you just report them for it and say you were trying a new style (the teacher would probably be wrong 50% of the time anyway).

You're drifting off topic again. I'm not going to discuss handling false positives because that's going to come down to the policies of each institution.