
427 points JumpCrisscross | 2 comments
aftbit No.41905060
I haven't seen this discussed as much as I expected: is this even possible? Can a tool be built that determines, in general, whether an LLM was used to generate text? Can even a human do it in every case?

_Maybe_ you can detect default ChatGPT-3.5 responses. But if a student does a bit of mucking around with fine-tunes of a local Llama model, or uses a less common public model, can you still tell?
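
For what it's worth, the text detectors that do exist mostly run some variant of perplexity scoring: feed the text to a reference language model and flag anything the model finds suspiciously predictable. Here is a minimal sketch of the idea in Python; the reference model ("gpt2") and the threshold are my own illustrative assumptions, not anything a real product ships:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Reference LM; "gpt2" is an illustrative choice, not a vetted one.
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # exp of the mean token cross-entropy under the reference model
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return float(torch.exp(loss))

    def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
        # Low perplexity = "too predictable", the usual machine-text signal.
        # The threshold is arbitrary here; real tools calibrate on corpora.
        return perplexity(text) < threshold

The failure mode is baked in: the score only means "predictable under this particular reference model", so a fine-tuned local model, an uncommon model, or even light human editing shifts the distribution out from under the threshold.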

I have a similar question about AI art detectors. Can they actually work? Maybe for Midjourney or whatever, but the space of outputs from hand-drawn (on a computer) art and from brush-stroke-generating models like NeuBE must overlap enough that you could never be sure in a substantial fraction of cases.

replies(1): >>41905129 #
1. greenavocado No.41905129
The only way to be sure a student isn't cheating is to search them before they enter a secure room containing nothing besides the student, the proctor, some paper, maybe some furniture, and proctor-provided pens or pencils, and have them take an oral or written exam there. In this age, you can only truly judge a student's mind by observing their synthesis skills in person.
replies(1): >>41909414 #
2. aftbit No.41909414
I agree, but I'll argue that this is not responsive to my question, nor a reasonable goal in general. You cannot be _sure_ that a student isn't cheating without taking draconian measures, but you can likely catch a lot of lazy cheaters by applying imperfect methods. The problem comes when the methods are treated as infallible and there is no appeal process.
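
To make "imperfect" concrete, a back-of-the-envelope in Python; every rate below is an assumption for illustration, not a measured figure:

    # Assumed, illustrative inputs.
    essays = 1000      # essays screened per term
    cheat_rate = 0.10  # fraction actually LLM-written
    tpr = 0.80         # detector true-positive rate
    fpr = 0.02         # detector false-positive rate

    caught = essays * cheat_rate * tpr                 # 80 cheaters flagged
    falsely_accused = essays * (1 - cheat_rate) * fpr  # 18 honest students flagged
    precision = caught / (caught + falsely_accused)    # ~0.82

    print(f"falsely accused: {falsely_accused:.0f} of {essays} students")
    print(f"P(actually cheated | flagged) = {precision:.2f}")

Even with a 2% false-positive rate, roughly one in five flagged students in this scenario is innocent, which is exactly why treating the flag as infallible, with no appeal, is the problem.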