
323 points timbilt | 1 comment | source
freedomben ◴[] No.42129872[source]
I sit on my local school board and (as everyone knows) AI has been whirling through the school like a tornado. I'm concerned about students using it to cheat, but I'm also pretty concerned about how teachers are using it.

For example, many teachers have fed student essays into ChatGPT and asked "did AI write this?" or "was this plagiarized?" or similar, and have fully trusted whatever the AI told them. This has led to false positives where students were wrongly accused of cheating. Of course a student who would cheat may also lie about cheating, but in a few cases students were able to prove authorship using the version history built into Google Docs.

Overall though I'm not super worried, because I do think most people are learning to be skeptical of LLMs. There's still a little too much faith in them, but I think we're heading in the right direction. It's a learning process for everyone involved.

replies(6): >>42129908 #>>42131460 #>>42132586 #>>42133211 #>>42133346 #>>42134425 #
voiper1 ◴[] No.42134425[source]
:sigh:

With all the concern over AI, it's being used _against recommendations_ to detect AI usage? [0][1]

So while the concern over AI usage is well founded, teachers misunderstand the technology so badly that they are using AI in areas where it's publicly acknowledged not to work. That detracts from any credibility the teachers have about AI usage!

[0] https://openai.com/index/new-ai-classifier-for-indicating-ai... (OpenAI pulled their AI classifier)

[1] https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-wo...

replies(1): >>42136362 #
1. freedomben ◴[] No.42136362[source]
Oh absolutely. I've spent hours explaining AI to teachers, and most of them do seem to understand, but it takes some high-level elaboration about how it works before it "clicks." Prior to that, they are just humans like the rest of us: they don't read fine print or blogs, they just poke at the tool, and when it confidently gives them answers, they tend to anthropomorphize the machine and believe what it is saying. It certainly doesn't help that we've trained generations of people to believe that the computer is always right.

> That detracts from any credibility the teachers have about AI usage!

I love teachers, but they shouldn't have any credibility about AI usage in the first place unless they've earned it the same way the rest of us do. As authority figures, IMHO they should be held to an even higher standard than the average person, because the decisions they make have an outsized impact on another person.