←back to thread

323 points timbilt | 1 comments | | HN request time: 0.209s | source
Show context
freedomben ◴[] No.42129872[source]
I sit on my local school board and (as everyone knows) AI has been whirling through the school like a tornado. I'm concerned about student using it to cheat, but I'm also pretty concerned about how teachers are using it.

For example, many teachers have fed student essays into ChatGPT and asked "did AI write this?" or "was this plagiarized" or similar, and fully trusting whatever the AI tells them. This has led to some false positives where students were wrongly accused of cheating. Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.

Overall though I'm not super worried because I do think most people are learning to be skeptical of LLMs. There's still a little too much faith in them, but I think we're heading the right direction. It's a learning process for everyone involved.

replies(6): >>42129908 #>>42131460 #>>42132586 #>>42133211 #>>42133346 #>>42134425 #
wdutch ◴[] No.42132586[source]
I imagine maths teachers had a similar dilemma when pocket calculators became widely available.

Now, in the UK students sit 2 different exams: one where calculators are forbidden and one where calculators are permitted (and encouraged). The problems for the calculator exam are chosen so that the candidate must do a lot of problem solving that isn't just computation. Furthermore, putting a problem into a calculator and then double checking the answer is a skill in itself that is taught.

I think the same sort of solution will be needed across the board now - where students learn to think for themselves without the technology but also learn to correctly use the technology to solve the right kinds of challenges and have the skills to check the answers.

People on HN often talk about ai detection or putting invisible text in the instructions to detect copy and pasting. I think this is a fundamentally wrong approach. We need to work with, not against the technology - the genie is out of the bottle now.

As an example of a non-chatgpt way to evaluate students, teachers can choose topics chatgpt fails at. I do a lot of writing on niche topics and there are plenty of topics out there where chatgpt has no clue and spits out pure fabrications. Teachers can play around to find a topic where chatgpt performs poorly.

replies(1): >>42136290 #
1. freedomben ◴[] No.42136290[source]
Thank you, you make an excellent point! I very much agree, and I think the idea of two exams is very interesting. The analogy to calculators feels very good, and is very much worth a try!