
323 points timbilt | 12 comments
1. freedomben ◴[] No.42129872[source]
I sit on my local school board and (as everyone knows) AI has been whirling through the school like a tornado. I'm concerned about students using it to cheat, but I'm also pretty concerned about how teachers are using it.

For example, many teachers have fed student essays into ChatGPT, asked "did AI write this?" or "was this plagiarized?" or similar, and fully trusted whatever the AI told them. This has led to some false positives where students were wrongly accused of cheating. Of course a student who would cheat may also lie about cheating, but in a few cases students were able to prove authorship using the revision history feature built into Google Docs.
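(A side note on that Docs-history evidence: the revision timestamps involved can be fetched with the Drive API v3 `revisions.list` call, and the `edit_sessions` helper below is a hypothetical sketch of how one might summarize them. The idea: a wholesale pasted-in essay collapses into a single short editing session, while genuine drafting spreads across many sessions over days.)

```python
from datetime import datetime, timedelta

def edit_sessions(timestamps, gap_minutes=30):
    """Group ISO-format revision timestamps into editing sessions.

    Consecutive revisions less than `gap_minutes` apart are treated
    as one sitting. Returns the number of distinct sessions.
    """
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    if not times:
        return 0
    sessions = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev > timedelta(minutes=gap_minutes):
            sessions += 1
    return sessions

# Hypothetical revision times spread across three evenings:
stamps = [
    "2024-11-10T18:05:00", "2024-11-10T18:25:00",
    "2024-11-11T17:55:00",
    "2024-11-12T19:10:00",
]
print(edit_sessions(stamps))  # → 3
```

A single-session result isn't proof of cheating on its own (some students draft offline and paste in), but many sessions over several days is strong evidence of genuine authorship.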

Overall though I'm not super worried, because I do think most people are learning to be skeptical of LLMs. There's still a little too much faith in them, but I think we're heading in the right direction. It's a learning process for everyone involved.

replies(6): >>42129908 #>>42131460 #>>42132586 #>>42133211 #>>42133346 #>>42134425 #
2. baxtr ◴[] No.42129908[source]
My takeaway: a Chrome plugin that writes LLM-generated text into a Google Doc over the course of a couple of days is a great product idea!
replies(2): >>42130292 #>>42132140 #
3. magicpin ◴[] No.42130292[source]
It would need to revise it, move text around, write and delete entire sections.
replies(1): >>42130544 #
4. baxtr ◴[] No.42130544{3}[source]
Yes! Great feature requests, thanks
5. alwayslikethis ◴[] No.42131460[source]
If there's anything more unethical than AI plagiarism, it's using AI to condemn people for it. I'm afraid that would further devalue actually writing your own stuff, as opposed to iterating with ChatGPT to produce the least AI-sounding writing out of fear of false accusations.
6. dangerwill ◴[] No.42132140[source]
The only use of such a product would be fraudulent. Go ahead, make money, but know you would be a scammer, or at best facilitating scammers.
7. wdutch ◴[] No.42132586[source]
I imagine maths teachers had a similar dilemma when pocket calculators became widely available.

Now, in the UK students sit two different exams: one where calculators are forbidden and one where calculators are permitted (and encouraged). The problems for the calculator exam are chosen so that the candidate must do a lot of problem solving that isn't just computation. Furthermore, putting a problem into a calculator and then double-checking the answer is a skill in itself that is taught.

I think the same sort of solution will be needed across the board now - where students learn to think for themselves without the technology but also learn to correctly use the technology to solve the right kinds of challenges and have the skills to check the answers.

People on HN often talk about AI detection or putting invisible text in the instructions to detect copy-and-pasting. I think this is a fundamentally wrong approach. We need to work with, not against, the technology - the genie is out of the bottle now.

As an example of a non-ChatGPT way to evaluate students, teachers can choose topics ChatGPT fails at. I do a lot of writing on niche topics, and there are plenty of topics out there where ChatGPT has no clue and spits out pure fabrications. Teachers can play around to find a topic where ChatGPT performs poorly.

replies(1): >>42136290 #
8. dtnewman ◴[] No.42133211[source]
Nice! You should check out a free Chrome plugin I wrote for this called Revision History. It's organically grown to 140k users, so the problem obviously resonates (revisionhistory.com).
9. cube00 ◴[] No.42133346[source]
> Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.

It's scary to see the reversal of the burden of proof becoming more accepted.

10. voiper1 ◴[] No.42134425[source]
:sigh:

With all the concern over AI, it's being used _against recommendations_ to detect AI usage? [0][1]

So while the concern about AI use is well founded, teachers so misunderstand what it is and the tech around it that they are using AI in areas where it's publicly acknowledged it doesn't work. That detracts from any credibility the teachers have about AI usage!

[0] https://openai.com/index/new-ai-classifier-for-indicating-ai... (OpenAI pulled their AI classifier)
[1] https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-wo...

replies(1): >>42136362 #
11. freedomben ◴[] No.42136290[source]
Thank you, you make an excellent point! I very much agree, and I think the idea of two exams is interesting. The calculator analogy feels apt and is well worth a try!
12. freedomben ◴[] No.42136362[source]
Oh absolutely, I've spent hours explaining AI to teachers and most of them do seem to understand, but it takes some high-level elaboration about how it works before it "clicks." Prior to that, they are just humans like the rest of us. They don't read fine print or blogs, they just poke at the tool and when it confidently gives them answers, they tend to anthropomorphize the machine and believe what it is saying. It certainly doesn't help that we've trained generations of people to believe that the computer is always right.

> That detracts from any credibility the teachers have about AI usage!

I love teachers, but they shouldn't have any credibility about AI usage in the first place unless they have gained that in the same way the rest of us do. As authority figures, IMHO they should be held to an even higher standard than the average person because decisions they make have an out-sized impact on another person.