
395 points | pseudolus | 1 comment
dtnewman:
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very, very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.

sally_glance:
I think this is a structural issue. Universities right now are trying to justify their existence - universities of the past used to be sites of innovation.

Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!

never_inline:
> Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does.

LLMs can't produce intellectual rigour. They routinely get fine details wrong. So using ChatGPT to do your reasoning for you produces inferior results. By normalising non-rigorous yet correct-sounding answers, we drive down expectations.

To take a concrete example: if you tell a student to implement memcpy with ChatGPT, it will just hand back an answer that uses uint64 copying. The student has not thought it through from first principles (copy byte by byte? How do you improve performance? How do you handle alignment?). Trading that insight for immediate gratification will bite later.
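To illustrate, here is a minimal sketch of the "first principles" starting point the comment describes: the naive byte-by-byte copy a student would reason their way up from before considering word-sized chunks and alignment (the function name `my_memcpy` is just for illustration):

```c
#include <stddef.h>

/* Naive byte-by-byte copy: the first-principles baseline.
   A faster version would copy word-sized chunks (e.g. uint64_t) once
   the pointers are suitably aligned, handling the unaligned head and
   tail separately -- exactly the design questions the student skips
   by pasting the assignment into ChatGPT. */
void *my_memcpy(void *dst, const void *src, size_t n) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--) {
        *d++ = *s++;
    }
    return dst;  /* like memcpy, returns the destination pointer */
}
```

Getting from this version to the uint64 one is where the insight lives: why alignment matters, what happens at the boundaries, and why the regions must not overlap.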

It's maybe not a problem for non-STEM fields, where this kind of rigour and insight is not required to excel. But in STEM fields, we write programs and prove theorems for insight, and that insight, and the process of obtaining it, is gone with AI.