395 points | pseudolus
dtnewman No.43633873
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggle was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.
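To make that concrete, here's a minimal sketch (a plain binary search tree rather than a full b-tree, and entirely invented for illustration) of the kind of subtle bug that used to cost me an hour, and that an LLM will typically spot instantly: the recursive insert allocates the new node but never links it back into the tree.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative only: a BST insert where the returned subtree is discarded. */
    typedef struct node {
        int key;
        struct node *left, *right;
    } node;

    node *insert(node *root, int key) {
        if (root == NULL) {
            node *n = malloc(sizeof *n);
            n->key = key;
            n->left = n->right = NULL;
            return n;
        }
        if (key < root->key)
            insert(root->left, key);   /* BUG: should be root->left = insert(...) */
        else if (key > root->key)
            insert(root->right, key);  /* BUG: should be root->right = insert(...) */
        return root;
    }

    int main(void) {
        node *root = insert(NULL, 2);
        insert(root, 1);
        insert(root, 3);
        /* Both children print as null: the new nodes were never attached. */
        printf("left=%p right=%p\n", (void *)root->left, (void *)root->right);
        return 0;
    }

Paste that in and ask "what's wrong with this code?" and the dropped assignments get flagged immediately. No hour of struggle, but no epiphany either.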

enjo No.43640528
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she rotated the underlying numbers, their answers were always wrong in a way that proved copying. This made up 5-8% of her students.

Now she receives a parade of absolutely insane answers from a much larger proportion of her students (she is working on some research around this, but it's definitely more than 30%). When she asks students to recreate how they got to these wild answers, they are never able to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
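For the curious, a minimal sketch of that number-rotation idea; the straight-line-depreciation formula, the offsets, and the student IDs are all invented for illustration, not her actual scheme:

    #include <stdio.h>

    /* Straight-line depreciation: (cost - salvage) / useful life in years. */
    double annual_depreciation(double cost, double salvage, int years) {
        return (cost - salvage) / years;
    }

    int main(void) {
        const double base_cost = 50000.0, base_salvage = 5000.0;
        const int years = 5;

        /* Each student gets a deterministic variant of the same problem. */
        for (int student_id = 1; student_id <= 3; student_id++) {
            double cost = base_cost + 1000.0 * student_id;
            double salvage = base_salvage + 250.0 * student_id;
            printf("student %d: expected answer %.2f\n",
                   student_id, annual_depreciation(cost, salvage, years));
        }

        /* A submitted answer of (50000 - 5000) / 5 = 9000.00 matches the
           unrotated textbook numbers but no assigned variant, so it is
           provably copied rather than worked from the student's inputs. */
        return 0;
    }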

samuel No.43641433
I guess these students don't pass, do they? I don't think that's a particularly hard concern: it may take them a bit longer, but they will learn the lesson (or drop out).

I'm more worried about those who learn to solve problems with the help of an LLM but can't do anything without one. They will fly under the radar, unnoticed, and the question is: how bad is that, actually? I would say very bad, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.

Stubbs No.43641559
As someone already said, parents used to worry that kids wouldn't be able to solve maths problems without a calculator. It's the same problem, but there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.

I don't see the former as that much of a problem.

9rx No.43641645
> there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.

If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different from a calculator, except with exponentially more built-in functions.

But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem, anything else involved along the way toward finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem in the first place. So, for either statement to have any practical meaning, the two must be interpreted as equivalent.

kevindamm No.43654818
There is a difference between thinking about the context of a problem and "critical thinking" about the problem or its possible solutions.

There is a measurable decrease in critical thinking skills when people consistently offload thinking about a problem to an LLM. That is the primary difference between solving problems with an LLM and having an LLM solve them for you, and it is cause for concern.

Two studies on the impact of LLMs and generative AI on critical thinking:

https://www.mdpi.com/2075-4698/15/1/6

https://slejournal.springeropen.com/articles/10.1186/s40561-...