
395 points by pseudolus | 1 comment
dtnewman No.43633873
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
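
To make the point concrete, here is a hypothetical illustration (not from the thread) of the kind of CS-class data-structure exercise the commenter describes debugging by hand: a minimal binary search tree in Python. The names `Node`, `insert`, and `inorder` are my own, not anything the commenter wrote.

```python
# A minimal binary search tree -- the sort of exercise where students
# traditionally struggled for an hour before spotting their own bug,
# and which an LLM will now often debug (or write) instantly.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key into the BST rooted at root; return the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicate keys are ignored

def inorder(root):
    """Yield keys in sorted order via an in-order traversal."""
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)

root = None
for k in [5, 2, 8, 1, 3]:
    root = insert(root, k)
print(list(inorder(root)))  # prints [1, 2, 3, 5, 8]
```

The classic student bugs here (forgetting to reassign `root.left`/`root.right`, or mishandling the empty-tree case) are exactly the kind of thing one used to find only after long struggle.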

vunderba No.43634053
I've been calling this out since the rise of ChatGPT:

"The real danger lies in their seductive nature: how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at a nearly instinctual level, you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."

nonethewiser No.43634167
Somewhat agree.

I agree in principle - the process of problem solving is the important part.

However, I think LLMs make you do more of this precisely because of what you can offload to them. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently small problems you can use the LLM.

If we're worried about abstract problem solving skills, that doesn't really go away with better tools. It goes away when we aren't the ones using the tools.

Peritract No.43634424
You can offload the simpler things, but struggling with the simpler things is how you build the skills to handle the more complex ones that you can't hand off.

If the simpler thing in question is a task you've already mastered, then you're not losing much by asking an LLM to help you with it. If it's not trivial to you, though, then you're missing an opportunity to learn.

nonethewiser No.43635993
If you haven't mastered it yet, then it's not a simple thing.

Grandma will not be able to implement a simple add function in Python by asking ChatGPT and copy-pasting.