
395 points pseudolus | 1 comment
dtnewman No.43633873
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggle was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and ask "hey, what's wrong with this code?" and it'll often spot the bug (never mind the fact that I can just ask ChatGPT "create a B-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.
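As context for the "create a B-tree in C" remark above, here is a minimal sketch, not taken from the thread, of the kind of exercise being described: a textbook (CLRS-style) B-tree with search and insert only. The minimum degree T = 2 and all function names are illustrative choices, not anything from the original comment or a specific course assignment.

```c
/* Illustrative sketch only: a CLRS-style B-tree with search and insert.
   T, the struct layout, and the function names are assumptions made for
   this example; deletion/rebalancing are omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define T 2                        /* minimum degree: nodes hold T-1 .. 2T-1 keys */

typedef struct Node {
    int nkeys;                     /* number of keys currently stored */
    int keys[2 * T - 1];
    struct Node *child[2 * T];
    bool leaf;
} Node;

static Node *node_new(bool leaf) {
    Node *n = calloc(1, sizeof *n); /* zeroed: nkeys = 0, children = NULL */
    n->leaf = leaf;
    return n;
}

/* Search for k in the subtree rooted at x; returns the holding node or NULL. */
static Node *btree_search(Node *x, int k) {
    int i = 0;
    while (i < x->nkeys && k > x->keys[i]) i++;
    if (i < x->nkeys && k == x->keys[i]) return x;
    return x->leaf ? NULL : btree_search(x->child[i], k);
}

/* Split the full child x->child[i], promoting its median key into x. */
static void split_child(Node *x, int i) {
    Node *y = x->child[i];
    Node *z = node_new(y->leaf);
    z->nkeys = T - 1;
    for (int j = 0; j < T - 1; j++) z->keys[j] = y->keys[j + T];
    if (!y->leaf)
        for (int j = 0; j < T; j++) z->child[j] = y->child[j + T];
    y->nkeys = T - 1;
    for (int j = x->nkeys; j > i; j--) x->child[j + 1] = x->child[j];
    x->child[i + 1] = z;
    for (int j = x->nkeys - 1; j >= i; j--) x->keys[j + 1] = x->keys[j];
    x->keys[i] = y->keys[T - 1];
    x->nkeys++;
}

/* Insert k into the subtree rooted at a node that is known to be non-full. */
static void insert_nonfull(Node *x, int k) {
    int i = x->nkeys - 1;
    if (x->leaf) {
        while (i >= 0 && k < x->keys[i]) { x->keys[i + 1] = x->keys[i]; i--; }
        x->keys[i + 1] = k;
        x->nkeys++;
    } else {
        while (i >= 0 && k < x->keys[i]) i--;
        i++;
        if (x->child[i]->nkeys == 2 * T - 1) {
            split_child(x, i);
            if (k > x->keys[i]) i++;
        }
        insert_nonfull(x->child[i], k);
    }
}

/* Insert k, growing a new root first if the current root is full. */
static Node *btree_insert(Node *root, int k) {
    if (root->nkeys == 2 * T - 1) {
        Node *s = node_new(false);
        s->child[0] = root;
        split_child(s, 0);
        insert_nonfull(s, k);
        return s;
    }
    insert_nonfull(root, k);
    return root;
}

int main(void) {
    Node *root = node_new(true);
    int vals[] = {10, 20, 5, 6, 12, 30, 7, 17};
    for (size_t i = 0; i < sizeof vals / sizeof *vals; i++)
        root = btree_insert(root, vals[i]);
    printf("12 %s\n", btree_search(root, 12) ? "found" : "missing");
    printf("99 %s\n", btree_search(root, 99) ? "found" : "missing");
    return 0;
}
```

Deletion and rebalancing, usually the harder half of such an assignment, are left out to keep the sketch short.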

replies(34)
0xffff2 No.43634842
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.

replies(5)
currymj No.43644616
Since late 2024/early 2025 it is the case, especially with a reasoning model like Claude 3.7 Sonnet, DeepSeek-R1, o3, or Gemini 2.5, and especially if you upload the textbook, slides, etc. alongside the homework to be cheated on.

Most normal-difficulty undergraduate assignments are now reliably doable by AI with little to no human oversight. This includes both programming and mathematical problem sets.

For harder problem sets that require some insight, or for very unstructured larger-scale programming projects, it doesn't work as reliably.

But easier homework assignments serve a valid purpose, checking understanding, and now they are no longer viable.