dtnewman (No.43633873)
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very, very widespread problem. I've talked to hundreds of teachers about this, and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words, and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.
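
To make that concrete, here's a hypothetical sketch (my own example, not from any actual assignment) of the kind of bug I'd lose an hour to: a recursive binary-search-tree insert that drops the recursive call's return value, so nothing below the root ever gets linked. Paste it in and ask "what's wrong with this code?" and an LLM will usually spot it instantly:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct node {
        int key;
        struct node *left, *right;
    } node;

    node *insert(node *root, int key) {
        if (root == NULL) {
            node *n = malloc(sizeof *n);
            n->key = key;
            n->left = n->right = NULL;
            return n;
        }
        if (key < root->key)
            insert(root->left, key);   /* BUG: should be root->left = insert(...) */
        else
            insert(root->right, key);  /* BUG: same, root->right = insert(...) */
        return root;
    }

    int main(void) {
        node *root = NULL;
        root = insert(root, 5);
        root = insert(root, 3);  /* silently lost: never attached to the tree */
        printf("left child is %s\n", root->left ? "present" : "missing");
        return 0;
    }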

bboygravity (No.43635830)
I don't get this reasoning. Without LLMs, I would learn how to write sub-optimal code that is somewhat functional. With LLMs, I instantly see "how it's done" for my exact problem case, which makes me learn way faster. On top of that, it always makes dumb mistakes, which forces you to actually understand what it's spitting out to get it to work properly. Again: that helps with learning.

The fact that you can ask it for a solution tailored to exactly the context you're interested in is amazing, and traditional learning doesn't come close in terms of efficiency, IMO.

dingnuts (No.43635984)
> With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster.

No, you see a plausible set of tokens that appears similar to how it's done, and as a beginner you're not able to tell the difference between a good example and something that is subtly wrong.

So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.
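
To illustrate "subtly wrong" (a made-up example in that spirit, not actual LLM output): this looks like careful, bounds-checked C and passes every quick test with short inputs, but strncpy doesn't null-terminate when the source fills the buffer:

    #include <stdio.h>
    #include <string.h>

    void copy_name(char *dst, size_t dstlen, const char *src) {
        /* Subtle bug: strncpy leaves dst unterminated when src is
           dstlen chars or longer, so the printf below reads past the
           end of the buffer (undefined behavior). */
        strncpy(dst, src, dstlen);
        /* Fix: strncpy(dst, src, dstlen - 1); dst[dstlen - 1] = '\0'; */
    }

    int main(void) {
        char buf[8];
        copy_name(buf, sizeof buf, "a name longer than eight bytes");
        printf("%s\n", buf);
        return 0;
    }

A beginner who internalizes that pattern will happily ship it until a long input finally bites them.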

Spivak (No.43638780)
This makes for a good sound bite, but it's just not true. The use case of "show me the customary solution to <problem>" plays exactly into an LLM's strength as a funny kind of search engine. I used to (and still do) search public code for exactly this, to get a sense of the style and idioms common in a new language/library, and the "plausible set of tokens" is doing exactly that.
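
For a toy example of what "customary" buys you (mine, not anything an LLM actually produced): a newcomer to C might hand-roll a bubble sort, while the idiomatic answer is stdlib's qsort with a comparator:

    #include <stdio.h>
    #include <stdlib.h>

    /* The customary C idiom: let qsort do the sorting and supply
       only the comparison logic. */
    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);  /* avoids the overflow risk of x - y */
    }

    int main(void) {
        int v[] = {42, 7, 19, 3, 23};
        size_t n = sizeof v / sizeof v[0];
        qsort(v, n, sizeof v[0], cmp_int);
        for (size_t i = 0; i < n; i++)
            printf("%d ", v[i]);
        printf("\n");
        return 0;
    }

Seeing that shape once teaches you more about how C programmers actually solve the problem than a working-but-naive version you wrote yourself.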