
395 points | pseudolus | 1 comment
dtnewman No.43633873
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest: it's a very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words, and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot the problem (never mind that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.

hobo_in_library No.43634327
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.

So if you use them at that level, you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then, later in life, when you hit problem domains the LLM wasn't trained on, you won't have the thinking patterns needed to persist and solve those problems.

Is that necessarily a bad thing? It's mixed:

- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.

- For more senior roles that are intrinsically about solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people ready to take on that role.

It's like taking the college weed-out classes and shifting them to people in the middle of their career.

Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.

Businesses will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past.