395 points pseudolus | 1 comments
dtnewman ◴[] No.43633873[source]
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
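For a flavor of the kind of bug that used to cost an evening, here is a hypothetical sketch (in Python rather than the C of the quoted prompt): a recursive BST insert where forgetting to assign the recursive result back to the child link silently discards every insertion after the root. An LLM will usually spot this in seconds; finding it yourself was where the learning happened.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, returning the (possibly new) subtree root.

    The classic bug: calling insert(root.left, key) without assigning
    the result back (root.left = ...). The new node gets created but
    is never linked in, so the tree silently stays a single root node.
    """
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)    # the assignment students forget
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Standard iterative BST lookup."""
    while root is not None:
        if key < root.key:
            root = root.left
        elif key > root.key:
            root = root.right
        else:
            return True
    return False
```

The buggy variant type-checks, runs without errors, and even "works" for the first key, which is exactly why it can absorb an hour of staring.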

enjo ◴[] No.43640528[source]
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers but because she would rotate the underlying numbers they would always be wrong in a provably cheating way. This made up 5-8% of her students.
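A minimal sketch of that rotation scheme (hypothetical names and formula, not the professor's actual setup): each student gets their own input numbers, so an answer computed from the textbook's default numbers is provably copied rather than merely wrong.

```python
import random

def depreciation_answer(cost, salvage, years):
    """Straight-line annual depreciation -- a stand-in for any
    formula-based homework question."""
    return (cost - salvage) / years

# The numbers printed in the textbook (what Chegg-style answers use).
DEFAULT = (10000, 1000, 5)

def student_params(student_id):
    """Deterministic per-student rotation of the question's numbers."""
    rng = random.Random(student_id)
    cost = rng.randrange(8000, 20000, 500)
    salvage = rng.randrange(500, 2000, 100)
    years = rng.choice([4, 5, 8, 10])
    return cost, salvage, years

def grade(student_id, submitted):
    """Classify a submitted answer: 'correct' against the student's own
    numbers, 'copied' if it matches the textbook-default answer instead,
    otherwise 'wrong'."""
    own = depreciation_answer(*student_params(student_id))
    if abs(submitted - own) < 1e-6:
        return "correct"
    if abs(submitted - depreciation_answer(*DEFAULT)) < 1e-6:
        return "copied"
    return "wrong"
```

The "provably cheating" part is the `copied` branch: a student who genuinely worked the problem with their own numbers essentially never lands exactly on the default-numbers answer by accident.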

Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.

samuel ◴[] No.43641433[source]
I guess these students don't pass, do they? I don't think that's a particularly hard concern. It will take a bit longer, but they'll learn the lesson (or drop out).

I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will go under the radar, unnoticed, and the question is: how bad is that, actually? I would say very bad, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.

Stubbs ◴[] No.43641559[source]
As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator. It's the same problem, but there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.

I don't see the former as that much of a problem.

shinycode ◴[] No.43641924[source]
Well, the scope is much broader for an LLM than for a calculator. Why should I hire you if an agent can do the job? With LLMs, every job becomes a "calculator" and can be replaced. Spotify's CEO stated on X that before asking for more headcount, teams have to justify not being able to do the job with an agent. So for all the students who let the LLM do their assignments and learn basically nothing, what's their value to a company that might hire them? The company is, or soon will be, just using the agent as well.
jpc0 ◴[] No.43641992[source]
> Why should I hire you if an agent can do it ?

You as the employer are liable. A human has real reasoning abilities and real fears about messing up; the likelihood of them doing something absurd, like telling a customer a product is 70% off, and keeping their job is effectively nil. What are you going to do with the LLM, fire it?

Data scientists, and people deeply familiar with LLMs to the point that they could fine-tune a model for your use case, cost significantly more than a low-skilled employee, and depending on liability, even just running the LLM may not be cheaper.

Take an accounting firm (one example from above): as far as I know, in most jurisdictions the accountant doing the work is personally liable. Who would be liable in the case of an LLM?

There is absolutely a market for LLM-augmented workforces, but I don't see any viable future, even with SOTA models right now, for flat-out replacing a workforce with them.

shinycode ◴[] No.43642180[source]
I fully agree with you about liability. I was advocating for the other point of view.

Some people argue that it doesn't matter if there are mistakes (it depends which, actually) and that with time it will cost nothing.

I argue that if we give up learning and let the LLM do the assignments, then what is the extent of my knowledge, and what is my value to be hired in the first place?

We hired a developer and he did everything with ChatGPT: all the code and documentation he wrote. At first it was all bad, because out of the infinity of possible answers, ChatGPT doesn't pinpoint the best one in every case. But does he have enough knowledge to understand that what he did was bad? We need people with experience who have confronted hard problems themselves and found their way out. How can we confront and critique an LLM's answer otherwise?

I feel students' value is diluted, leaving them at the mercy of the companies providing the LLMs, and in the process we might lose some critical knowledge and critical thinking on the students' part.

jpc0 ◴[] No.43642562[source]
I agree entirely with your take regarding education. I feel like there is a place where LLMs are useful without impacting learning, but it's definitely not in the "discovery" phase of learning.

However, I really don't need to implement some weird algorithm myself every time (ideally I am using a well-tested library). The point is that you learn so that you are able to, and also able to modify or compose the algorithm in ways the LLM couldn't easily do.