
395 points pseudolus | 3 comments
lgessler No.43639035
I'm a professor at an R1 university teaching mostly graduate-level courses with substantive Python programming components.

On the one hand, I've caught some students red-handed (ChatGPT generated their exact solution and they were utterly unable to explain the advanced Python that was in it) and had to award them 0s for assignments, which was heartbreaking. On the other, I was pleasantly surprised to find that most of my students are not using AI to generate their submissions for programming assignments wholesale--or at least, if they are, they're putting in enough work to make it hard for me to tell, which is still something I'd count as work that gets them to think about code.

There is the more difficult matter, however, of using AI to work through small-scale problems, debug, or explain. On the view that this is roughly analogous to using StackOverflow, this semester I tried a generative AI policy with one high-level directive: you may use LLMs to debug or critique your code, but not to write new code. My reasoning was that students are going to use this tech anyway, so I might as well ask them to do it in a way that's as constructive for their learning as possible. (And I explained exactly this motivation when introducing the policy, hoping they would be invested enough in their own learning to hear me.) While I still get code turned in that is "student-grade" enough that I'm fairly sure an LLM couldn't have generated it directly, I do wonder how they really use these models in practice. And even if they followed the policy perfectly, it's unclear to me whether the learning experience was degraded by always having an easy, correct answer to any problem just a browser tab away.

Looking to the future, I admit I'm still a bit of an AI doomer when it comes to what it will do to the median person's cognitive faculties. The most able LLM users engage with them in a way that enhances rather than diminishes their unaided mind, but from what I've seen, the average user tends to outsource thinking to the LLM in order to expend as little mental energy as possible. Will AI be so good in 10 years that most people won't really need to understand code with their unaided mind anymore? Maybe, I don't know. But in the short term I know that skill is very important, and I don't see how students can develop it if they're using LLMs as a constant crutch. I've often wondered whether this is like what happened when writing was introduced: the capacity for memorization diminished once it was no longer necessary to memorize epic poetry and so on.

I typically have term projects as the centerpiece of students' grades in my courses, but next year I think I'm going to start administering in-person midterms, as I fear that students might never internalize the fundamentals otherwise.

replies(1): >>43639937
1. fn-mote No.43639937
> had to award them 0s for assignments, which was heartbreaking

You should feel nothing. They knew they were cheating. They didn't give a crap about you.

Frankly, I would love to see people fail assignments they can't explain, even if they did NOT use "AI" to cheat on them. We don't need more meaningless degrees. Make the grades and the degrees mean something, somehow.

replies(2): >>43640112 >>43642888
3. globnomulous No.43642888
> > had to award them 0s for assignments, which was heartbreaking

> You should feel nothing. They knew they were cheating. They didn't give a crap about you.

Most of us (a) don't feel our students owe us anything personally and (b) want our students to succeed. So it's upsetting to see students pluck the low-hanging, easily picked fruit of cheating via LLMs. If cheating were harder, some of these students wouldn't cheat. Some certainly would. Others would do poorly.

But regardless, failing a student or citing one for plagiarism feels bad, even though basically all of us would agree on the importance of upholding standards and enforcing principles of honesty and integrity.