
395 points | pseudolus | 1 comment
moojacob ◴[] No.43634527[source]
How can I, as a student, avoid hindering my learning with language models?

I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.

In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple-choice questions on quizzes taken on the computer. The professors somehow found out that students would close their laptops without submitting, go out into the hallway, and use an LM on their phone to answer the questions. I've been doing worse in the class and had chalked it up to it being grad level, but now I think the cheating is skewing the curve.

I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I had done it on my own first and developed some intuition, but would I have learned more if I had submitted it as-is and felt the pain of losing points?
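(For context, the kind of binary-packet breakdown described above can also be self-checked mechanically rather than by asking a model. A minimal sketch using Python's `struct` module, assuming a hypothetical UDP-style 8-byte header — not the actual assignment's format:)

```python
import struct

# Hypothetical 8-byte packet layout (assumed for illustration, not the
# actual homework): 2 bytes source port, 2 bytes dest port, 2 bytes
# length, 2 bytes checksum, all big-endian (network byte order).
packet = bytes.fromhex("1f900035000cbeef")

# "!" = network byte order, "H" = unsigned 16-bit field.
src, dst, length, checksum = struct.unpack("!HHHH", packet)
print(src, dst, length, hex(checksum))  # 8080 53 12 0xbeef
```

Unpacking your hand-decoded values this way and comparing field by field gives the same error-catching feedback without outsourcing the reasoning.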

replies(8): >>43634589 #>>43634731 #>>43635023 #>>43635194 #>>43635237 #>>43642699 #>>43644974 #>>43646990 #
1. bionhoward ◴[] No.43644974[source]
IMHO yes, you're "losing neurons," and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You're paying them to have conversations with a chatbot that holds stricter copyright over its output than you do over yours. That means you're agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain-rape system, just like OpenAI, Grok, and all the rest; they cannot be trusted.