
395 points pseudolus | 12 comments
    1. moojacob ◴[] No.43634527[source]
    How can I, as a student, avoid hindering my learning with language models?

    I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.

In one of my machine learning classes, cheating is a huge issue. People are using LLMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out: students would close their laptops without submitting, go out into the hallway, and use an LLM on their phone to answer the questions. I've been doing worse in the class and chalked it up to it being grad level, but now I think it's the cheating.

I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own first and developed some intuition, but would I have learned more if I had submitted that and felt the pain of losing points?
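For an exercise like the binary-packet breakdown mentioned above, one way to check your own work before asking an LLM is to decode the packet programmatically and compare against your hand-worked answer. A minimal sketch in Python; the UDP-style header layout here is an illustrative assumption, not the actual homework:

```python
import struct

# Hypothetical 8-byte header layout (assumed for illustration):
#   2 bytes  source port       (unsigned, big-endian)
#   2 bytes  destination port  (unsigned, big-endian)
#   2 bytes  length            (unsigned, big-endian)
#   2 bytes  checksum          (unsigned, big-endian)
def parse_header(packet: bytes) -> dict:
    # "!" = network (big-endian) byte order, "H" = unsigned 16-bit field
    src, dst, length, checksum = struct.unpack("!HHHH", packet[:8])
    return {"src": src, "dst": dst, "length": length, "checksum": checksum}

# Decode a sample packet and compare each field to your manual breakdown
header = parse_header(bytes.fromhex("1f900050001c8a2b"))
# header["src"] is 0x1f90 = 8080, header["dst"] is 0x0050 = 80, etc.
```

The point is the same self-check loop, just without outsourcing the reasoning: you still do the bit-level breakdown by hand, and the script only confirms it.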

    replies(8): >>43634589 #>>43634731 #>>43635023 #>>43635194 #>>43635237 #>>43642699 #>>43644974 #>>43646990 #
    2. lunarboy ◴[] No.43634589[source]
This sounds fine? Copy-pasting LLM output you don't understand is a short-term dopamine hit that only hurts you long term. But if you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning... why not use it?

    Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.

    replies(1): >>43641884 #
    3. azemetre ◴[] No.43634731[source]
    Can you do all this without relying on any LLM usage? If so then you’re fine.
    4. knowaveragejoe ◴[] No.43635023[source]
    It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.

    "productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.

    5. dwaltrip ◴[] No.43635194[source]
    Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.

    I’d also have sessions / days where I don’t use AI at all.

    Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.

    replies(1): >>43650567 #
    6. quantumHazer ◴[] No.43635237[source]
    As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses. For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
    7. namaria ◴[] No.43641884[source]
    > can ultimately understand the underlying reasoning

This is at the root of the Dunning-Kruger effect. When you read an explanation you feel like you understand it. But it's an illusion, because you never developed the underlying cognition; you just saw the end result.

Learning is not about arriving at the result, or knowing the answers. Those are byproducts of the process of learning. If you just shortcut to the end byproducts, you get the appearance of learning. And you might be able to game the system and come out with a diploma. But you didn't actually develop cognitive skills at all.

    8. noisy_boy ◴[] No.43642699[source]
I don't think the pain of losing points is a good learning incentive: powerful, sure, but not effective.

You would learn more if you told Claude not to give outright answers, but instead to generate more problems in the areas where you are weak for you to solve. The reduction in errors as you go along will be the positive reinforcement that works long term.
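The tutoring setup described above can be encoded as a system prompt. A minimal sketch; the prompt wording and the request shape (a generic chat-style payload, not any specific vendor's API) are illustrative assumptions:

```python
# Assumed system prompt implementing "no outright answers, generate
# practice problems instead" -- wording is an illustrative guess.
TUTOR_SYSTEM_PROMPT = (
    "You are a tutor. Never give the final answer to the student's "
    "question. Instead, identify the concept they seem weak on and "
    "generate three similar practice problems, easiest first. Only "
    "confirm or correct the student's own attempted solutions."
)

def build_tutor_request(model: str, student_question: str) -> dict:
    """Assemble a generic chat-completion-style request payload."""
    return {
        "model": model,
        "system": TUTOR_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": student_question}],
        "max_tokens": 1024,
    }

req = build_tutor_request("some-model", "Why does my gradient explode?")
```

Putting the constraint in the system prompt, rather than repeating it per question, makes it harder to casually slip back into asking for answers mid-session.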

    replies(1): >>43644776 #
    9. neves ◴[] No.43644776[source]
I don't know. I remember my failures much better than my successes. For some errors I made on important tests, I'll remember the correct answer for life.
    10. bionhoward ◴[] No.43644974[source]
    IMHO yes you’re “losing neurons” and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You’re paying them to have conversations with a chatbot which has stricter copyright than you do. That means you’re agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain rape system, just like OpenAI, Grok, and all the rest, they cannot be trusted
    11. istjohn ◴[] No.43646990[source]
I believe conversation is one of the best ways to really learn a topic, so long as it is used deliberately.

    My folk theory of education is that there is a sequence you need to complete to truly master a topic.

Step 1: You start with receptive learning, where you take in information provided to you by a teacher, book, AI, or other resource. This doesn't have to be totally passive. For example, it could take the form of Socratic questioning to guide you towards an understanding.

    Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.

Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizzes, and tests are key.

    This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).

    Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.

I think AI can be extremely helpful in all three stages of learning, particularly steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aide gets around to grading your HW if you can get feedback right now with AI.

    The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.

    Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.

    Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.

    12. rglynn ◴[] No.43650567[source]
    I definitely catch myself reaching for the LLM because thinking is too much effort. It's quite a scary moment for someone who prides themself on their ability to think.