    559 points Gricha | 13 comments
    1. kderbyma ◴[] No.46213171[source]
    Yeah. I noticed Claude suffers when it reaches context overload - it's too opinionated, so it shortens its own context with decisions I would never make, yet I see it telling itself that the shortcuts are a good idea because the project is complex... Then it gets into a loop where it second-guesses its own decisions, forgets the context, and spirals uncontrollably into deeper and deeper failures - often missing the obvious glitch and instead looking in imaginary land for answers, constantly diverting the solution from patching to completely rewriting...

    I think it suffers from performance anxiety...

    ----

    The only solution I have found is to rewrite the prompt from scratch, change the context myself, clear any "history or memories", and then try again.

    I have even gone so far as to open nested folders in separate windows to "lock in" scope better.

    As soon as I see the agent say "Wait, that doesn't make sense, let me review the code again", it's cooked.

    replies(6): >>46232798 #>>46232866 #>>46232939 #>>46232955 #>>46233047 #>>46233145 #
    2. SV_BubbleTime ◴[] No.46232798[source]
    I’m keeping Claude’s tasks small and focused, and if I can, I clear the context between them.

    It’s REAL FUCKING TEMPTING to say "hey Claude, go do this thing that would take me hours and you seconds", because he will happily oblige, and it’ll kinda work. But one way or another you are going to put those hours in.

    It’s like programming… is proof of work.

    replies(1): >>46232862 #
    3. thevillagechief ◴[] No.46232862[source]
    Yes, this is exactly true. You will put in those hours.
    replies(1): >>46233300 #
    4. someguyiguess ◴[] No.46232866[source]
    There’s definitely a certain point I reach when using Claude Code where I have to make the specifications so specific that it becomes more work than just writing the code myself.
    5. embedding-shape ◴[] No.46232939[source]
    > Yeah. I noticed Claude suffers when it reaches context overload

    All LLMs degrade in quality as soon as you go beyond one user message and one assistant response. If you're looking for accuracy and the highest possible quality, you need to constantly redo the conversation from scratch and never go beyond one user message.

    If the LLM gets it wrong in its first response, instead of saying "No, what I meant was...", you need to edit your original message and re-generate; otherwise the conversation becomes "poisoned" almost immediately, and every token generated after that will suffer.
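
    A minimal sketch of that difference, assuming a generic chat-completion-style message list; complete() here is a hypothetical placeholder, not any particular client's API:

        def complete(messages):
            # Hypothetical stand-in for a chat-completion API call; a real
            # client would send `messages` to the model and return its reply.
            return "<model reply>"

        prompt = {"role": "user", "content": "Write a CSV parser in Python"}
        first_try = complete([prompt])

        # Option A: append a correction. The wrong answer stays in the context,
        # and every later token is generated with that "poisoned" history in view.
        corrected = complete([
            prompt,
            {"role": "assistant", "content": first_try},
            {"role": "user", "content": "No, what I meant was a streaming parser"},
        ])

        # Option B: edit the original message and regenerate from scratch, so the
        # model only ever sees one user message before it answers.
        retry = complete([
            {"role": "user", "content": "Write a streaming CSV parser in Python"},
        ])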

    replies(1): >>46233875 #
    6. flowerthoughts ◴[] No.46232955[source]
    There's no -c on the command line, so I'm guessing this is starting fresh every iteration, unless claude(1) has changed the default lately.
    7. snarf21 ◴[] No.46233047[source]
    That has been my greatest stumbling block with these AI agents: context. I was trying to have one help vibe code a puzzle game, and most of the time when I added a new rule it broke five existing rules. It also never approached the rules engine with an eye toward building a reusable abstraction - just Hammer, meet Nail.
    8. rtp4me ◴[] No.46233145[source]
    For me, too many compactions throughout the day eventually lead to a decline in Claude's thinking ability. And during that time, I have given it so much context to help drive the coding interaction. Thus, restarting Claude requires me to remember the small "nuggets" we discovered during the last session, so I find myself repeating the same things every day (my server IP is: xxx, my client IP is: yyy, the code should live in directory: a/b/c). Using the resume feature with Claude simply brings back the same decline in thinking that led me to stop it in the first place. I am sure there is a better way to remember these nuggets between sessions, but I have not found it yet.
    replies(1): >>46233244 #
    9. mingus88 ◴[] No.46233244[source]
    Shouldn't you put those things you keep repeating into CLAUDE.md?
    replies(1): >>46233287 #
    10. rtp4me ◴[] No.46233287{3}[source]
    Perhaps, but I already have a CLAUDE.md file for the general coding session. Unique items I stumble upon each day probably should go into another file that can be dynamically updated. Maybe I should create a /slash command for this?

    Edit: Shortly after posting this, I asked Claude the same type of question (namely, how to persist pieces of data between coding sessions). I just learned about Claude's Memory System - the ability to store these pieces of data between coding sessions. I learn something new every day!
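
    Purely as an illustration (the file name and layout are whatever works for you, and the values below are just the placeholders from the earlier comment), those per-project "nuggets" could live as a short section in the project's CLAUDE.md, which Claude Code loads into context at the start of each session:

        ## Session nuggets (persist between coding sessions)
        - Server IP: xxx
        - Client IP: yyy
        - New code lives under: a/b/c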

    11. whatshisface ◴[] No.46233300{3}[source]
    In this vein, one of the biggest time-savers has turned out to be its ability to make me realize I don't want to do something.
    replies(1): >>46235640 #
    12. torginus ◴[] No.46233875[source]
    Yeah, I used to write some fiction for myself with LLMs as a recreational pastime; it's funny to see how, as the story gets longer, LLMs progressively either get dumber, start repeating themselves, or become unhinged.
    13. SV_BubbleTime ◴[] No.46235640{4}[source]
    I get that. But I think the AI-deriders are a bit nuts sometimes, because while I’m not running around crying about AGI… it’s really damn nice to change a function's arguments and have it go through the codebase and adjust every invocation of that function to work properly. Something that might take me 10-30 minutes now takes seconds, and it’s not outside of its reliability spectrum.

    Vibe coding though, super deceptive!