
469 points samuelstros | 1 comment
diego_sandoval ◴[] No.44998676[source]
It shocks me when people say that LLMs don't make them more productive, because my experience has been the complete opposite, especially with Claude Code.

Either I'm worse than them at programming, to the point that I find an LLM useful and they don't, or they don't know how to use LLMs for coding.

replies(17): >>44998684 #>>44998692 #>>44998700 #>>44998702 #>>44998714 #>>44998723 #>>44998787 #>>44998796 #>>44998808 #>>44998811 #>>44998815 #>>44998918 #>>44998938 #>>44999026 #>>44999031 #>>45000201 #>>45001208 #
timr ◴[] No.44998723[source]
It depends very much on your use case, language popularity, experience coding, and the size of your project. If you work on a large, legacy code base in COBOL, it's going to be much harder than working on a toy greenfield application in React. And the less prior coding experience you have, the more amazing the results will seem, and vice versa.

Despite the persistent memes here and elsewhere, it doesn't depend very much on the particular tool you use (with the exception of model choice), how you hold it, or your prompting experience (beyond a bare minimum of competence). People who jump into these threads with "use tool X" or "you just don't understand how to prompt" are the noise floor of every conversation about AI-assisted coding. Folks might as well be talking about Santeria.

Even for projects that I initiate with LLM support, I find that the usefulness of the tool declines quickly as the codebase increases in size. The iron law of the context window rules everything.
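
A quick back-of-the-envelope illustrates the law. A minimal sketch, assuming the tiktoken tokenizer and an illustrative 200k-token window (neither number tied to any specific model):

    import tiktoken
    from pathlib import Path

    CONTEXT_WINDOW = 200_000  # assumed budget; varies by model
    enc = tiktoken.get_encoding("cl100k_base")

    # Rough token count across every Python file in the repo.
    total = sum(
        len(enc.encode(p.read_text(errors="ignore")))
        for p in Path(".").rglob("*.py")
    )
    print(f"{total:,} tokens; fits in one window: {total <= CONTEXT_WINDOW}")

Even a mid-sized repo blows past the budget, so the tool only ever sees a slice of the code you actually care about.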

Edit: one thing I'll add, which (perhaps stupidly) I only recently realized, is that there is a population of people willing to prompt expensive LLMs dozens of times to get a single working output. That approach seems to me roughly equivalent to pulling the lever on a slot machine, or blindly copy-pasting from Stack Overflow, and it is not what I am talking about. I am talking about the tradeoffs of using LLMs as an assistant for human-guided programming.

replies(1): >>44998854 #
ivan_gammel ◴[] No.44998854[source]
Overall I would agree with you, but I'm starting to feel that this "iron law" isn't as simple as that. After all, humans have a limited "context window" too — we don't remember every small detail of a large project we've been working on for several years. Loose coupling and modularity help us, and can help an LLM keep the size of the task manageable, provided you don't ask it to rebuild the whole thing. It's not the size that makes LLMs fail, but something else, probably the same things that make us fail.
replies(1): >>44998874 #
timr ◴[] No.44998874{3}[source]
Humans have a limited short-term memory. Humans do not literally forget everything they've ever learned after each Q&A cycle.

(Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)
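
That joke is closer to reality than it sounds: when the transcript nears the window limit, these tools really do replace older turns with a summary and press on. A hand-wavy sketch, where count_tokens() and summarize() stand in for a tokenizer and another LLM call, and the budget is an assumed number:

    def compact(history, count_tokens, summarize, budget=100_000):
        """Once the transcript exceeds the budget, keep recent turns
        verbatim and collapse everything older into a summary."""
        if sum(count_tokens(t) for t in history) <= budget:
            return history
        recent = history[-10:]  # arbitrary cutoff, for illustration only
        summary = summarize(history[:-10])
        return [f"Summary of earlier conversation: {summary}", *recent]

Anything that didn't survive the summary is gone for good, which is the "forgets everything" problem in a nutshell.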

replies(3): >>44998932 #>>44999342 #>>45002630 #
BeetleB ◴[] No.44999342{4}[source]
Both true and irrelevant.

I've yet to see the "forgets everything" behavior be a limiting factor. In fact, when using Aider, I aggressively ensure it forgets everything several times per session.

To me, it's a feature, not a drawback.

I've certainly had coworkers I've had to tell, "Look, will you forget about X? That use case, while it looks similar, is actually quite different in assumptions, etc. Stop invoking your experiences there!"
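
In Aider that's the /clear command, which wipes the chat history while keeping the files you've added. The same idea in raw API terms is just starting each task from an empty message list instead of an ever-growing one. A sketch, assuming the openai package and an illustrative model name:

    from openai import OpenAI

    client = OpenAI()
    SYSTEM = {"role": "system", "content": "You are a careful coding assistant."}

    def ask_fresh(task: str) -> str:
        """Each task starts with a clean slate: no history from earlier tasks."""
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative; substitute whatever you use
            messages=[SYSTEM, {"role": "user", "content": task}],
        )
        return resp.choices[0].message.content

The forgetting is the point: nothing from the last task can leak into this one.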