
600 points by antirez | 1 comment
airstrike No.44625676
I think all conversations about coding with LLMs, vibe coding, etc. need to note the domain and choice of programming language.

IMHO those two variables are 10x (maybe 100x) more explanatory than any vibe coding setup one can concoct.

Anyone who is befuddled by how the other person {loves, hates} using LLMs to code should ask what kind of problem they are working on, and then try to tackle the same problem with AI to get a better sense of their perspective.

Until then, every one of these threads will have dozens of messages saying variations of "you're just not using it right" and "I tried and it sucks", which at this point are just noise, not signal.

replies(2): >>44625871 #>>44626129 #
cratermoon No.44625871
They should also share their prompts and discuss exactly how much effort went into checking the output and re-prompting to get the desired result. The post hints at how much work this takes for the human: "If you are able to describe problems in a clear way and, if you are able to accept the back and forth needed in order to work with LLMs ... you need to provide extensive information to the LLM: papers, big parts of the target code base ... And a brain dump of all your understanding of what should be done. Such braindump must contain especially the following:" and more.

After all the effort of getting to the point where the generated code is acceptable, one has to wonder: why not just write it yourself? The time spent typing is trivial compared to all the cognitive effort involved in describing the problem, and describing the problem in a rigorous way is the essence of programming.

replies(6): >>44626802 #>>44626827 #>>44626857 #>>44627229 #>>44630616 #>>44634750 #
keeda No.44630616
> After all the effort getting to the point where the generated code is acceptable, one has to wonder, why not just write it yourself?

Because it is still way, way, way faster and easier. You're absolutely right that the hard part is figuring out the solution. But the time spent typing is in no way trivial or cognitively simple, especially for more complex tasks. A single prompt can easily generate 5-10x as much code in a few seconds, with the added bonus that it:

a) figures out almost all the intermediate data structures, classes, algorithms and database queries;

b) takes care of all the boilerplate and documentation;

c) frequently accounts for edge cases I hadn't considered, saving unquantifiable amounts of future debugging time;

d) and can include tests if I simply ask it to.

In fact, these days once I have the solution figured out, I find it frustrating that I can't get the design in my head into code fast enough manually. It is very satisfying to have the AI churn out reams of code and immediately run it (or the tests) to see the expected result. Of course, I review the diff closely before committing, but I do that for any code anyway, even my own.

replies(2): >>44630990 #>>44648279 #
gf000 No.44630990
> frequently accounts for edge cases I hadn't considered, saving unquantifiable amounts of future debugging time;

And it introduces new ones you wouldn't even have considered before, creating just as much, if not more, future debugging :D

replies(4): >>44631155 #>>44632282 #>>44632507 #>>44640703 #
sothatsit No.44631155
I have actually been surprised at how few subtle bugs like this come up when using tools like Claude Code. Usually the bugs it introduces are glaringly obvious, stemming from a misunderstanding of the prompt rather than from poorly thought-out code.

This has been a surprise to me, as I expected review of AI-generated code to be much more difficult than it has proven in practice. Maybe that is because I only use LLMs to write code that is easy to explain, and therefore probably not that complicated. If the code is more complicated, I write it myself.