
123 points by mooreds | 2 comments
lelanthran No.45212622
This works until you get to the point that your actual programming skills atrophy due to lack of use.

Face it: the only reason you can do a decent review is years of hard-won lessons, not years of reading code without writing any.

sevensor No.45213636
What the article describes is:

1. Learn how to describe what you want in an unambiguous dialect of natural language.

2. Submit it to a program that takes a long time to transform that input into a computer language.

3. Review the output for errors.

Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.

ako No.45214191
No, it sounds like the work of a product manager: you’re just working with agents rather than with developers.
sarchertech No.45214841
Product managers never get that right, though. In practice it always falls to the developer to understand the problem and fill in the missing pieces.

In many cases it falls on the developer to talk the PM out of the bad idea and then into a better solution. Agents aren’t equipped to do any of that.

For any non-trivial problem, give the same PM and the same problem to two different dev teams and you’ll get drastically different solutions 99 times out of 100.

ako No.45215293
Agree with the last bit: dev teams are even more non-deterministic than LLMs.
sarchertech No.45217824
Dev teams are much less non-deterministic than LLMs. If you ask the same dev team to build the same product multiple times, they’ll eventually converge on producing the same product.

The 2nd time it will likely be pretty different because they’ll use what they learned to build it better. The 3rd time will be better still, but each time after that it will essentially be the same product.

An LLM will never converge. It definitely won’t learn from each subsequent iteration.

Human devs are also a lot more resilient to slight changes in requirements and wording. A slight change in language that wouldn’t impact a human at all will cause an LLM to produce completely different output.

ako No.45218772
An LLM within the right context/environment can also converge: just like with humans, you need to provide guidelines, rules, and protocols that spell out how to implement something. I’ve used the approach you describe: generate something until it works the way you want, then ask the model to document the insights, patterns, and rules, and for the next project instruct it to follow the rules you persisted. That will result in more or less the same project.
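As a rough sketch of that "persist the rules, reuse them next time" loop (not anything from the comment itself: the OpenAI Python SDK, the model name, and the project_rules.md path are all assumptions), it could look something like this:

    # Hypothetical sketch: reuse rules the model wrote down after an earlier project.
    # "project_rules.md" and the model name are placeholders, not details from the thread.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    rules = Path("project_rules.md").read_text()  # insights/patterns persisted last time

    def generate(task: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Follow these project rules and patterns exactly:\n" + rules},
                {"role": "user", "content": task},
            ],
        )
        return resp.choices[0].message.content

    print(generate("Build the same service as the previous project, per the rules."))

The point of the system prompt is just to keep the persisted rules in front of the model on every run, which is what pushes successive projects toward the same shape.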

Humans are very non-deterministic: if you ask me to solve a problem today, the solution will be different from the one I’d have given last week, last year, or 10 years ago. We’ve learnt to deal with that, and we can also control the non-determinism of LLMs.
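Concretely (assuming an API that exposes these knobs, e.g. OpenAI’s chat completions endpoint), the usual way to rein in that non-determinism is greedy decoding plus a fixed seed, which reduces variance rather than guaranteeing identical output:

    # Hypothetical sketch: pin down sampling so repeated runs stay close.
    # temperature=0 removes sampling randomness; seed is only best-effort
    # reproducibility in the OpenAI API, not a hard guarantee.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",   # placeholder model name
        temperature=0,    # greedy decoding
        seed=42,          # best-effort reproducibility
        messages=[{"role": "user", "content": "Summarise the spec as five bullet points."}],
    )
    print(resp.choices[0].message.content)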

And humans are also very prone to hallucinations: remember the 3,000+ gods we’ve created to explain the world, or the many religions that are mutually incompatible? Even if some are true, most of them must be hallucinations simply because they’re incompatible with the others.

sarchertech No.45223348
That only works for very small projects, where the specification document is a very large percentage of the total code.

If you are very experienced, you won’t solve the problem differently day to day. You probably would with a 10-year gap, but you’ll never still be running today’s model 10 years out (even if the technology matures), so there’s no point in that comparison. Solving the same problem with the same constraints in radically different ways day to day comes from inexperience (unless you’re exploring and doing it on purpose).

Calling what LLMs do hallucinations and comparing it to human mythology is stretching the analogy into absurdity.