
159 points | jbredeche | 2 comments
cuttothechase No.45532033
The fact that we now have to write cookbooks about cookbooks kind of masks the reality that something could be genuinely wrong with this entire paradigm.

Why are even experts unsure about what's the right way to do something, or whether it's possible to do at all, for anything non-trivial? Why so much hesitancy, if this is the panacea? If we are so sure, then why not use the AI itself to come up with a proven paradigm?

replies(7): >>45532137 #>>45532153 #>>45532221 #>>45532341 #>>45533296 #>>45534567 #>>45535131 #
nkmnz No.45532221
Radioactivity was discovered before nuclear engineering existed. We had phenomena first and only later the math, tooling, and guardrails. LLMs are in that phase. They are powerful stochastic compressors with weak theory. No stable abstractions yet. Objectives shift, data drifts, evals leak, and context windows make behavior path dependent. That is why experts hedge.

“Cookbooks about cookbooks” are what a field does while it searches for invariants. Until we get reliable primitives and specs, we trade in patterns and anti-patterns. Asking the AI to “prove the paradigm” assumes it can generate guarantees it does not possess. It can explore the design space and surface candidates. It cannot grant correctness without an external oracle.

So treat vibe-engineering like heuristic optimization. Tight loops. Narrow scopes. Strong evals. Log everything. When we find the invariants, the cookbooks shrink and the compilers arrive.
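As a minimal sketch of what "tight loops, strong evals, log everything" might look like in practice (all names here are hypothetical, not from any particular framework): generate candidates, score each one against an external oracle such as a fixed test suite, log every round, and keep only the best-scoring candidate. The model proposes; the oracle disposes.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vibe-eval")


def run_evals(candidate, test_cases):
    """External oracle: fraction of known input/output pairs the candidate gets right."""
    passed = sum(1 for x, expected in test_cases if candidate(x) == expected)
    return passed / len(test_cases)


def tight_loop(generate_candidate, test_cases, rounds=5):
    """Generate -> evaluate -> log -> keep the best. Nothing is trusted without the oracle."""
    best, best_score = None, -1.0
    for i in range(rounds):
        candidate = generate_candidate(i)          # e.g. an LLM-proposed function
        score = run_evals(candidate, test_cases)   # external check, not self-report
        log.info(json.dumps({"round": i, "score": score}))  # log everything
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

In a toy run where the "generator" just proposes `x * i` for each round `i`, the loop converges on the candidate that passes the whole suite:

```python
best, score = tight_loop(
    lambda i: (lambda x, i=i: x * i),  # stand-in for model-generated candidates
    test_cases=[(2, 4), (3, 6)],       # narrow scope: a tiny, fixed oracle
)
# score == 1.0; best is the round-2 candidate (doubling)
```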

replies(1): >>45534341 #
sarchertech No.45534341
We’re in the alchemist phase. If I’m being charitable, the medieval stone mason phase.

One thing worth pointing out is that the pre-engineering phase of building large structures lasted a long time, and building collapses killed a lot of people while we worked out the theory.

Also, it wasn't really the stone masons who worked out the theory, and many of them were resistant to it.

replies(1): >>45536468 #
nkmnz No.45536468
While alchemy was mostly para-religious wishful thinking, stone masonry has a lot in common with what I want to express: it's the tinkering that is accessible to everyone who can lay their hands on the tools. But I still think the nuclear revolution is the better comparison, for a couple of reasons, most importantly the number of very fast feedback loops. While it might have taken years to even build a new idea in stone, and another couple of years to see if it's stable over time, we see multi-layered systems of both fast and slow feedback loops in AI-driven software development: academic science, open source communities, huge companies, startups, customers, established code review and code quality tools and standards (e.g. static analysis), feedback from multiple AI models, activities of regulatory bodies, and so on. The more interactions there are between the elements and subsystems, the better a system becomes at the trial-and-error-style tinkering that leads to stable results. In this regard, we're way ahead of the nuclear revolution, let alone stone masonry.
replies(1): >>45537691 #
sarchertech No.45537691
The inherently chaotic nature of these systems makes stable results very difficult. Combine that with the non-deterministic nature of all the major production models. Then add the fact that new models are coming out every few months, and that we have no objective metrics for measuring software quality.

Oh, and benchmarks for functional performance measurement tend to leak into training data.

Put all those together and I'd bet half of my retirement accounts that we're still in the reading-chicken-entrails phase 20 years from now.