
358 points andrewstetsenko | 2 comments
hintymad ◴[] No.44362187[source]
Copying from another post. I’m very puzzled as to why people don’t talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

replies(22): >>44362234 #>>44362259 #>>44362323 #>>44362411 #>>44362713 #>>44362779 #>>44362791 #>>44362811 #>>44363426 #>>44363487 #>>44363510 #>>44363707 #>>44363719 #>>44364280 #>>44364282 #>>44364296 #>>44364302 #>>44364456 #>>44365037 #>>44365998 #>>44368818 #>>44371963 #
crvdgc ◴[] No.44363487[source]
I think the crux is that specification has been neglected since even before AI.

Stakeholders (clients, managers) have been "vibe coding" all along: they send a vague description, and someone magically gives back a solution. Does it completely work? No one knows. It kinda works, but no one can say for sure.

Most of the time, it's actually the programmers' understanding of the domain that fills in the details (we all know what a correct form submission webpage looks like).
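
To make this concrete, here's a minimal sketch (TypeScript, with hypothetical field names and validation rules I picked myself) of the decisions a programmer makes unprompted when someone asks for "a form": which fields are required, what counts as a valid email, what the user sees on failure. None of it is usually written down in the spec:

    interface Submission {
      email: string;
      message: string;
    }

    function validate(input: Submission): string[] {
      const errors: string[] = [];
      if (input.email.trim() === "") {
        errors.push("Email is required.");    // implicit: email is mandatory
      } else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
        errors.push("Email looks invalid.");  // implicit: this regex defines "valid"
      }
      if (input.message.trim().length < 10) {
        errors.push("Message is too short."); // implicit: 10 chars is the minimum
      }
      return errors;
    }

    console.log(validate({ email: "not-an-email", message: "hi" }));
    // -> [ 'Email looks invalid.', 'Message is too short.' ]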

Now that the other end has become AI, it remains to be seen whether this can be replicated.

replies(4): >>44363550 #>>44363569 #>>44364898 #>>44367765 #
bdangubic ◴[] No.44363569[source]
> we all know what a correct form submission webpage looks like

millions of forms around the web would like to have a word… :)

replies(1): >>44366614 #
ivandenysov ◴[] No.44366614[source]
We all know, but we each have a different vision of a ‘correct’ form.
replies(1): >>44377198 #
bdangubic ◴[] No.44377198[source]
With all due respect, what does this mean? Either we "all know," or we all don't have a clue and each have our own "vision" of the correct form; the two are complete opposites.

Decades of building garbage, barely-working forms are proof that we just do not know, much like we (generally) don't know how to center a div on a page, which is why, once a year without fail, the top story on HN is "how to center a div" :)
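
(for the record, one of the many competing answers is flexbox; here's a sketch applied via the DOM, assuming a browser context and a hypothetical element id "box":)

    const box = document.getElementById("box");
    if (box && box.parentElement) {
      const parent = box.parentElement;
      parent.style.display = "flex";
      parent.style.justifyContent = "center"; // horizontal centering
      parent.style.alignItems = "center";     // vertical centering
      parent.style.minHeight = "100vh";       // parent needs height to center within
    }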

replies(1): >>44389738 #
ivandenysov ◴[] No.44389738[source]
We all think we know