
358 points andrewstetsenko | 2 comments
hintymad ◴[] No.44362187[source]
Copying from another post. I'm very puzzled why people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

replies(22): >>44362234 #>>44362259 #>>44362323 #>>44362411 #>>44362713 #>>44362779 #>>44362791 #>>44362811 #>>44363426 #>>44363487 #>>44363510 #>>44363707 #>>44363719 #>>44364280 #>>44364282 #>>44364296 #>>44364302 #>>44364456 #>>44365037 #>>44365998 #>>44368818 #>>44371963 #
1. mrbungie ◴[] No.44362811[source]
Actually, you're not missing anything. The thing is, hype cycles are just that, cycles. They come around with a mix of genuine amnesia, convenient amnesia, and junior enthusiasm, because cycles require a society (and/or industry) both able and willing to repeat exploration and decisions, whether they end up in wins or losses. Some people start to get the pattern after a while, but they are seen as cynics. After all, the show must go on: "what if this or the next cycle is the one that leads us to tech nirvana?"

Software engineering for any non-trivial problem means a baseline level of essential complexity that isn't going away, no matter the tool, not even if we someday "code" directly from our minds in some almost-free way via parallel programming thought diffusion. That's because the real bottlenecks are (1) the depth and breadth of the choices to be made; and (2) coordination and social factors, which stem largely but not exclusively from (1).

Sure, accidental complexity can shrink if you design in a way that's aligned with the tools, but even then, the gains are often overhyped. These kinds of "developer accelerators" (IDEs, low-code platforms, etc.) are always oversold in depth and scope, LLMs included.

The promise of the "10x engineer" is always there, but the reality is more mundane. For example, IDEs and LSPs are helpful, but not really transformative, to the point that some people being paid right now don't use them at all, and they still deliver in an "economically justifiable" (by someone) way.

Today it's LLMs. Tomorrow it'll be LISP Machines v2.

replies(1): >>44362840 #
2. pjmlp ◴[] No.44362840[source]
I thought that was Python notebooks. :)