hintymad:
Copying from another post. I'm puzzled that people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.
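To make that concrete, here's a toy sketch (the rule, the numbers, and the function name are all invented for this comment, not taken from anywhere). Even a one-line requirement like "charge a late fee on overdue invoices" hides essential decisions that no code generator can answer on your behalf:

    from datetime import date, timedelta
    from decimal import Decimal, ROUND_HALF_UP

    def late_fee(amount_due: Decimal, due_date: date, today: date) -> Decimal:
        # Essential decision: is there a grace period? (assumed here: 5 days)
        if today <= due_date + timedelta(days=5):
            return Decimal("0.00")
        # Essential decision: flat fee or percentage? per day or per month?
        # (assumed: 1.5% per month, a partial month counts as a full month)
        months_late = max(1, (today - due_date).days // 30)
        fee = amount_due * Decimal("0.015") * months_late
        # Essential decision: rounding rule? (assumed: half-up to cents)
        return fee.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

Every "assumed" comment is a question a human has to answer; the typing itself is the cheap part.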

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.
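As a loose illustration of that analogy (my own toy example, with the "assembly" side written in Python so both halves are in one language): the essential spec, "sum the prices", is identical in both versions; only the incidental bookkeeping shrinks.

    # Register-machine style: manual accumulator, index, and bounds check,
    # i.e. the incidental complexity assembly forces on you.
    def total_lowlevel(prices):
        acc = 0
        i = 0
        n = len(prices)
        while i < n:
            acc = acc + prices[i]
            i = i + 1
        return acc

    # High-level version: the same essential computation, machinery gone.
    def total_highlevel(prices):
        return sum(prices)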

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

ehnto:
The split between essential and incidental complexity is a key insight for thinking about how far AI can be pushed into software development. I think it's the thing many developers feel intuitively but can't quite articulate about why they won't be replaced just yet.

It's certainly how using AI in earnest feels. I have been doing my best to get agents like Claude to work through problems in a complex codebase shaped by enormous amounts of outside business logic. Their inability to truly intuit the business requirements, and the deep context behind them, means they cannot make business-related code changes. But they can help with small-context code changes, i.e. the incidental complexity unrelated to the core role of a good developer, which is translating real-world requirements into a system.
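A hypothetical pair of tasks shows the contrast (both functions and their rules are invented for this comment):

    # Small-context change an agent handles well: everything needed to
    # judge correctness is visible inside the function itself.
    def normalize_sku(sku: str) -> str:
        return sku.strip().upper().replace(" ", "-")

    # Business-logic change an agent cannot safely make: the correct
    # behavior lives outside the code, in contracts, regulation, or the
    # heads of domain experts.
    def eligible_for_refund(order) -> bool:
        # Does "delivered" mean carrier scan or customer confirmation?
        # Do B2B orders get the same 30-day window as consumer orders?
        # None of that is in the repo; someone has to specify it.
        raise NotImplementedError("essential complexity lives here")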

But I will add: it shouldn't be underestimated how many of us are actually solving the distribution problem, not technical problems. I still wouldn't feel confident replacing a junior with AI; the core issue is the lack of self-correction. But people will certainly try, and businesses built around AI development will be real and will undercut established ones. Whether that's a net good or bad probably won't matter to those who lose their jobs in the scuffle.