
358 points andrewstetsenko | 1 comments | | HN request time: 0.205s | source
hintymad ◴[] No.44362187[source]
Copying from another post. I’m very puzzled on why people don’t talk more about essential complexity of specifying systems any more:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

1. hamstergene ◴[] No.44363707[source]
An easy answer for what's missing is that the industry isn't run by people who read "No Silver Bullet".

- Articles about the tricky nature of tech debt aren't written by the people who call the shots on whether the team can spend the entire next week on something a customer can't see.

- Articles about systems architecture aren't written by the people who decide how significant each piece of work was for the business.

- Books on methodologies are an optional read for engineers, not essential to their jobs, and adoption happens only when engineers push them upwards.

Most of the buzz about AI replacing coding comes from people who don't see a difference between generating a working MVP of an app and evolving that app's codebase for a decade, fixing ancient poor design choices in a spaceship that is already in flight.

I've even seen a manager propose allocating 33% of each day across 3 projects, and engineers had to push back. The old, common knowledge that this doesn't work is apparently still not a job requirement in 2025. Even though organizing and structuring a project's time allocation is a management competency, not an engineering skill, it is de facto entirely up to engineers to make sure it's done right. The same managers are now proud to demonstrate their "customer focus" by proposing to ask AI to resolve all the tech debt and write all the missing tests so that engineers can focus on business requests, and the same engineers then have to figure out how to explain why it didn't just work when they tried.

To talk about complexity is to repeat the same old mistake. I am sure most engineers already know this, and I have yet to see an experienced engineer who believes their job will be taken by simple prompts. The problem we should be talking about deserves a title more like,

"Software Engineering Has Poor Management Problem, and AI is Amplifying It"