
358 points andrewstetsenko
hintymad ◴[] No.44362187[source]
Copying from another post. I’m very puzzled about why people don’t talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

mynti ◴[] No.44363426[source]
i think the difference is that now someone with no coding knowledge can start describing software and have the agent build it iteratively. for example, a mechanical engineer wants to build some simulation tool. you still need to define the requirements and understand what you want to do, but the work could be done by the agent instead of a human programmer (and this is still the big if: whether agents become good enough for this sort of work). i do not see that happening at the moment, but it does change the dynamic. you are right that it is not a silver bullet and a lot of the complexity is impossible to get rid of. but i wonder if, for a lot of use cases, there will not be a software engineer in the loop. for bigger systems, for sure, but for a lot of smaller business software?
ivan_gammel ◴[] No.44363473[source]
> for a lot of smaller business software?

Small businesses often understand their domain less, not more, because they cannot invest as much as big businesses in building expertise. They may achieve something within that limited understanding, but the outcome will limit their growth. Of course, AI can help with discovery, but it may also overcomplicate things. Product discovery is the art of figuring out what to do without doing too much or too little, and AI has not mastered that yet.