
358 points andrewstetsenko | 2 comments
hintymad No.44362187
Copying from another post. I'm very puzzled as to why people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

austin-cheney No.44364302
> What am I missing here?

A terrifyingly large percentage of people employed to write software cannot write software. Not even a little. These are the people that can be easily replaced.

In my prior line of work I wrote JavaScript for a living. There were people doing amazing, jaw-droppingly astounding things. Those people were almost exclusively hobbyists. At work, most people could do little more than copy/paste, struggling just to put text on screen. Sadly, that is not an exaggeration.

Some people did what they considered advanced engineering on top of colossal frameworks, but the result was just the same: little more than copy/paste and a struggle to put text on screen. Yes, they might have been solving for advanced complexity, but it was almost always completely unnecessary and frequently amounted to code vanity.

Virtually none of those people could write original applications, measure anything, write documentation, or do just about anything else practical.

> So how do we reconcile this?

Alienate your workforce by setting high standards, like a bar exam to become a lawyer. Fire the people who fail to rise to the occasion. Moving forward, employ people who cannot meet those high standards only as juniors or apprentices, so that the next generation of developers has the opportunity to learn the craft without rewarding failure.

1. spwa4 No.44365955
> Alienate your workforce by setting high standards, like a bar exam to become a lawyer ...

This would work if the world were willing to pay for software. So at the very least you'd have to outlaw the ad-based business model, or do what lawyers do: gatekeep things that are absolutely critical for software development (think "the program needs to be approved or it won't execute", that deep) that normal people aren't allowed ... and unable ... to do.

2. austin-cheney No.44366316
From a purely economic perspective it's all the same whether you are paying for products or paying for people, and whether your revenue comes from media or from sales. Those cost- and profit-first concerns are entirely the wrong questions to ask, though, because they limit the available routes of revenue generation.

The only purpose of software is automation. All cost factors should derive from that one source of truth. As a result, the only valid concerns should be:

* Lowering liabilities

* Increasing capabilities

From a business perspective that means not paying money for unintended harms, and simultaneously either taking market share from the competition or inventing new markets. If your people aren't capable of writing software, or your only options are whatever free choices are provided to you, then you are at the mercy of catastrophic opportunity costs that even the smallest players can sprint past.