358 points andrewstetsenko | 11 comments
hintymad ◴[] No.44362187[source]
Copying from another post. I'm very puzzled as to why people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

replies(22): >>44362234 #>>44362259 #>>44362323 #>>44362411 #>>44362713 #>>44362779 #>>44362791 #>>44362811 #>>44363426 #>>44363487 #>>44363510 #>>44363707 #>>44363719 #>>44364280 #>>44364282 #>>44364296 #>>44364302 #>>44364456 #>>44365037 #>>44365998 #>>44368818 #>>44371963 #
1. rr808 ◴[] No.44362791[source]
You're missing the part where building a modern website is a huge amount of dev time for largely UI work. Also, modern deployment is 100x more complicated than in Brooks's day. I'd say 90% of my project time goes into these two parts, which really shows how productivity has gone down (and which AI can fix).
replies(4): >>44362913 #>>44362981 #>>44363152 #>>44363255 #
2. monkeyelite ◴[] No.44362913[source]
This is mostly self-inflicted, though. We create complex deployments on the promise that the incremental savings will overtake the upfront costs, when they rarely do (and rarely cover the hidden complexity costs).

So it seems AI will just let us stretch further and make more accidentally complex systems.

replies(1): >>44362989 #
3. skydhash ◴[] No.44362981[source]
Modern development is more complex, not more complicated. We’re still using the same categories of tools. What’s changed is the tower of abstraction we put between ourselves and the problem.
4. rezonant ◴[] No.44362989[source]
The value of automation ("complex deployments") is not only incremental cost savings (i.e., not having to do the work over and over), but also the reduction or outright elimination of human error, which, especially in security-sensitive activities like deploying software on the Internet, can be orders of magnitude more costly than the time it takes to automate it.
replies(1): >>44363035 #
5. monkeyelite ◴[] No.44363035{3}[source]
That is a benefit of automation. But it does not appear to correlate with tool complexity, or to be the primary focus of commercial offerings.

E.g., the most complex deployments are not the ones that are least error prone or require the least intervention.

replies(1): >>44363223 #
6. dehrmann ◴[] No.44363152[source]
Back when IE was king and IE6 was still 10% of users, I did frontend web work. I remember sitting next to our designer with multiple browsers open, playing with pixel offsets to get the design as close as practically possible to the mockups for most users and good enough for everyone else. This isn't something LLMs do without a model of what looks good.
replies(1): >>44363243 #
7. rezonant ◴[] No.44363223{4}[source]
What do you consider a complex deployment?
replies(1): >>44374081 #
8. pjerem ◴[] No.44363243[source]
My current job involves exactly this (thankfully not on IE) and AI is, as you said, absolutely bad at it.

And I’m saying this as someone who kind of adopted AI pretty early for code and who learned how to prompt it.

The best way to make AI worth your time is to make it work towards a predictable output. TDD is really good for this: you write your test cases and you make the AI do the work.
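
A minimal sketch of that loop, assuming pytest; slugify() here is a hypothetical example function, not anything from this thread:

    import re

    def slugify(text: str) -> str:
        # The implementation the AI is asked to produce once the tests exist.
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)

    # Tests written first, by hand: they pin down the predictable output
    # the AI has to work towards.
    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Ready, set, go!") == "ready-set-go"

    def test_collapses_whitespace():
        assert slugify("  too   many   spaces ") == "too-many-spaces"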

But when you want a visual result? It will have no feedback of any kind and will always answer "Ok, I solved this" while making things worse. Even if the model is visual, giving it screenshots as feedback is useless too.

9. jayd16 ◴[] No.44363255[source]
Can AI fix it? Most of that complexity is from a need to stand out.
replies(1): >>44366047 #
10. spwa4 ◴[] No.44366047[source]
... and approvals. The fact that the vast majority of companies just don't have infrastructure. The only thing that made a dent in that is VMware.
11. monkeyelite ◴[] No.44374081{5}[source]
1. The degree to which it can prevent me from doing my job if it's not working.

2. The level of expertise and skill required to set it up and maintain it.