
116 points | benterix | 6 comments
1. sinkol No.45165687
It's always humbling to re-read Brooks. His central thesis—that the real difficulty is the "essential complexity" of fashioning conceptual structures, not the "accidental" complexity of our tools—has held up for decades. As many in this thread have noted, it feels more relevant than ever.

Brooks masterfully identified the core of the problem, including the "invisibility" of software, which deprives the mind of powerful geometric and spatial reasoning tools. For years, the industry's response has been better human processes and better tools for managing that inherent complexity.

The emerging "agentic" paradigm might offer the first fundamentally new approach to tackling the essence itself. It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.

The idea is to shift the role of the senior developer from a "master builder," who must hold the entire invisible, complex structure in their mind, to a *"Chief Architect" of an autonomous agent crew.*

In this model, the human architect defines the high-level system logic and the desired outcomes. The immense cognitive load of managing the intricate, interlocking details—the very "essential complexity" Brooks identified—is then delegated to a team of specialized AI agents. Each agent is responsible for its own small, manageable piece of the conceptual structure. The architect's job becomes one of orchestration and high-level design, not line-by-line implementation.
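
To make the shape of this concrete, here is a rough Python sketch of the pattern. Everything in it is hypothetical -- the Agent/ChiefArchitect names and the call_llm stub stand in for whatever model API and orchestration framework you'd actually use:

    # Hypothetical sketch only, not a real framework.
    from dataclasses import dataclass

    def call_llm(system_prompt: str, task: str) -> str:
        raise NotImplementedError("plug in a real model call here")

    @dataclass
    class Agent:
        role: str            # e.g. "schema design", "test writing"
        system_prompt: str   # scopes the agent to its small piece

        def run(self, task: str) -> str:
            return call_llm(self.system_prompt, task)

    class ChiefArchitect:
        # The human owns the decomposition; agents own the details.
        def __init__(self, agents):
            self.agents = agents  # maps role -> Agent

        def execute(self, plan):
            # plan: (role, subtask) pairs chosen by the human architect
            return [self.agents[role].run(task) for role, task in plan]

The interesting part is where the human effort sits: in choosing the decomposition (the plan), not in the bodies of the subtasks.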

It's not that the complexity disappears—it remains essential. But the human's relationship to it changes fundamentally. It might be the most significant shift in our ability to manage essential complexity since the very ideas Brooks himself proposed, like incremental development. It's a fascinating thing to consider.

replies(4): >>45165768 >>45168818 >>45169008 >>45169021
2. nickdothutton No.45165768
Plutonium remains dangerous, although it is more easily handled these days with a robot claw controlled by a remote operator.
replies(1): >>45166667
3. sinkol No.45166667
That's a perfect analogy. The essential complexity (the plutonium) doesn't go away, but our ability to manipulate it from a safer, more strategic distance (the robot claw) is what's changing.

Well put.

4. ModernMech No.45168818
> It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.

I think you're right but people are treating it like the silver bullet. They're saying "actually the AI will just eliminate all the accidental complexity by being the entire software stack, from programming language to runtime environment."

So we use the LLM to write Python, and hope that one day it will also just eliminate all the compilers and codegen sitting between the language and the metal. That's silver bullet thinking.

What LLMs are doing is managing some accidental complexity, but they're adding more. "Prompt engineering" and "context engineering" are accidental complexity. The special config files LLMs use are accidental complexity. The peculiarities of how the LLM sometimes hallucinates, can't answer basic questions, and behaves differently based on the time of day or how long you've been using it are accidental complexity. And what's worse, it's stochastic complexity, so even if you get your head around it, it's still not predictable.

So LLMs are not a silver bullet. Maybe they offer a new way of approaching the problem, but it's not clear to me we arrive at a new status quo with LLMs that does not also have more accidental complexity. It's like, we took out the spec sheet and added a bureaucracy. That's not any better.

5. 1899-12-30 No.45169008
I fear for the day that bot responses are not identifiable by em dashes and "not x but y" structures.
6. gf000 No.45169021
This is a misunderstanding of what essential complexity is.

If it could be subdivided into small, manageable pieces, then we wouldn't really have a problem as human teams either.

But the thing is, composing functions can lead to significantly higher complexity than the individual pieces have on their own -- and in the same vein, a complex problem may not be nicely subdivisible: there is a fixed, essential complexity to it at a foundational, mathematical level.
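
A toy illustration of that growth (the numbers are mine, purely to show the shape):

    # Components that are simple in isolation: say 10 states each.
    # The joint state space of the composition grows multiplicatively,
    # so the whole is harder to reason about than any piece suggests.
    n = 10  # states per component
    for k in range(1, 6):
        print(k, "components ->", n ** k, "joint states")
    # 1 -> 10, 2 -> 100, ... 5 -> 100000: no individual piece got more
    # complex, but the composed system did.

That multiplicative blow-up is exactly the part that doesn't disappear when you hand each piece to its own agent.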