
116 points by benterix | 1 comment
sinkol No.45165687
It's always humbling to re-read Brooks. His central thesis—that the real difficulty is the "essential complexity" of fashioning conceptual structures, not the "accidental" complexity of our tools—has held up for decades. As many in this thread have noted, it feels more relevant than ever.

Brooks masterfully identified the core of the problem, including the "invisibility" of software, which deprives the mind of powerful geometric and spatial reasoning tools. For years, the industry's response has been better human processes and better tools for managing that inherent complexity.

The emerging "agentic" paradigm might offer the first fundamentally new approach to tackling the essence itself. It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.

The idea is to shift the role of the senior developer from a "master builder," who must hold the entire invisible, complex structure in their mind, to a *"Chief Architect" of an autonomous agent crew.*

In this model, the human architect defines the high-level system logic and the desired outcomes. The immense cognitive load of managing the intricate, interlocking details—the very "essential complexity" Brooks identified—is then delegated to a team of specialized AI agents. Each agent is responsible for its own small, manageable piece of the conceptual structure. The architect's job becomes one of orchestration and high-level design, not line-by-line implementation.
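To make the shape of this concrete, here's a rough sketch of what that orchestration layer might look like. It's purely illustrative: Agent, ChiefArchitect, and the stubbed execute() are made-up names standing in for whatever agent runtime you'd actually use, not any real framework.

    # Purely illustrative sketch of the "Chief Architect" model above.
    # Agent and ChiefArchitect are hypothetical names, not a real framework.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        """A specialized agent owning one small piece of the conceptual structure."""
        name: str
        charter: str  # the narrow responsibility delegated to this agent

        def execute(self, task: str) -> str:
            # Stand-in for a real LLM call; returns a stub result.
            return f"[{self.name}] done: {task} (charter: {self.charter})"

    class ChiefArchitect:
        """The human-defined layer: high-level outcomes in, delegation out."""
        def __init__(self, agents: list[Agent]):
            self.agents = {a.name: a for a in agents}

        def orchestrate(self, plan: dict[str, str]) -> list[str]:
            # The architect specifies *what* each agent should achieve;
            # the *how* stays inside each agent.
            return [self.agents[name].execute(task) for name, task in plan.items()]

    crew = ChiefArchitect([
        Agent("schema-agent", "database schema and migrations"),
        Agent("api-agent", "HTTP endpoints and validation"),
    ])
    for result in crew.orchestrate({
        "schema-agent": "add a users table with unique emails",
        "api-agent": "expose POST /users with input validation",
    }):
        print(result)

The point isn't the code; it's that the architect's artifact is the plan, a declaration of outcomes per agent, while the line-by-line implementation stays inside each agent's charter.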

It's not that the complexity disappears—it remains essential. But the human's relationship to it changes fundamentally. It might be the most significant shift in our ability to manage essential complexity since the very ideas Brooks himself proposed, like incremental development. It's a fascinating thing to consider.

ModernMech No.45168818
> It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.

I think you're right, but people are treating it like the silver bullet. They're saying "actually the AI will just eliminate all the accidental complexity by being the entire software stack, from programming language to runtime environment."

So we use the LLM to write Python, and hope that one day it will also eliminate all the compilers and codegen sitting between the language and the metal. That's silver bullet thinking.

What LLMs are doing is managing some accidental complexity, but they're adding more. "Prompt engineering" and "context engineering" are accidental complexity. The special config files LLMs use are accidental complexity. The peculiarities of how the LLM sometimes hallucinates, can't answer basic questions, and behaves differently based on the time of day or how long you've been using it are accidental complexity. And what's worse, it's stochastic complexity, so even if you get your head around it, it's still not predictable.
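To make that concrete, here's a toy sketch of the bureaucracy that grows around a stochastic model call. llm_complete is a fake stand-in, not any real API; the scaffolding around it is the point.

    # Toy sketch of LLM scaffolding as accidental complexity.
    # llm_complete is a hypothetical stand-in, not any real API.
    import json
    import random

    def llm_complete(prompt: str) -> str:
        # Fake model call: occasionally returns malformed output,
        # mimicking hallucination and nondeterminism.
        if random.random() < 0.3:
            return "Sure! Here is the JSON you asked for: {broken"
        return json.dumps({"status": "ok", "answer": 42})

    def ask_with_retries(prompt: str, max_retries: int = 3) -> dict:
        # Prompt template, output validation, retry policy: none of this
        # is the problem we set out to solve. It's all accidental.
        template = f"Respond ONLY with valid JSON.\n\n{prompt}"
        for _ in range(max_retries):
            raw = llm_complete(template)
            try:
                return json.loads(raw)  # validate the stochastic output
            except json.JSONDecodeError:
                continue  # malformed output: burn another attempt
        raise RuntimeError(f"no valid output after {max_retries} attempts")

    print(ask_with_retries("What is six times seven?"))

None of that retry-and-validate machinery is the problem we set out to solve; it exists only because the tool is unpredictable. That's accidental complexity in exactly Brooks's sense.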

So LLMs are not a silver bullet. Maybe they offer a new way of approaching the problem, but it's not clear to me that we arrive at a new status quo with LLMs that doesn't also carry more accidental complexity. It's like we took out the spec sheet and added a bureaucracy. That's not any better.