Most active commenters
  • boznz(3)
  • CRConrad(3)

159 points todsacerdoti | 51 comments
1. boznz ◴[] No.40712390[source]
Humans simply cannot keep the whole stack for a complex system in memory; that is why we abstract layers with APIs etc. and generally specialize on one layer only.

My (Sci-Fi) book postulated that an AGI (a real AGI, after all it was Sci-Fi) would simply discard everything the humans wrote and rewrite the complete stack (including later on the Hardware and ISA) in machine code without anything unnecessary for the task and of course totally unreadable to a human. It is an interesting scenario to ponder.

replies(5): >>40712536 #>>40712588 #>>40712846 #>>40714181 #>>40715499 #
2. karmakaze ◴[] No.40712536[source]
OTOH a lot of the abstractions are created because problems aren't solved coherently but rather partitioned into organizational structures, adding incidental complexity. The worst are <noun>-services that seem to always do too much/too little, or have interface designs that suit themselves rather than the needs of clients/consumers. I spend a fair amount of time decomplexifying the org out of architectures when we realize that performance sucks because reasons.

We can do a lot of that discarding of excess by realizing that there's the data that's read, the rules applied, and the data that's written. Everything else is ephemeral and plumbing. Looking at a process that way, and seeing the amount of complexity/abstractions, one can ask: is all of that really necessary? Does it pay for itself in some way? It has to justify its existence beyond implementing the needed process.
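A minimal sketch of that read → rules → write framing (all names and the example rules are hypothetical): the process is just a fold of rule functions over an input record, and everything outside `apply_rules` is plumbing.

```python
# View a process as: data read -> rules applied -> data written.
def apply_rules(record, rules):
    """Fold each rule over the record; rules are pure record -> record functions."""
    for rule in rules:
        record = rule(record)
    return record

rules = [
    lambda r: {**r, "total": r["qty"] * r["price"]},   # crunch: derive a value
    lambda r: {**r, "total": round(r["total"], 2)},    # crunch: normalize it
]

row_in = {"qty": 3, "price": 9.99}   # the data that's read
row_out = apply_rules(row_in, rules) # the data that's written
```

Anything that doesn't fit into `rules` (transport, serialization, orchestration) is the plumbing whose existence has to be justified.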

I question the value of frameworks sometimes[0].


replies(1): >>40714148 #
3. kashyapc ◴[] No.40712580[source]
A well-written article. On a similar theme of essential vs. accidental complexity: FWIW, I summarised a talk given by Paolo Bonzini (a Linux/KVM/QEMU maintainer) here in the past: "A QEMU case study in grappling with software complexity".

Although it was in the context of QEMU, the lessons from it can be applied to many other projects.

replies(1): >>40715745 #
4. layer8 ◴[] No.40712588[source]
While AIs are able to keep more in their “mind” at the same time than humans, there is still a cost to consider (e.g. token limit). If software requires more effort to change (adding a feature, fixing a bug) due to spaghetti architecture, then that will also add to the cost.

Secondly, we may want to keep software on a complexity level understandable by humans, in order to not become completely dependent on the software engineering AIs.

Thirdly, the same effect that we observe with software written by humans getting too complex to understand by any human is likely to also occur with AIs of any given capacity. AIs will have to take care, just like humans have to, that the complexity doesn’t grow to exceed their abilities to maintain the software. The old adage “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” probably also applies to AIs.

The essential complexity of software doesn’t depend on whether a human or an AI writes it. AIs may be able to handle more accidental complexity, but it’s unclear if that is an actual benefit under the cost arguments mentioned above. So maybe it will only be useful to create and maintain software with more essential complexity than humans can handle. The question is if we want to depend on such software.

replies(1): >>40719984 #
5. metadat ◴[] No.40712846[source]
I checked out your book link < >, is there a preview chapter or two somewhere? That's cool you wrote a compelling sci-fi but my list is competitive :)
replies(1): >>40714587 #
6. jt2190 ◴[] No.40712917[source]
I feel like this article over-complicates things by using a strange definition of “essential complexity”: that if a user says that something is necessary, then that is essential complexity. Personally I never assume that the user has distilled a problem down to its essence. Regardless, my process looks a lot like what the author recommends: question assumptions, propose alternatives, eliminate work, etc.
replies(2): >>40712978 #>>40715561 #
7. lantry ◴[] No.40712978[source]
This is addressed in the second half of the article

> Strictly following Moseley and Marks’s definition, the fact that we can get the user (or the customer, or the product owner) to accept a change of requirements, implies that the removed complexity wasn’t essential in the first place.

replies(1): >>40713425 #
8. jt2190 ◴[] No.40713425{3}[source]
Let me editorialize that:

> Strictly following Moseley and Marks’s different definition because it’s strange and I can easily poke holes in it…

Again, I’m not sure why it was necessary to “strictly follow” that bizarre definition of essential complexity, one that seems to define anything a user says as “essential”.

9. userbinator ◴[] No.40713802[source]
Left to their own devices, software engineers would act as the philosophical razor, removing the complexity of the world; automating employees —the engineers themselves included— out of a job; simplifying systems, along with the organizations that own them, out of existence.

A lot of people I've worked with appear to have realised that and taken it to the other extreme --- for them, complexity is what keeps them employed, so they find other excuses for justifying its (and their) existence and continue taking the abstractions beyond 11. Some communities/languages like Enterprise Java and .NET are an example of that; and more recently, JS. The old Stroustrup C++ satire also comes to mind.

replies(1): >>40717455 #
10. smallstepforman ◴[] No.40713897[source]
Complexity builds rockets with thrust vectoring and lands modules on the moon, simplicity is good for fire crackers.

Once you scale past simple prototypes, you need performance and new features. And the architecture stops being simple and complexity eventually creeps in.

I’ve implemented 4 iterations of a product from scratch, and eventually they all get complex, even though each one started out with the goal of being simpler than the previous iteration. Yes, iteration #4 is more complex than #1, but it is more performant.

In parallel I’m building a new house, and each iteration of the plans is more complex. You try to manage compromises. You take 2 steps forward, one back. Which way do windows face, can an older person navigate, is there enough storage space, cost, esthetics, where does a dirty dog enter, where is the chimney for preppers, driveway and orchard, septics and wells, drainage and water collection, guest rooms and hot tubs, all on a budget … Simple won't do.

replies(5): >>40714186 #>>40714675 #>>40714742 #>>40716914 #>>40716960 #
11. w10-1 ◴[] No.40713913[source]
Complexity is just one of the many kinds of requirements. And yes, requirements (despite their name) can be somewhat fungible. And if you keep going, you'll think of yourself as a forcing function for humanizing humanity.

For me the best perspective is evolution: what system will it be best for those handling my consequences to have? And the more concrete, the better.

So, not essential complexity, but essential.

12. 082349872349872 ◴[] No.40714148{3}[source]
> there's the data that's read, the rules applied, and the data that's written

and each rule applied can do one of three basic things:

I like to collapse layers that are doing the "same kind" of thing according to the taxonomy above.

replies(1): >>40718025 #
13. philipswood ◴[] No.40714181[source]
I wish we would refactor our stack!

Each level throws away a lot of the effort of the preceding layer.

E.g. after heroic engineering for the OS and memory management systems to present the illusory abstraction of a flat memory address space to the runtime, it almost immediately chops it up into smaller independent "packets" that don't interact directly.

Or the whole storage <-> memory <-> slow cache <-> fast cache <-> register divide.

I think we should be building processors that are made of tens of millions of very simple processors, each with their own memory and with dedicated "network on chip"-like communication electronics. These processors would directly map to language structures like objects/structs/functions.

Essentially flattening the stack so that there is just one level between language and silicon.

14. danybittel ◴[] No.40714186[source]
I'd say these are essential complexities; they are features necessary for the client. Accidental complexity would be if, for example, you assigned the work on your new house to different "teams". Then the "guest room" team also builds a drainage, or uses a prebuilt drainage, not connected to the drainage the other team built.

To quote a quote from the article: "In my experience most of the complexities which are encountered in systems work are symptoms of organizational malfunctions."

15. rramadass ◴[] No.40714523[source]
Good Article on a very important topic.

Generally, Problem/Systems/Requirements Analysis should be done Top-Down but Solution Design/Implementation should be done Bottom-Up. This approach is key to managing Essential Complexity.

The Analysis starts with the "system-as-a-whole" and uses the scientific reductionist approach to figure out the constituent sub-modules and their interactions. This is applied recursively to sub-modules until we choose to stop at a desired level. This identifies the essential complexity at the module level (e.g. the need for DSP processing algorithms on certain media RTP/RTCP data flows). But in the process of breaking down the system we will also have identified the essential complexity in the interactions between modules (e.g. the SIP/etc. protocol module responsible for setting up the above media data flows) and cross-cutting all modules (e.g. performance, logging, exception handling). So while constructing the solution we can now take care of all the identified essential complexities by spreading our functional modules over a layering of policy and mechanism modules. This distribution of Function, Policy and Mechanism is what constitutes the Architecture of the System.
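A minimal sketch of the policy/mechanism split described above (all names hypothetical, the "wire" is just a list standing in for real I/O): the mechanism layer knows *how* to do one thing, and the policy layer decides *what and when* without touching transmission details.

```python
def transmit(frame: bytes, wire: list) -> bool:
    """Mechanism: push one frame onto the 'wire'. Knows nothing about retries
    or ordering; a real version would do a socket write here."""
    wire.append(frame)
    return True

def reliable_send(frames, wire, max_attempts: int = 3) -> int:
    """Policy: every frame must go out; retry each one up to max_attempts.
    Expressed entirely in terms of the mechanism, so either layer can be
    swapped independently."""
    sent = 0
    for frame in frames:
        for _ in range(max_attempts):
            if transmit(frame, wire):
                sent += 1
                break
    return sent

wire = []
n = reliable_send([b"a", b"b"], wire)
```

The functional modules then sit on top of `reliable_send`, never calling `transmit` directly; that layering is what keeps the cross-cutting concerns (retries, logging, performance) out of the business logic.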

16. boznz ◴[] No.40714587{3}[source]
I doubt the first two or three chapters will do it justice. I have put a free to download link on the bottom of that page for the complete book as an ePub, I will keep the link valid for a few days. Enjoy.
replies(2): >>40717193 #>>40721496 #
17. langsoul-com ◴[] No.40714675[source]
You want to start with simple, because simple will become complex over time. Starting with complicated means it'd only become more complicated.

Simple is also relative though. No point preparing for Google scale, when there's less than 10 users a month. But, if the objective is something like discord, real time comms, then it'd be simpler to start with the correct language and framework for that use case.

18. tristramb ◴[] No.40714742[source]
The vast complexity of the 21st Century has still not managed to land a single person on the Moon.
replies(2): >>40715146 #>>40718173 #
19. CRConrad ◴[] No.40714974[source]
20. TeMPOraL ◴[] No.40715146{3}[source]
That's not on technology, that's on people holding the purse strings only caring about growing their purses. Capitalism grew up, and is a boring old fart now.
replies(1): >>40716879 #
21. octo-andrero ◴[] No.40715494[source]
Good engineers design according to requirements, great engineers challenge them.
22. silon42 ◴[] No.40715499[source]
> Humans simply cannot keep the whole stack for a complex system in memory

But IMO this is necessary to fully debug it, unless you can prevent all leaky abstractions.

23. podgorniy ◴[] No.40715561[source]
Essential complexity is what the program must model based on user needs. Modelling is done for the end user. That's why the author mentioned "user says". It's not "the user understands how to describe what they need" but rather "we model what the user needs". Users will often ask for impossible or contradicting features and can't understand that without our help. I agree that the average user will need a lot of help and work to define what they need/want.

Practical example of my abstract description.

User does not want a cdn-cached, region-distributed, lambda-backed resizing and validating image service, usage of the latest image compression formats with fallback for older browsers, with performance dashboards for loading speed. User wants to see their profile photo on the profile page. Showing that photo is essential complexity, as it comes from the end user's needs.

24. JonChesterfield ◴[] No.40715745[source]
Thanks for this. Lessons from reality hit harder than from theory. I'm amused to learn that the distinction between essential and accidental was first described by Aristotle given that millennia later people are still struggling with the notion.
replies(1): >>40723013 #
25. andrelaszlo ◴[] No.40715784[source]
I find it interesting that we often think of "the product" and "the organization" as two separate things.

A common counter-example is when we start seeing (some) bugs and incidents as symptoms of organizational issues. This seems to get more visible with experience (less experience: "I screwed up" vs more experience: "this happened in a certain context, how can we prevent it from happening again?").

Conway's law is a brilliant observation but I think it's a much wider pattern: we build organizations and software at the same time, and they're inextricably linked.

In parallel to software design patterns, we have a bunch of organizational patterns and anti-patterns as well. SRE, agile methods, DevOps. Silos, CYA-ism, 10x developers, ...

Some examples of properties of an organization that directly impacts the properties of software:

- How well the organization embraces risk [0]

- Psychological safety[1]

- Hiring practices and diversity

- The ability of the individuals, teams, departments, etc in an organization to align and work towards the same goal

I'd love to hear some anecdotes of how code and products are directly impacted by these.

0: Risk-driven architecture seems like one attempt at making this explicit


26. worstspotgain ◴[] No.40715809[source]
There's a lot to like in this article, but I would go a step further with its meta-abstractions. At complexity 0, there is no software to be written. In essence, complexity and the software itself are closely related, akin to the relationship between compression and AI. The article's thesis is that minimizing complexity is what engineering is about. That means we're kind of minimizing the total amount of software we have to write and maintain. Makes sense, so far so good.

Next up is architecture. More architecture generally means more complexity in the short term and less in the long term. But that's assuming we know what the long term will look like, which we can often only predict. If we under-provision architecture, there's a good chance we'll get buried in technical debt later on (though by then we may have switched gigs.) Enter the usual Agile arguments favoring the short term.

Last but not least, path-dependency. I think this is where the best designers shine. You might not need all the architecture right away, but you do need to steer clear of dead-ends. These are the bad quick-design decisions that bite you in the rear when the architecture eventually gets refactored in. An extreme example might be the lack of a version field.

IMO, awesome designers are great at predicting where the architecture will be and at forward-provisioning it. They don't need to build the freeway through the middle of town. They'll just plan a park here and a parking lot there, guessing the right spots.

replies(4): >>40717057 #>>40717983 #>>40718103 #>>40758696 #
27. rob74 ◴[] No.40715981[source]
> Now, I want to challenge the notion that essential complexity is irreducible.

> The complexity is accidental, so we can remove it.

Yeah, never mind essential complexity, let's start by removing all the accidental complexity in today's standard web development stack (my estimate: 90% accidental, 10% essential)! Easy, right?

28. wcrossbow ◴[] No.40716816[source]
It may be obvious, but I think it is worth saying out loud: whether something ends up classified as essential or accidental complexity is often only easy to tell in hindsight. Recently I finally realized, after working for almost 3 years on a problem, that it could all be framed as a graph problem. This simple change of frame dissolved hundreds of lines of code into `reduce(find_shortest_path(graph, start, end), some_op)`-style single lines. Now I understood that what I was previously doing was manually crafting paths through this informally defined graph, across nodes I knew had connections. The ratio of comments to code was about 5:1 to make sense of it. Before that I believed all of it was true essential complexity, but looking through the right glasses it really wasn't. The lessons for me (some of which I seem to have to constantly relearn) were:

1. It may only look complex because you are not thinking about it in the right way.

2. The right data structure/abstraction has the power to turn the worst spaghetti into a shining beacon of simplicity.

An important corollary is that if it looks complicated, it's probably because you don't understand it yet. Tautological yes, true, also :D

Not trusting myself on what is essential complexity and insisting on hitting my head against the wall until I found a solution has served me well many times in my life and career.
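A toy illustration of the reframing this comment describes (the graph, the unit-conversion domain, and all names are hypothetical, echoing the `reduce(find_shortest_path(...))` shape above): instead of hand-coding every multi-step chain, declare the edges once and let a generic shortest-path search plus a fold do the rest.

```python
from collections import deque
from functools import reduce

def find_shortest_path(graph, start, end):
    """BFS over an adjacency dict; returns the list of nodes on a shortest path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    raise ValueError(f"no path from {start} to {end}")

# Edges that were previously hard-coded as explicit "mm -> cm -> m -> km" chains:
graph = {"mm": ["cm"], "cm": ["m"], "m": ["km"]}
factor = {("mm", "cm"): 0.1, ("cm", "m"): 0.01, ("m", "km"): 0.001}

path = find_shortest_path(graph, "mm", "km")
# Fold the per-edge rule over consecutive path nodes -- the one-liner that
# replaces the manually crafted path code.
scale = reduce(lambda acc, edge: acc * factor[edge], zip(path, path[1:]), 1.0)
```

Each formerly hand-written traversal collapses into one `find_shortest_path` call plus one `reduce`; the "essential-looking" complexity lived in the missing abstraction, not in the problem.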

29. CRConrad ◴[] No.40716879{4}[source]
Capitalism supplanted its father, Mercantilism, in the eighteenth or at the very latest the nineteenth century. It was "an old fart" well before WW2.
replies(1): >>40717232 #
30. Anotheroneagain ◴[] No.40716914[source]
Complexity is anything that makes it hard to understand and modify a system.

I disagree with this. It's overwhelmingly more difficult to design something simple, or modify it so that it stays simple. Coming up with complex solutions is easy, the cost is the time and resources it takes to deal with that monster. Somebody has to read all that, walk through all that, and machine all that, and assemble all that.

Much of progress comes from figuring out simple solutions to problems, which frees up time and resources for other things. It must, in the end, take less effort to make and operate the machine than doing it by hand, no matter how much it may seem to the contrary, because otherwise it wouldn't be worth it.

31. loldot ◴[] No.40716960[source]
There's a lot of people that like complexity because it makes them feel like they are doing rocket science, but in fact they are replacing a spreadsheet. Most problems on earth do not have the same complexities as those in space. Sometimes the space solution is really nice and simple though, like velcro or using a pencil instead of a fancy pen.
replies(1): >>40717038 #
32. djeastm ◴[] No.40717038{3}[source]
If the problem you're solving isn't sufficiently complex, there's probably already a solution out there and you're re-inventing the wheel.
replies(2): >>40717321 #>>40718868 #
33. djeastm ◴[] No.40717057[source]
>An extreme example might be the lack of a version field.

Nothing a "IF version is NULL THEN version = 1" can't fix ;)
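In the same spirit as the quip above, a hedged sketch (names hypothetical) of how the retroactive fix usually looks in practice: a tolerant reader that treats records written before a `version` field existed as version 1.

```python
import json

def load_record(raw: bytes) -> dict:
    """Parse a serialized record; payloads that predate the 'version' field
    are treated as version 1 -- the IF-NULL fix, made explicit."""
    record = json.loads(raw)
    record.setdefault("version", 1)
    return record

old = load_record(b'{"name": "widget"}')                 # pre-versioning payload
new = load_record(b'{"name": "widget", "version": 2}')   # versioned payload
```

Workable as a patch, though it only helps if "no field" reliably means "version 1" — which is exactly the bet the missing-version-field dead end forces you into.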

34. CRConrad ◴[] No.40717193{4}[source]
From your page:

> I have to buy the sequel, then another, then wait, or in some cases wait forever and hope the author, or I, don’t die before finishing it!

So, how old are you -- got burned by A Song of Ice and Fire, or by The Wheel of Time? :-)

Oh yeah, and: Thanks for the download!

replies(1): >>40721750 #
35. TeMPOraL ◴[] No.40717232{5}[source]
I liked the post-WW2 midlife crisis - the threat of nuclear annihilation was bad, of course, but beyond that, people had ambitious and hopeful visions of the future.
36. loldot ◴[] No.40717321{4}[source]
yes, and most people and businesses are "reinventing a wheel" - not sending people to the moon. Some ways of improving on current solutions to a problem are, e.g., making it simpler or cheaper, and as mentioned, a complex problem can also have a simple solution.
37. lioeters ◴[] No.40717455[source]
> Stroustrup C++ satire


Stroustrup: Well, one day, when I was sitting in my office, I thought of this little scheme, which would redress the balance a little. I thought 'I wonder what would happen, if there were a language so complicated, so difficult to learn, that nobody would ever be able to swamp the market with programmers? Actually, I got some of the ideas from X10, you know, X windows. That was such a bitch of a graphics system, that it only just ran on those Sun 3/60 things.. They had all the ingredients for what I wanted. A really ridiculously complex syntax, obscure functions, and pseudo-OO structure. Even now, nobody writes raw X-windows code. Motif is the only way to go if you want to retain your sanity.

Interviewer: You're kidding?

Stroustrup: Not a bit of it. In fact, there was another problem.. Unix was written in 'C', which meant that any 'C' programmer could very easily become a systems programmer. Remember what a mainframe systems programmer used to earn?

Interviewer: You bet I do, that's what I used to do.

Stroustrup: OK, so this new language had to divorce itself from Unix, by hiding all the system calls that bound the two together so nicely. This would enable guys who only knew about DOS to earn a decent living too.

Interviewer: I don't believe you said that ...

Stroustrup: Well, it's been long enough, now, and I believe most people have figured out for themselves that C++ is a waste of time but, I must say, it's taken them a lot longer than I thought it would.

Interviewer: So how exactly did you do it?

Stroustrup: It was only supposed to be a joke, I never thought people would take the book seriously. Anyone with half a brain can see that object-oriented programming is counter-intuitive, illogical and inefficient.


Interviewer: Yes, but C++ is basically a sound language.

Stroustrup: You really believe that, don't you? Have you ever sat down and worked on a C++ project? Here's what happens: First, I've put in enough pitfalls to make sure that only the most trivial projects will work first time. Take operator overloading. At the end of the project, almost every module has it, usually, because guys feel they really should do it, as it was in their training course. The same operator then means something totally different in every module. Try pulling that lot together, when you have a hundred or so modules. And as for data hiding. God, I sometimes can't help laughing when I hear about the problems companies have making their modules talk to each other. I think the word 'synergistic' was specially invented to twist the knife in a project manager's ribs.

Interviewer: I have to say, I'm beginning to be quite appalled at all this. You say you did it to raise programmers' salaries? That's obscene.

Stroustrup: Not really. Everyone has a choice. I didn't expect the thing to get so much out of hand. Anyway, I basically succeeded. C++ is dying off now, but programmers still get high salaries — especially those poor devils who have to maintain all this crap. You do realise, it's impossible to maintain a large C++ software module if you didn't actually write it?

38. xyzzy_plugh ◴[] No.40717983[source]
I like this, but I have trouble with relating architecture to complexity. The "generally" qualifier serves as an apt escape hatch but permit me to ignore that for a moment.

Architecture emerges as the holistic shape of the design, the pattern of patterns, the bones of the system. It's possible to have architecture that reduces complexity by reducing scope and forcing apt compartmentalization into reusable, composable bits. One could perhaps even argue that GoF's Design Patterns attempts to commoditize the language to achieve a greater degree of simplicity... but let's not.

I have trouble distinguishing architecture from design here. Imagine a kitchen with many cabinets, all alike. Some cupboards open by pulling a handle, some open when you push the handle, others swing up, or you must unscrew the handle. This is horrible and unintuitive, and surprising. Good architecture like good design eschews surprises (as in shock) and provides intuition by weaving consistency through the system. You build an expectation that things behave a certain way, and they do.

I would posit that good architecture, like good design, does not introduce complexity, rather the complexity is present regardless, and they delete layers of uncertainty and facilitate the necessary clarity to bring the overall objective into focus.

Perhaps there is a complexity cost introduced by adhering to a design or pattern or architecture, but if it's not outweighed by the credit it provides, offsetting overall complexity, then it's bad.

Your observation about path-dependency is on the money most certainly.

replies(1): >>40723272 #
39. karmakaze ◴[] No.40718025{4}[source]
In my description the 'rules' are strictly "crunch code"--both "glue" and "parsley code" would be considered plumbing, with the latter being across systems.
40. facundo_olano ◴[] No.40718103[source]
I really like this metaphor of provisioning architecture and staying clear of dead ends. I'd say dead ends come not just from bad quick design decisions but also from wrong predictions (in the scenario you describe where it's hard to tell what the long term looks like, which is most of the time).

> awesome designers are great at predicting where the architecture will be and at forward-provisioning it.

And this matches the No Silver Bullet conclusion.

41. rikthevik ◴[] No.40718173{3}[source]
> has still not managed to land a single person on the Moon.

How about, "has still not _decided_ to land a single person on the Moon."

42. lucianbr ◴[] No.40718868{4}[source]
You're arguing that every single company should be a monopoly, and nobody should ever try to compete with an existing one. Never improve anything either, unless you can do 10 times better, and only if the 10x is very complicated. Makes zero sense.
43. flakiness ◴[] No.40719909[source]
Dan Luu wrote a piece called "Against essential and accidental complexity".

Basically what he says is that essential-looking complexity can be solved by technical advances.

And I like his take more than the ones that cite this quarter-century-old piece as if it were a bible, even if there is truth in it.

44. ◴[] No.40719984{3}[source]
45. metadat ◴[] No.40721496{4}[source]
Wow that's generous and kind of you, thanks!
46. boznz ◴[] No.40721750{5}[source]
Those are the ones most people who started reading fantasy in the last century remember. I started reading in 1979 and there are quite a few other good unfinished series out there.

There are also quite a few modern ones which start awesome, then fizzle out into never-ending soap operas once the author realizes they are on the gravy train and those are even more disappointing IMHO.

Books need endings :)

47. kashyapc ◴[] No.40723013{3}[source]
Yeah, I didn't know either that it was Aristotle until I started researching for the article! :-)
48. worstspotgain ◴[] No.40723272{3}[source]
Agreed, and well put. I should have emphasized the role of uncertainty in the architecture/complexity relationship.

In your example, you have a minimum of 4 cabinets. If you start out with N>=4, architecture and complexity are inversely related (to a point of course.) They are sometimes directly related when 1<N<4, and most often are when N=1.

We're predicting the future path of N. Adding a subsystem and a formal internal API when N=1 is being bullish on N, trading the present for the future and expecting a good rate of return.

49. thomastjeffery ◴[] No.40739428[source]
Programming answers two questions: "what?" and "how?".

The essential complexity is the inherent incompatibility of any given answer. Every answer must be written in a specific implementation. Once you have written your answer, you have cemented it into its environment. We can't answer answers. Technically we can, but it tends to be a huge undertaking.

Each answer to "what?" and "how?" has a special property: it is context-free. We have to choose a specific "context-free grammar" to write it in, but the answer itself can be fully expressed there. That means that every implementation that answers the same question must be somehow equivalent. That equivalence is, unfortunately, lost at time of writing.


We need to take a step back, and recognize the ultimate question: "why?".

The very questions "what?" and "how?" belong to the answer to "why?". If we could just write the reason why, we could compile that answer into a complete collection of compatible whats and hows.

That's the trickiest part of all, because the answer to "why?" is context-dependent. We can't write the answer to "why?" in any programming language, because that category of language cannot express context-dependence. That means we can't write a parser for it, let alone compile.

Solve natural language processing, and we solve incompatibility.

50. coldtea ◴[] No.40758696[source]
>More architecture generally means more complexity in the short term and less in the long term.

I'm not convinced of the latter. More architecture = more complexity in both short and long term, everything else being equal.

That is, if you can meet the required feature set with LESS architecture, it will be simpler both short and long term, compared to more architecture (which, since it's not essential for meeting the feature set, it would be adding abstractions and YAGNI features).