
Understanding how bureaucracy develops

(dhruvmethi.substack.com)
192 points dhruvmethi | 48 comments
1. sevensor ◴[] No.41889622[source]
When you treat every negative outcome as a system failure, the answer is more systems. This is the cost of a blameless culture. There are places where that’s the right answer, especially where a skilled operator must work in an environment beyond their control and deal with emergent problems in short order: aviation, surgery. Situations where the cost of failure is lower could afford to operate without the cost of bureaucratic compliance, but often they don’t even nudge the slider towards personal responsibility and it stays at “fully blameless.”
replies(13): >>41890119 #>>41890303 #>>41890339 #>>41890571 #>>41891032 #>>41891181 #>>41891213 #>>41891385 #>>41891417 #>>41893574 #>>41894181 #>>41897147 #>>41903458 #
2. hypeatei ◴[] No.41890119[source]
I've never seen it put so succinctly, but this is the issue I have with blameless culture. We can design CI pipelines, linters, whatever else to stop certain issues in our software from being released, but if someone is incompetent, they don't care and will find a way to fuck something up. You can only automate so much.
replies(3): >>41891122 #>>41891907 #>>41894237 #
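To make the point above concrete, here is a hedged sketch (not from any comment in the thread; the rule names and patterns are made up) of the kind of automated gate being described: a trivial pre-merge lint pass that blocks known footguns. It catches exactly the mistakes someone thought to write a rule for, and nothing else.

```python
# Minimal sketch of an automated pre-merge check: it can block a known
# class of mistake, but not every way a change can go wrong.
import re

# Illustrative rules only; a real linter ships hundreds of these.
FORBIDDEN = [
    (re.compile(r"\beval\("), "use of eval()"),
    (re.compile(r"password\s*=\s*['\"]"), "hardcoded password"),
]

def lint(source: str) -> list[str]:
    """Return a list of rule violations found in the source text."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in FORBIDDEN:
            if pattern.search(line):
                problems.append(f"line {lineno}: {message}")
    return problems

ok = lint("x = 1 + 2\n")                                      # no violations
bad = lint("password = 'hunter2'\nresult = eval(user_input)\n")  # two violations
```

Anything outside the rule list, from subtle logic errors to careless design, sails straight through, which is the commenter's point about the limits of automation.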
3. linuxlizard ◴[] No.41890303[source]
>When you treat every negative outcome as a system failure, the answer is more systems.

Holy crap, I'm going to save that quote forever. I have a co-worker who treats every line of bad code committed as a reason to add ever more layers to CI. Yo, we caught it in testing. There's no reason to add another form we have to fill out.

replies(1): >>41898222 #
4. SupremumLimit ◴[] No.41890339[source]
This is a wonderfully insightful comment!

I’ve encountered a similar phenomenon with regard to skill as well: people want to ensure that every part of the software system can be understood and operated by the least skilled members of the team (meaning completely inexperienced people).

But similarly to personal responsibility, it’s worth asking what the costs of that approach are, and why we shouldn’t have baseline expectations of skill, or expect that some parts of the software system require higher levels of expertise.

replies(2): >>41890960 #>>41891632 #
5. poulsbohemian ◴[] No.41890571[source]
But there's also an element where this isn't due to system failure but rather to design. Companies want their processes to be bureaucratic so that you won't cost them money in support and won't cancel your subscription; making the process painful is the point. Likewise in government: it isn't that government can't be efficient, it's that there are people and organizations who want it encumbered so they can prove their political point that government is inept. One side wants to create funding for a program; the other side puts in place a ton of controls to make spending the money challenging, to make sure it isn't wasted - which costs more money, and we get more bureaucracy.
6. jiggawatts ◴[] No.41890960[source]
This is the reason Haskell and F# are relatively unpopular while Go has a much wider footprint in the industry: high expertise levels don’t scale. You can hire 100 juniors, but not 100 seniors all trained up in the same difficult abstractions.

Conversely, one skilled senior can often outperform a hundred juniors using simpler tools, but management just doesn’t see it that way.

replies(2): >>41891423 #>>41893615 #
7. schmidtleonard ◴[] No.41891032[source]
Just one tiny problem: I've played the blame game before. I've worked there. You can't sell me the greener grass on the other side of the road because I've been to the other side of the road and I know the grass there is actually 90% trampled mud and goose shit.

The blame game drives the exact same bureaucratization process, but faster, because all of the most capable and powerful players have a personal incentive to create insulating processes / excuses that prevent them from winding up holding the bag. Everyone in this thread at time of writing is gleefully indulging in wishful thinking about finally being able to hold the team underperformer accountable, but these expectations are unrealistic. Highly productive individuals do not tend to win the blame game because their inclinations are the exact opposite of the winning strategy. The winning strategy is not to be productive, it's to maximize safety margin, which means minimizing responsibility and maximizing barriers to anyone who might ask anything of you. Bureaucracy goes up, not down, and anyone who tries to be productive in this environment gets punished for it.

"Blaming the system" doesn't prevent bureaucracy from accumulating, obviously, but it does prevent it from accumulating in this particular way and for this particular reason.

replies(6): >>41891203 #>>41893243 #>>41893666 #>>41894745 #>>41894956 #>>41905283 #
8. liquidpele ◴[] No.41891122[source]
There’s a 2x2 matrix you can put employees into with one side being smart/idiot and the other being lazy/industrious. There is no greater threat than the industrious idiot.
replies(1): >>41891960 #
9. cyanydeez ◴[] No.41891181[source]
Geez.

Someone has no idea how modern human psychology is the only thing creating any of these super structures and their frailties.

We aren't ever going to be your super ant organism, get over it.

10. tdeck ◴[] No.41891203[source]
This also multiplies with hierarchy. In a blame-based culture, your manager is partly to blame for what you do. Their manager is partly to blame for what your manager does. Therefore everyone in a reporting chain is incentivized through fear to double-check your work. That means more sign-off and review and approval processes so that people can avoid any kind of fuckup, and it often means a toxic environment where everyone spends at least 20% of their brain power worrying about internal optics, which in my experience is not a good thing for people engaged in creative work.
11. jancsika ◴[] No.41891213[source]
> When you treat every negative outcome as a system failure, the answer is more systems.

Eloquently put. Also, likely false.

E.g., soft-realtime audio synthesis applications like Pure Data and Supercollider have had essentially the same audio engines since the 1990s. And users see any missed audio deadline as a negative outcome. More to the point-- for a wide array of audio synthesis/filtering/processing use cases, the devs who maintain these systems consider such negative outcomes as systemic failures which must be fixed by the devs, not the users. I can't think of a more precise example of "blameless culture" than digital artists/musicians who depend on these devs to continue fitting this realtime scheduling peg into the round hole that is the modern (and changing!) multitasking OS.

While there have been some changes over the last 30 years, in no way have any of these applications seen an explosion in the number of systems they employ to avoid negative outcomes. There's one I can think of in Pure Data, and it's optional.

IMO, there's nothing noteworthy about what I wrote-- it's just one domain in probably many across the history of application development. Yet according to your "law" this is exceptional in the history of systems. That doesn't pass the smell test to me, so I think we need to throw out your ostensible law.

replies(1): >>41891859 #
12. gamblor956 ◴[] No.41891385[source]
A negative outcome is a system failure, even if it is a personal failure that drove the outcome, because that is a failure of the system to prevent personal failures from causing negative outcomes.

You can't stop personal failures from happening because people are people. You can design processes to minimize or eliminate those personal failures from yielding negative outcomes.

replies(1): >>41891886 #
13. tom_ ◴[] No.41891417[source]
The past was allowed to play itself out. Why not the present too?
14. SupremumLimit ◴[] No.41891423{3}[source]
Indeed, specialist knowledge is a real constraint, but I think it’s possible to at least _orient_ towards building systems that require no baseline level of skill (the fast food model I guess) or towards training your staff so they acquire the necessary level of skills to work with a less accessible system. I suspect that the second pathway results in higher productivity and achievement in the long term.

However, management tends to align with reducing the baseline level of skill, presumably because it’s convenient for various business reasons to have everyone be a replaceable “resource”, and to have new people quickly become productive without requiring expensive training.

Ironically, this is one of the factors that drives ever faster job hopping, which reinforces the need for replaceable “resources”, and on it goes.

replies(1): >>41891829 #
15. nox101 ◴[] No.41891632[source]
I'm not sure I understand this position. What I hear is "obscure, hard to understand code is good", but as others have said, code will be maintained and modified for years to come, and not by the original author, so making it easy to understand and follow is usually the recommendation. Even the original programmer will find clear code easier to understand months or years later.

Did you mean something else?

replies(2): >>41891794 #>>41894398 #
16. stoperaticless ◴[] No.41891794{3}[source]
Extreme A: every team member is literally five years old (born 5 years ago).

Extreme B: every colleague is required to read and be able to recite the x86 and C specifications and the Postgres manual, and must have an IQ of 190+.

What is obscure or hard to understand is subjective.

17. stoperaticless ◴[] No.41891829{4}[source]
Also, there is no easy way for management to know if somebody has the required level of skill.
replies(1): >>41892471 #
18. stoperaticless ◴[] No.41891859[source]
Different kinds of systems?

> devs who maintain these systems

What is meant by “system” here? Computer application? Hardware?

19. AnimalMuppet ◴[] No.41891886[source]
But too much system can also cause negative outcomes, because all that system has a cost, both in money and in time. If you add a protection to prevent every negative outcome, your system will never produce anything at all, which is a negative outcome.

Every check has a cost. For some checks, the cost is more than it prevents. Don't add those checks, even after the negative outcome happens.

20. stoperaticless ◴[] No.41891907[source]
I guess we should not take blamelessness to the extreme.

Some feedback must exist (calm, objective, and possibly private). Eventually it is up to the manager, or the manager’s manager, to be aware of what is happening and take action if critically needed.

21. wffurr ◴[] No.41891960{3}[source]
There’s a quote from a German general:

“I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.”

replies(1): >>41903739 #
22. fuzzfactor ◴[] No.41892471{5}[source]
Which is why the most important qualification for a manager is to consistently put in way more effort than the average worker, and to be very, very good at doing things that are not the least bit easy.
23. Spivak ◴[] No.41893243[source]
Thank you! A blame focused culture rewards the least amount of risk taking, the most ass covering, and so much useless bureaucracy because you naturally accumulate systems to convert individual blame to collective blame like change review boards and multiple sign-offs for everything. Folks do the bare minimum because that's the safe subset.

I'm never going back to that kind of culture, it's soul crushing.

24. throwawayian ◴[] No.41893574[source]
This approach is what’s caused so many cybersecurity, privacy and preventable data breaches.

When everyone is responsible, nobody is.

25. jjav ◴[] No.41893615{3}[source]
> Conversely, one skilled senior can often outperform a hundred juniors using simpler tools, but management just doesn’t see it that way.

Management is correct, if that's the question.

In some very rare bleeding edge cases it is true. Everyone wants to think their company is working on those areas. But here's the truth: your company (for any "you") is actually not.

If you're writing code that is inventing new techniques and pushing the hardware to limits not before imagined (say, like John Carmack) then yes, a single superstar is going to outperform a hundred juniors who simply won't be able to do it, ever.

Asymptotically close to 100% of software jobs are not like that (unfortunately). They're just applying common patterns and libraries to run-of-the-mill product needs. A superstar can outperform maybe 3-4 juniors, but that's about it. The job isn't that hard, and there are only so many hours in a day.

This is made worse today because neither quality nor performance matters anymore (which is depressing, but true). It used to be that software had to work fast enough on normal hardware, and if it had bugs it meant shipping new media to all customers, which was expensive. So quality and performance mattered. Today companies test everything in production and continuously push updates, and performance doesn't matter because you just spin up 50 more instances in AWS if one won't do (let the CFO worry about the AWS bill).

replies(2): >>41894776 #>>41894867 #
26. ◴[] No.41893666[source]
27. chikere232 ◴[] No.41894181[source]
This might be true if your only options are "find someone to blame" or "add more bureaucratic process", but in a lot of cases you also have the option to "fix the technology".

Even in aviation and surgery, improving the technology tends to be more effective than firing the pilot/surgeon or adding more paperwork. If you find there's a button that crashes the plane, fix the button. Don't fire the pilot or add another hour of education on how to not press that button.

28. chikere232 ◴[] No.41894237[source]
Everyone, including the most competent, makes mistakes though.

If single small mistakes have disastrous consequences, the system is probably too brittle. Approaching it from a blameless angle gives you a better chance of fixing it, as people will cooperate to fix the issue rather than be busy not getting fired

You can still identify and fire/relocate/retrain incompetent people, but that is better done as a continuous thing than as a response to a mishap

29. SupremumLimit ◴[] No.41894398{3}[source]
Yes, I meant something else, and of course I'm not advocating for hard to understand code. However, as the sibling comment suggests, what's obscure or hard is relative.

The problem with indiscriminate application of "code has to be easy to understand" is that it can be used to make pretty much anything, including most features of your language, off limits. After all, a junior developer may not be familiar with any given feature. Thus, we can establish no reasonable lower bound on allowed complexity using such a guideline.

Conversely, what’s too simple or too difficult is very specific to the person. Somebody who’s coming to a junior developer role from a data science background might have no problem reading 200 lines of SQL. Somebody with FP background might find data transformation pipelines simple to understand but class hierarchies difficult, and so on. So the "easy to understand for anyone" guideline proves less than useful for establishing an upper bound on allowed complexity as well.

Therefore, I find that it’s more useful to talk about a lower and upper bound of what’s required and acceptable. There are things we should reasonably expect a person working on the project to know or learn (such as most language features, basic framework features, how to manipulate data, how to debug etc.) regardless of seniority. On the other hand, we don’t want to have code that’s only understood by one or two people on the team, so perhaps we say that advanced metaprogramming or category theory concepts should be applied very sparingly.

Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.

replies(1): >>41903901 #
30. yunohn ◴[] No.41894745[source]
Yep, this is accurate IME.

In modern corporate blameless culture, nobody takes the blame. Now this has its own variety of issues, it’s not perfect. But if you look at blame culture, then exactly like OP said, you have to stop building and start protecting. You know who has time for that? The underperforming lazy employee.

replies(1): >>41895624 #
31. aleph_minus_one ◴[] No.41894776{4}[source]
> A superstar can outperform maybe 3-4 juniors but that's about it. The jobs isn't that hard and there are only so many hours in a day.

There do exist (I would even claim quite a few) jobs/programming tasks that superstars are capable of but a junior developer would need years of training to do or solve (think, for example, of turning a deep theoretical breakthrough in (constructive) mathematics into a computer program, or of programming involving deep, obscure x86 firmware trivia). But I agree with your other judgement that such programming tasks are not very common in industry.

replies(1): >>41895343 #
32. fcatalan ◴[] No.41894867{4}[source]
Programming doesn't happen in a vacuum, and experience and institutional knowledge can account for many orders of magnitude of performance. A trivial example/recent anecdote:

The other day, two of our juniors came to see me; they had been stumped by the wrong result of a very complex query for 2 hours. I didn't even look at the query, just scrolled through the results for 10 seconds and instantly knew exactly what was wrong. This is not because I'm better at SQL than them, or a Carmack-level talent. It's because I've known the people in the results listing for basically all my life, so I instantly knew who didn't belong there and, very probably, why he was being wrongly selected.

Trivial, but 10 seconds vs. 4 man hours is quite the improvement.

replies(1): >>41898071 #
33. willcipriano ◴[] No.41894956[source]
People who aren't the ones getting blamed all the time call it accountability culture rather than blame culture.

Some people want to be holding the bag, if the bag is full of money. All risk no reward won't attract accountable people.

replies(1): >>41895601 #
34. ifyoubuildit ◴[] No.41895343{5}[source]
You don't even need to go to rocket science for this.

3-10 juniors can make a massive expensive mess of a crud app that costs $x0k a month in amazon spend and barely works, while someone who knows what they're doing could cobble it together on a lamp stack running under their desk for basically nothing.

Knowledge/skills/experience can have massive impact.

replies(1): >>41897117 #
35. ryandrake ◴[] No.41895601{3}[source]
This is why CEOs and other very senior leadership people have no problem accepting “blame.” Because their contracts are set up so they get even richer no matter what they do! If your company does well, the CEO takes credit and becomes even more fabulously wealthy. If your company does poorly, the CEO takes the blame and leaves on a golden parachute, becoming only moderately more wealthy. Either way, they become more wealthy.

If screwing up my job meant getting fired with a $5M golden parachute, I would be more than happy to be assigned individual blame!

replies(1): >>41903499 #
36. scott_w ◴[] No.41895624{3}[source]
I want to offer a mild counter which is that blameless post mortems shouldn’t mean people escape accountability for misconduct. Only that we focus on how to improve systems.

If, as an accountable leader, you realise that someone ignored the processes and protections, you still have the right to hold them accountable for that. If someone is being lazy, it’s your job to identify that and fire that person.

I won’t pretend it’s easy, and I fully appreciate organisations struggle to make that happen for the reasons you and the article raise.

replies(1): >>41895659 #
37. yunohn ◴[] No.41895659{4}[source]
I’m not advocating for avoiding accountability for misconduct/malice - but in most companies, things are convoluted enough that individual blame is often misplaced, one is always juggling various limitations and issues trying to deliver.

However, the broader problem I have with blame-focus is that it only applies to Individual Contributor roles. I’ve never heard of middle management being held accountable for any actions whatsoever. And obviously not for less “egregious” misconduct like toxicity, workload, favoritism, etc. Heck middle managers can be completely ignorant of their reports’ actual work and survive for decades.

In my experience at FAANG, the worst of managers will get reassigned to a different team, and maybe have their promotion delayed. Occasionally, I’ve seen VPs get put on nearly a year of gardening leave after major misconduct like sexual harassment - and then they leave and become a C level at a smaller company. And of course, CEOs are fired only for complete mismanagement and company failure - and that’s a very high bar and can take forever until shareholders loudly complain.

Basically, my point is that you can only blame the actual workers at the end of the chain - everyone else along the way is easily shielded and escapes blame.

replies(1): >>41896238 #
38. scott_w ◴[] No.41896238{5}[source]
> I’m not advocating for avoiding accountability for misconduct/malice - but in most companies, things are convoluted enough that individual blame is often misplaced, one is always juggling various limitations and issues trying to deliver.

I didn’t think you were advocating for the situation that occurs. I was merely proposing that “blameless” processes are possibly mis-assigned blame (heh) for company cultures that become centred around ducking accountability.

39. jjav ◴[] No.41897117{6}[source]
> 3-10 juniors can make a massive expensive mess of a crud app that costs $x0k a month in amazon spend and barely works, while someone who knows what they're doing could cobble it together on a lamp stack running under their desk for basically nothing.

Yes! Absolutely. It will be faster and more reliable and an order of magnitude (or more) cheaper.

Alas, I'm slowly (grudgingly and very slowly) coming to terms with the fact that absolutely nobody cares. Companies are happy to pay AWS 100K/mo for that ball of gum that becomes unresponsive four times a day, rather than pay for one expert to build a good system.

40. oooyay ◴[] No.41897147[source]
I feel like this comment is emblematic of a dramatic misunderstanding of blameless post mortems. They're pretty simple; systems that fail can be attributed to teams, practices, systems of understanding, etc which is diametrically opposed to individuals. Blameless culture isn't a culture without blame, in fact there's plenty of blame that should be listed in contributing factors - including the accountable team (if there is one). There's just no, "John Smith did x and y failed" because that's rarely, if ever, succinctly how systems fail.
41. jjav ◴[] No.41898071{5}[source]
> Trivial, but 10 seconds vs. 4 man hours is quite the improvement.

Sure. But now try sustaining that impact multiplier every minute, 8 hours a day for a year.

replies(1): >>41903576 #
42. cma ◴[] No.41898222[source]
Why does CI require forms?
43. potato3732842 ◴[] No.41903458[source]
Part of the problem is the asymmetry between defined, concentrated harm and diffuse, hard-to-quantify, loosely spread harm.

It's easy to quantify the harm of any specific failure. It's hard to quantify the harm of incentivizing people who can fly by the seat of their pants (metaphorically and literally) and generally succeed out of an industry, while incentivizing button pushers, checklist runners, and spreadsheet fillers into it. To say nothing of the fact that a bureaucracy built of these people has every incentive not to study this effect, and to find in their own favor if they ever do.

44. potato3732842 ◴[] No.41903499{4}[source]
CEOs get way f-ing richer when they succeed than when they pull the golden parachute cord.

Everyone's time is finite. Would you rather spend a few years making high five figures slogging through a failure, or mid six figures succeeding? It's the same mental calculation, just with more zeros to the left of the decimal.

45. potato3732842 ◴[] No.41903576{6}[source]
That's a red herring. It's a question of how often that multiplier has to show up to make one approach or the other the winner.
46. liquidpele ◴[] No.41903739{4}[source]
Ah that’s the original, I was going off an old memory.
47. jeegsy ◴[] No.41903901{4}[source]
> Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.

Great point. This would also apply in the context of DEI hiring initiatives.

48. rightbyte ◴[] No.41905283[source]
> The winning strategy is not to be productive, it's to maximize safety margin, which means minimizing responsibility and maximizing barriers to anyone who might ask anything of you.

Ye. You quickly learn that with Scrum. Pad a lot and report the estimated hours at an even pace. Make the burn-down line straight.

People doing actual work are at a big disadvantage versus those who put their time into manipulating the setup, if the bosses are oblivious. It is really frustrating as a new grad until you realize how it is done.