Holy crap, I'm going to save that quote forever. I have a co-worker who treats every line of bad code committed as a reason to add ever more layers to CI. Yo, we caught it in testing. There's no reason to add another form we have to fill out.
I’ve encountered a similar phenomenon with regard to skill as well: people want to ensure that every part of the software system can be understood and operated by the least skilled members of the team (meaning completely inexperienced people).
But, as with personal responsibility, it’s worth asking what the costs of that approach are, and why we shouldn’t either have baseline expectations of skill or expect that some parts of the software system require higher levels of expertise.
Conversely, one skilled senior can often outperform a hundred juniors using simpler tools, but management just doesn’t see it that way.
The blame game drives the exact same bureaucratization process, but faster, because all of the most capable and powerful players have a personal incentive to create insulating processes / excuses that prevent them from winding up holding the bag. Everyone in this thread at time of writing is gleefully indulging in wishful thinking about finally being able to hold the team underperformer accountable, but these expectations are unrealistic. Highly productive individuals do not tend to win the blame game because their inclinations are the exact opposite of the winning strategy. The winning strategy is not to be productive, it's to maximize safety margin, which means minimizing responsibility and maximizing barriers to anyone who might ask anything of you. Bureaucracy goes up, not down, and anyone who tries to be productive in this environment gets punished for it.
"Blaming the system" doesn't prevent bureaucracy from accumulating, obviously, but it does prevent it from accumulating in this particular way and for this particular reason.
Eloquently put. Also, likely false.
E.g., soft-realtime audio synthesis applications like Pure Data and Supercollider have had essentially the same audio engines since the 1990s. And users see any missed audio deadline as a negative outcome. More to the point-- for a wide array of audio synthesis/filtering/processing use cases, the devs who maintain these systems consider such negative outcomes as systemic failures which must be fixed by the devs, not the users. I can't think of a more precise example of "blameless culture" than digital artists/musicians who depend on these devs to continue fitting this realtime scheduling peg into the round hole that is the modern (and changing!) multitasking OS.
While there have been some changes over the last 30 years, in no way have any of these applications seen an explosion in the number of systems they employ to avoid negative outcomes. There's one I can think of in Pure Data, and it's optional.
IMO, there's nothing noteworthy about what I wrote-- it's just one domain in probably many across the history of application development. Yet according to your "law" this is exceptional in the history of systems. That doesn't pass the smell test to me, so I think we need to throw out your ostensible law.
You can't stop personal failures from happening because people are people. You can design processes to minimize or eliminate those personal failures from yielding negative outcomes.
However, management tends to align with reducing the baseline level of skill, presumably because it’s convenient for various business reasons to have everyone be a replaceable “resource”, and to have new people quickly become productive without requiring expensive training.
Ironically, this is one of the factors that drives ever faster job hopping, which reinforces the need for replaceable “resources”, and on it goes.
Did you mean something else?
Extreme B: every colleague is required to read and be able to recite the x86 and C specifications and the Postgres manual, and must have an IQ of 190+.
What is obscure or hard to understand is subjective.
> devs who maintain these systems
What is meant by “system” here? Computer application? Hardware?
Every check has a cost. For some checks, the cost is more than it prevents. Don't add those checks, even after the negative outcome happens.
Some feedback must exist (calm, objective, and possibly private). Eventually it is up to the manager, or the manager’s manager, to be aware of what is happening and take action if critically needed.
“I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.”
I'm never going back to that kind of culture, it's soul crushing.
When everyone is responsible, nobody is.
Management is correct, if that's the question.
In some very rare bleeding edge cases it is true. Everyone wants to think their company is working on those areas. But here's the truth: your company (for any "you") is actually not.
If you're writing code that is inventing new techniques and pushing the hardware to limits not before imagined (say, like John Carmack) then yes, a single superstar is going to outperform a hundred juniors who simply won't be able to do it, ever.
Asymptotically close to 100% of software jobs are not like that (unfortunately). They're just applying common patterns and libraries to run-of-the-mill product needs. A superstar can outperform maybe 3-4 juniors, but that's about it. The job isn't that hard, and there are only so many hours in a day.
This is made worse today because neither quality nor performance matters anymore (which is depressing, but true). It used to be that software had to run fast enough on normal hardware, and if it had bugs, it meant shipping new media to all customers, which was expensive. So quality and performance mattered. Today companies test everything in production and continuously push updates, and performance doesn't matter because you just spin up 50 more instances in AWS if one won't do (let the CFO worry about the AWS bill).
Even in aviation and surgery, improving the technology tends to be more effective than firing the pilot/surgeon or adding more paperwork. If you find there's a button that crashes the plane, fix the button. Don't fire the pilot or add another hour of education on how to not press that button.
If single small mistakes have disastrous consequences, the system is probably too brittle. Approaching it from a blameless angle gives you a better chance of fixing it, as people will cooperate to fix the issue rather than be busy not getting fired
You can still identify and fire/relocate/retrain incompetent people, but that is better done as a continuous thing than as a response to a mishap
The problem with indiscriminate application of "code has to be easy to understand" is that it can be used to make pretty much anything, including most features of your language, off limits. After all, a junior developer may not be familiar with any given feature. Thus, we can establish no reasonable lower bound on allowed complexity using such a guideline.
Conversely, what’s too simple or too difficult is very specific to the person. Somebody who’s coming to a junior developer role from a data science background might have no problem reading 200 lines of SQL. Somebody with FP background might find data transformation pipelines simple to understand but class hierarchies difficult, and so on. So the "easy to understand for anyone" guideline proves less than useful for establishing an upper bound on allowed complexity as well.
Therefore, I find that it’s more useful to talk about a lower and upper bound of what’s required and acceptable. There are things we should reasonably expect a person working on the project to know or learn (such as most language features, basic framework features, how to manipulate data, how to debug etc.) regardless of seniority. On the other hand, we don’t want to have code that’s only understood by one or two people on the team, so perhaps we say that advanced metaprogramming or category theory concepts should be applied very sparingly.
Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.
In modern corporate blameless culture, nobody takes the blame. Now this has its own variety of issues, it’s not perfect. But if you look at blame culture, then exactly like OP said, you have to stop building and start protecting. You know who has time for that? The underperforming lazy employee.
There do exist (I would even claim quite a few) jobs/programming tasks that superstars are capable of but a junior developer would need at least years of training to do/solve (think, for example, of turning a deep theoretical breakthrough in (constructive) mathematics into a computer program, or of programming that involves deep, obscure x86 firmware trivia). But I agree with your other judgement that such programming tasks are not very common in industry.
The other day, two of our juniors came to see me; they had been stumped by the wrong result of a very complex query for 2 hours. I didn't even look at the query, just scrolled through the results for 10 seconds and instantly knew exactly what was wrong. This is not because I'm better at SQL than they are, or some Carmack-level talent. It's because I've known the people in the results listing for basically all my life, so I instantly knew who didn't belong there and, very probably, why he was being wrongly selected.
Trivial, but 10 seconds vs. 4 man-hours is quite the improvement.
Some people want to be holding the bag, if the bag is full of money. All risk no reward won't attract accountable people.
3-10 juniors can make a massive, expensive mess of a CRUD app that costs $x0k a month in Amazon spend and barely works, while someone who knows what they're doing could cobble it together on a LAMP stack running under their desk for basically nothing.
Knowledge/skills/experience can have a massive impact.
If screwing up my job meant getting fired with a $5M golden parachute, I would be more than happy to be assigned individual blame!
If, as an accountable leader, you realise that someone ignored the processes and protections, you still have the right to hold them accountable for that. If someone is being lazy, it’s your job to identify that and fire that person.
I won’t pretend it’s easy, and I fully appreciate organisations struggle to make that happen for the reasons you and the article raise.
However, the broader problem I have with blame-focus is that it only applies to Individual Contributor roles. I’ve never heard of middle management being held accountable for any actions whatsoever, and obviously not for less “egregious” misconduct like toxicity, workload, favoritism, etc. Heck, middle managers can be completely ignorant of their reports’ actual work and survive for decades.
In my experience at FAANG, the worst of managers will get reassigned to a different team, and maybe have their promotion delayed. Occasionally, I’ve seen VPs get put on nearly a year of gardening leave after major misconduct like sexual harassment - and then they leave and become a C level at a smaller company. And of course, CEOs are fired only for complete mismanagement and company failure - and that’s a very high bar and can take forever until shareholders loudly complain.
Basically, my point is that you can only blame the actual workers at the end of the chain - everyone else along the way is easily shielded and escapes blame.
I didn’t think you were advocating for the situation that occurs. I was merely proposing that “blameless” processes are possibly mis-assigned blame (heh) for company cultures that become centred around ducking accountability.
Yes! Absolutely. It will be faster and more reliable and an order of magnitude (or more) cheaper.
Alas, I'm slowly (grudgingly and very slowly) coming to accept that absolutely nobody cares. Companies are happy to pay AWS $100K/mo for that ball of gum that becomes unresponsive four times a day rather than pay one expert to build a good system.
It's easy to quantify the harm of any specific failure. It's hard to quantify the harm of incentivizing the people who can fly by the seat of their pants (metaphorically and literally) and generally succeed to leave an industry, while incentivizing button pushers, checklist runners, and spreadsheet fillers to enter it. To say nothing of the fact that a bureaucracy built of these people has every incentive not to study that harm, and to find in its own favor if it ever does.
Everyone's time is finite. Would you rather spend a few years making high five figures slogging through a failure, or mid six figures succeeding? It's the same mental calculation, just with more zeros to the left of the decimal.
Great point. This would also apply in the context of DEI hiring initiatives.
Yeah. You quickly learn that with Scrum: pad a lot and report the estimated hours at an even pace. Make the burndown line straight.
People doing actual work are at a big disadvantage versus those who put their time into manipulating the setup, if the bosses are oblivious. It's really frustrating as a new grad until you realize how it's done.