I’ve encountered a similar phenomenon with regard to skill as well: people want to ensure that every part of the software system can be understood and operated by the least skilled members of the team (meaning completely inexperienced people).
But as with personal responsibility, it’s worth asking what that approach costs, and why we shouldn’t have baseline expectations of skill, or expect that some parts of the software system require higher levels of expertise.
Conversely, one skilled senior using simpler tools can often outperform a hundred juniors, but management just doesn’t see it that way.
However, management tends to align with reducing the baseline level of skill, presumably because it’s convenient for various business reasons to have everyone be a replaceable “resource”, and to have new people quickly become productive without requiring expensive training.
Ironically, this is one of the factors that drives ever faster job hopping, which reinforces the need for replaceable “resources”, and on it goes.
Did you mean something else?
Extreme B: every colleague is required to read and be able to recite the x86 and C specifications and the Postgres manual, and must have an IQ of 190+.
What is obscure or hard to understand is subjective.
Management is correct, if that's the question.
In some very rare bleeding-edge cases it is true. Everyone wants to think their company is working in one of those areas. But here's the truth: your company (for any "you") is actually not.
If you're writing code that is inventing new techniques and pushing the hardware to limits not before imagined (say, like John Carmack) then yes, a single superstar is going to outperform a hundred juniors who simply won't be able to do it, ever.
Asymptotically close to 100% of software jobs are not like that (unfortunately). They're just applying common patterns and libraries to run-of-the-mill product needs. A superstar can outperform maybe 3-4 juniors, but that's about it. The work isn't that hard, and there are only so many hours in a day.
This is made worse today because neither quality nor performance matters anymore (which is depressing, but true). It used to be that software had to run fast enough on normal hardware, and if it had bugs, fixing them meant shipping new media to all customers, which was expensive. So quality and performance mattered. Today companies test everything in production and push updates continuously, and performance doesn't matter because you just spin up 50 more instances in AWS if one won't do (let the CFO worry about the AWS bill).
The problem with indiscriminate application of "code has to be easy to understand" is that it can be used to make pretty much anything, including most features of your language, off limits. After all, a junior developer may not be familiar with any given feature. Thus, we can establish no reasonable lower bound on allowed complexity using such a guideline.
Conversely, what’s too simple or too difficult is very specific to the person. Somebody who’s coming to a junior developer role from a data science background might have no problem reading 200 lines of SQL. Somebody with an FP background might find data transformation pipelines simple to understand but class hierarchies difficult, and so on. So the "easy to understand for anyone" guideline proves less than useful for establishing an upper bound on allowed complexity as well.
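To make that concrete, here’s a toy sketch (in Python; the data and names are invented for illustration) of the same filtering task written two ways. Which version reads as "simpler" depends entirely on the reader’s background:

    # Illustrative only: invented data, same task done two ways.
    people = [{"name": "Ada", "age": 36}, {"name": "Tim", "age": 12}]

    # Pipeline style: one expression, natural for an FP or data background.
    adults = sorted(p["name"] for p in people if p["age"] >= 18)

    # Imperative style: explicit steps, often easier for readers used to loops.
    adults_imperative = []
    for p in people:
        if p["age"] >= 18:
            adults_imperative.append(p["name"])
    adults_imperative.sort()

    assert adults == adults_imperative  # identical result either way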
Therefore, I find that it’s more useful to talk about a lower and upper bound of what’s required and acceptable. There are things we should reasonably expect a person working on the project to know or learn (such as most language features, basic framework features, how to manipulate data, how to debug, etc.) regardless of seniority. On the other hand, we don’t want to have code that’s only understood by one or two people on the team, so perhaps we say that advanced metaprogramming or category theory concepts should be applied very sparingly.
Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.
There do exist (I would even claim quite a few) jobs/programming tasks that superstars are capable of, but that a junior developer would need at least years of training to be able to do/solve (think, for example, of turning a deep theoretical breakthrough in (constructive) mathematics into a computer program, or of programming involving deep, obscure x86 firmware trivia). But I agree with your other judgement that such programming tasks are not very common in industry.
The other day, two of our juniors came to see me; they had been stumped by the wrong results of a very complex query for two hours. I didn't even look at the query, just scrolled through the results for 10 seconds and instantly knew exactly what was wrong. This is not because I'm better at SQL than they are, or some Carmack-level talent. It's because I've known the people in the results listing for basically all my life, so I instantly knew who didn't belong there, and very probably why he was being wrongly selected.
Trivial, but 10 seconds vs. 4 man-hours is quite the improvement.
3-10 juniors can make a massive, expensive mess of a CRUD app that costs $x0k a month in Amazon spend and barely works, while someone who knows what they're doing could cobble it together on a LAMP stack running under their desk for basically nothing.
Knowledge/skills/experience can have a massive impact.
Yes! Absolutely. It will be faster and more reliable and an order of magnitude (or more) cheaper.
Alas, I'm slowly (grudgingly and very slowly) coming to terms with the fact that absolutely nobody cares. Companies are happy to pay AWS $100K/mo for that ball of gum that becomes unresponsive four times a day, rather than pay one expert to build a good system.
Great point. This would also apply in the context of DEI hiring initiatives.