If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability along one axis can drive complexity along another. When you compose code into bite-size readable chunks, the actual logic easily gets smeared across many (sometimes dozens of) functions, making it very hard to figure out what the code actually does, even though every function checks all the boxes for readability, single responsibility, and so on.
E.g. userAuthorized(request) is true, but why is it true? Well, because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to have all that logic in one place; even if that isn't the local optimum of readability, it may be the global one, because constantly skipping between methods or modules to figure out what is happening is also incredibly taxing.
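A toy sketch of that decomposition (all names and checks are hypothetical) shows the trade-off: each helper reads cleanly on its own, but answering "why is it true?" means touring every one of them:

```python
# Hypothetical sketch of the decomposition described above; the names
# (user_authorized, username_present, password_correct) are illustrative.

def username_present(request: dict) -> bool:
    # In real code this would decompose further still.
    return bool(request.get("username"))

def password_correct(request: dict) -> bool:
    # Stand-in for a chain of hashing/lookup helpers.
    return request.get("password") == "hunter2"

def user_authorized(request: dict) -> bool:
    # Reads cleanly, but "why is this True?" forces a tour of every helper.
    return username_present(request) and password_correct(request)

def user_authorized_inline(request: dict) -> bool:
    # The inlined alternative keeps the whole decision visible in one place.
    return bool(request.get("username")) and request.get("password") == "hunter2"
```

Both versions compute the same answer; the question is only which one is cheaper to hold in your head while debugging.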
I'm both bothered and intrigued by the industry returning to, what I call, "pile-of-if-statements architecture". It's really easy to think it's simple, and it's really easy to think you understand, and it's really easy to close your assigned Jira tickets; so I understand why people like it.
People get assigned a task, they look around and find a few places they think are related, then add some if-statements to the pile. Then they test; if the tests fail they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve the problem. It's released to production, and it works for a high enough percentage of cases that the failure cases don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you won't be held responsible, and the cognitive load is always low.
But the thing is... I'm not sure there's a better alternative.
You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.
Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.
"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?
You can't build careful bug-free abstractions in corporate environments.
So, is pile-of-if-statements the best we can do for business software?
I'm not sure that even figures into how we rate the quality of business software. Things that matter:
1. How fast can I or someone else change it next time to fulfill the next requirements?
2. How often does it fail?
3. How much money does the code save or generate by existing?
Good architecture can affect 1 and 2 in some circumstances but not every time and most likely not forever at the rate people are starting to produce LLM garbage code. At some point we'll just compile English directly into bytecode and so architecture will matter even less. And obviously #3 matters by far the most.
It's obviously a shame for whoever appreciates the actual art / craft of building software, but that isn't really a thing that matters in business software anyway, at least for the people paying our salaries (or to the users of the software).
You’ll enjoy the Big Ball of Mud paper[1].
Real world systems are prone to decay. You first of all start with a big ball of mud because you’re building a system before you know what you want. Then as parts of the system grow up, you improve the design. Then things change again and the beautiful abstraction breaks down.
Production software is always changing. That’s the beauty of it. Your job is to support this with a mix of domain modeling, good enough abstraction, and constructive destruction. Like a city that grows from a village.
[1] https://laputan.org/mud/mud.html
[2] my recap (but the paper is very approachable, if long) https://swizec.com/blog/big-ball-of-mud-the-worlds-most-popu...
It would just keep adding what it called "heuristics", which were just if statements that tested for a specific condition that arose during the bug. I could write 10 tests for a specific type of bug, and it would happily fix all of them. When I add another test with the same kind of bug, it obviously fails, because the fix Codex came up with was a bunch of if statements that matched the first 10 tests.
Thank you for giving a perfect example of what I was describing.
The thing is, you actually can make the software work this way, you just have to add enough if-statements to handle all cases--or rather, enough cases that the manager is happy.
Also, many developers are suffering from severe cognitive load incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et al., and all I can think about is how smooth my experience is from my particular ivory tower. We solved 95% of the problems HN complains about, completely and permanently, decades ago, and you can find a nearly perfect vertical of these solutions with 2-3 vendors right now. Your ten-man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.
The model of having a circle of ancient greybeards in charge of carefully updating the sacred code to align with the business requirements, while it seems bizarre bordering on something out of WH40K, actually works pretty well and has worked pretty well everywhere I've encountered it.
Attempts to refactor or replace these systems with something more modern have universally been an expensive disaster.
To clarify, when I say "not-that-smart-people", I don't mean "stupid people". You need to be beyond some basic level of intelligence in order to have the capability to overcomplicate a codebase. For lack of a better metric, consider IQ. If your IQ is below 80, you are not going to work day-to-day overcomplicating a codebase. You need to be slightly above average intelligence (not stupid, but also "not-that-smart") to find yourself in that position.
Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).
But those original personas still stick with me and have been incredibly valuable in my career to understand and work effectively with other engineers.
Mort - the pragmatic engineer who cares most about the business outcome. If a "pile of if statements" gets the job done quickly and meets the requirements, that's what Mort ships. Mort became a pejorative term at Microsoft, unfortunately. VB developers were often Morts; Access developers were often Morts.
Elvis - the rockstar engineer who cares most about doing something new and exciting. Being the first to use the latest framework or technology. Getting visibility and accolades for innovation. The code might be a little unstable - but move fast and break things right? Elvis also cares a lot about the perceived brilliance of their code - 4 layers of abstraction? That must take a genius to understand and Elvis understands it because they wrote it, now everyone will know they are a genius. For many engineers at Microsoft (especially early in career) the assumption was (and still is largely) that Elvis gets promoted because Elvis gets visibility and is always innovating.
Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.
None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.
I have a hard time separating the why and the what so I document both.
The biggest offender of "documenting the what" is:
x = 4 // assign 4 to x
Yeah, don't do that. Don't mix a lot of comments into the code. It makes it ugly to read, and the context switching between code and comments is hard. Instead do something like:
// I'm going to do
// a thing. The code
// does the thing.
// We need to do the
// thing, because the
// business needs a
// widget and stuff.
setup();
t = setupThing();
t.useThing(42);
t.theWidget(need=true);
t.alsoOtherStuff();
etc();
etc();
Keep the code and comments separate, but stating the what is better than no comments at all, and it does help reduce cognitive load.

Sales contracts with weird conditions and odd packaging and contingencies? Pile of if statements.
The other great model for business logic is a spreadsheet, which is well modeled by SQL, a superset of spreadsheet functionality.
So piles of if’s and SQL. Yeah.
Elegant functional or OOP models are usually too rigid unless they are scaffolding to make piles of conditions and relational queries easier to run.
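A hedged sketch of that "piles of ifs and SQL" idea (the table layout and rule values are invented for illustration): shipping rules live in a relational table, and a query picks the applicable one instead of a nest of ifs.

```python
# Business rules as relational data: shipping tiers live in a table,
# and one query replaces a pile of hard-coded conditionals.
# Regions, thresholds, and costs here are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipping_rules (
        region TEXT, min_order REAL, cost REAL
    )
""")
conn.executemany(
    "INSERT INTO shipping_rules VALUES (?, ?, ?)",
    [("US", 0, 9.99), ("US", 50, 0.0), ("EU", 0, 14.99), ("EU", 80, 0.0)],
)

def shipping_cost(region: str, order_total: float) -> float:
    # Pick the most specific rule the order qualifies for.
    row = conn.execute(
        """SELECT cost FROM shipping_rules
           WHERE region = ? AND min_order <= ?
           ORDER BY min_order DESC LIMIT 1""",
        (region, order_total),
    ).fetchone()
    return row[0]
```

Adding a new tier or promotion then means inserting a row, not editing code.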
You're right that the business logic is gonna be messy, and that's because nobody really cares, and they can offload the responsibility to developers, or anyone punching it in.
On the other hand, separating "good code" and "bad code" can have horrible outcomes too.
One "solution" I saw in a fintech I worked at, was putting the logic in the hands of business people itself, in the form of a decision engine.
Basically it forced the business itself to maintain its own ball of mud. It was impossible to test, impossible to understand, and even impossible to simulate. Eventually software operators were hired: basically junior-level developers using a graphical interface for writing the code.
It was rewritten a couple times, always with the same outcome of everything getting messy after two or three years.
If you find yourself sprinkling ifs everywhere, try to lift them up; they'll congregate in the same place eventually, so all of your variability is implemented and documented in a single place, with no need to abstract anything.
It’s very useful to model your inputs and outputs precisely. Postpone figuring out unified data types as long as possible and make your programming language nice to use with that decision.
Hierarchies of classes, patterns etc are a last resort for when you’re actually sure you know what’s going on.
I’d go further and say you don’t need functions or files as long as your program is easy to manage. The only reason you’d need separate files is if your VCS is crippled, or if you’re very sure that these datetime handlers need to be reused everywhere consistently.
Modern fullstack programming is filled with models, middleware, controllers, views... as if anyone needs all of that separation up front.
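A minimal sketch of that "lift your ifs" idea (the order fields and tiers are invented): classify once at the top, then let downstream code branch on the single result.

```python
# "Lifting your ifs": instead of sprinkling checks through the flow,
# decide the variant once at the top and branch on the result.
# The order fields and tier names are invented for illustration.

def classify(order: dict) -> str:
    # All variability is decided (and documented) in one place.
    if order.get("express") and order["total"] >= 100:
        return "express-free"
    if order.get("express"):
        return "express"
    if order["total"] >= 100:
        return "standard-free"
    return "standard"

def shipping_fee(order: dict) -> float:
    # Downstream code branches on one value instead of re-testing flags.
    return {"express-free": 0.0, "express": 25.0,
            "standard-free": 0.0, "standard": 5.0}[classify(order)]
```

When a new variant appears, there is exactly one place to add it, and exactly one place to read to understand all of them.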
Things make more sense when the data structure lives in a world where most, if not all, illegal states become unrepresentable. But given that we often end up building APIs in representations with really weak type systems, doing that becomes impossible.
If you make a change at the wrong place, you add more complexity than if you put the change in the right place. You often see the same thing with junior developers, in that case due to a limited mental model of the code. You give them a task that from a senior developer would result in a 2 line diff and they come back changing 45 lines.
Sometimes last mile software turns into these abstractions but often not.
I’ve worked with very smart devs that try to build these abstractions too early, and once they encounter reality you just have a more confusing version of if statement soup.
In particular these variables need to be extremely well named, otherwise people reading the code will still need to remember what exactly is abstracted if the wording doesn't exactly fit their vision. E.g.
> isSecure = condition4 && !condition5
More often than not the real proper name would be "shouldBeSecureBecauseWeAlsoCheckedCondition3Before"
To a point, avoiding the abstraction and putting a comment instead can have better readability. The author's "smart" code could as well be
```
if (val > someConstant // is valid
&& (condition2 || condition3) // is allowed
&& (condition4 && !condition5) // is secure
) {
...
}
```
Let's take a recipe:
Ingredients:
large bowl
2 eggs
200 grams sugar
500 grams flour
1/2 tsp soda
Steps:
Crack the eggs into a bowl. Add sugar and whisk. Sift the flour. Add the soda.
When following the instructions, you have to keep referring back to the ingredients list and searching for the quantity, which massively burdens you with "cognitive load". However, if you inline things: Crack 2 eggs into a large bowl. Add 200g sugar and whisk. Sift in 500g of flour. Add 1/2 tsp soda.
Much easier to follow!

Project Manager: "Can we ship an order to multiple addresses?"
Grey Beard: "No. We'd have to change thousands of random if-statements spread throughout the code."
Project Manager: "How long do you think that would take?"
Grey Beard: "2 years or more."
Project Manager: "Okay, we will break you down--err, I mean, we'll need to break the task down. I'll schedule long meetings until you relent and commit to a shorter time estimate."
Grey Beard eventually relents and gives a shorter time estimate for the project, and then leaves the company for another job that pays more half-way through the project.
Certainly, there are such people who simply don't care.
However I would also say that corporations categorically create an environment where you are unable to care - consider how short software engineer tenures are! Any even somewhat stable business will likely have had 3+ generations of owner by the time you get to them. Owner 1 is the guy who wrote 80% of the code in the early days, fast and loose, and got the company to make payroll. Owner 2 was the lead of a team set up to own that service plus 8 others. Owner 3 was a lead of a sub-team that split off from that team and owns that service plus 1 other related service.
Each of these people will have different styles - owner 1 hated polymorphism and everything is component-based, owner 2 wrapped all the existing logic into a state machine, owner 3 realized both were leaky abstractions that were difficult to work with, so they tried to bring some semblance of a sustainable path forward to the system, but were busy with feature work. And owner 3 did not get any handoff from owner 2, because owner 2 ragequit the second enough of their equity vested. And now there's you. You started about 9 months ago and know some of the jargon and where some of the bodies are buried. You're accountable for some amount of business impact, and generally can't just go rewrite stuff. You also have 6 other people on call for this service with you, with varying levels of familiarity with the current code. You have 2.25 years left. Good luck.
Meanwhile I've seen codebases owned by the same 2 people for over 10 years. It's night and day.
IMO a lot of (software) engineering wisdom and best practices fails in the face of business requirements and logic. In hard engineering you can push back a lot harder because it's more permanent and lives are more often on the line, but with software, it's harder to do so.
I truly believe the constraints of fast-moving business and inane, nonsensical requests for short-term gains (to keep your product going) make it nearly impossible to do proper software engineering, and actually require these if-else nests to work properly. So much so that I think we should distinguish between software engineering and product engineering.
I suspect that we agree with each other and you misread my earlier comment.
I think comments in general are underrated. You don't need to annotate every line like a freshman programming assignment, but on the other hand most supposed self-documenting code just isn't.
I am convinced this behaviour and the one you described are due to optimising for swe benchmarks that reward 1-shotting fixes without regard to quality. Writing code like this makes complete sense in that context.
I think I'm not smart enough for it. I can't really take anything new away from it, mainly just a message of "we're smart people, and trust us when we say smart things are bad. All the smart sounding stuff you learned about how to program from smart sounding people like us? Lol, that's all wrong now."
Okay, I get the cognitive load is bad, so what's the solution?
"Just do simple dumb stuff, duh." Oh, right... Useful.
The problem is never just the code, or the architecture, or the business, or the cognitive load. It's the mismatch of those things against the people expected to work with them.
Walk into a team full of not-simple engineers, and tell them all what they've been doing is wrong, and they need to just write simple code, some of them will fail, some will walk out, and you'll be no closer to a solution.
I wish I'd known the tech world of 20+ years ago, when technical roles were long and stable enough for teams to build their own understanding of a suitable level of complexity. Without that, churn means we all have to aim for the lowest common denominator.
https://news.ycombinator.com/item?id=42489645 (721 comments)
They are acting rationally given companies don't seem to value long term expertise.
Now, if it was a worker- owned cooperative, that would be a different thing.
I could be adding a new feature six months later, or debugging a customer reported issue a week later. Especially in the latter case, where the pressure is greater and available time more constrained, I love that earlier/younger me was thoughtful enough to take the extra time to make things clear.
That this might help others is lagniappe.
There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.
In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test driven development, literate documentation practices, etc.
On the other end of the spectrum you hear sentences starting with: "It would help me to understand this more easily, if ...".
Guess what happens over time in these teams?
The issue with this stance is that it's not a zero-sum game. There's no arriving at a point where there isn't a cognitive load on the task you're doing. There will always be some sort of load. Pushing things off so that you reduce your load is how Social Security databases end up on S3.
Confusion comes from complexity, not from high cognitive load. You can have a high load and still know how it all works. I would word it as: cognitive load increases stress, because you have more things to wrestle with in your head. It doesn't add or remove confusion (unless that's the kind of person you are); it just adds or removes complexity.
An example of a highly complex thing with little to no cognitive load, due to conditioning: driving an automobile. A not-complex thing that imparts a huge cognitive load: golf.
I've got a function the gist of which is

    if (!cond())
        return val;
    do {
        // logic
    } while (cond());
    return val;

This looks like it could be simplified to

    while (cond()) {
        // logic
    }
    return val;

But if you do, you lose 20% of performance due to branch mispredictions, and this is a very hot function. It looks like a mistake, as if the two were equivalent, but they are actually not. So it gets a comment that explains what's happening.

When the problem itself is technical or can be generalised, abstractions can eliminate the need for thousands of if-statement developers. But if the domain itself is messy and poorly specified, the only way abstractions (and tooling) can help is to bake in flexibility, because contradiction might be a feature, not a bug...
Ah, the ChatGPT style of comments.
> Instead do something like:
The only negative is that there is a chance the comment becomes stale and not in sync with the code. Other coders should be careful to update the comment if they touch the code.
There is also a discussion between the author of Clean Code and APOSD:
Instead, at least one implementer needs to get hands dirty on what the application space really is. Very dirty. So dirty that they actually start to really know and care about what the users actually experience every day.
Or, more realistically for most companies, we insist on separate silos, "business logic" comes to mean "stupid stuff we don't really care about", and we screw around with if statements. (Or, whatever, we get hip to monads and screw around with those. That's way cooler.)
I once tried to explain to a product owner that we should be careful to document what assumptions are being made in the code, and make sure the company was okay committing to those assumptions. Things like "a single order ships to a single address" are early assumptions that can get baked into the system and can be really hard to change later, so the company should take care and make sure the assumptions the programmers are baking into the system are assumptions the company is willing to commit to.
Anyway, I tried to explain all this to the product owner, and their response was "don't assume anything". Brillant decisions like that are why they earned the big bucks.
- Difficult to measure, and therefore hard or impossible to study empirically (a bad scientific theory).
- Its application to education and learning theory, where a lot of other techniques are better proven.
- The idea that it's a primary mechanism of human learning, which a lot of research has shown otherwise.
Though those points seem valid, this article does not concern itself deeply with that concept. The words "mental strain" or "limited short-term memory" could have been substituted for "cognitive load", and the points raised would still be valid. In effect, the article argues we should minimize the number of things that need to be taken into consideration at any given point when reading (or writing) code. This claim is quite reasonable irrespective of the scientific basis of CLT, from which it takes its wording.
So I don't think your criticism is entirely relevant to this article, but raising it does help inform others about issues with the wording used, if they happen to want to learn more.
    if (val <= someConstant)
        return; // not valid
    if (!(condition2 || condition3))
        return; // not allowed
    ...

The author mentions this technique as well.

I find it particularly useful in controller API functions because it makes the code a lot more auditable (any time I see the same set of conditions repeating a lot, I consider whether they are a good candidate for middleware).
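One possible shape for that middleware promotion, sketched in Python (the handler, guard names, and status tuples are all invented for illustration):

```python
# Promoting a repeated set of guard clauses into "middleware":
# a decorator runs the common checks before every handler, so the
# conditions live in exactly one auditable place.
import functools

def require_valid_session(handler):
    @functools.wraps(handler)
    def wrapped(request):
        # The guards that used to be copy-pasted into each handler.
        if "user" not in request:
            return (401, "not authorized")
        if request.get("expired"):
            return (440, "session expired")
        return handler(request)
    return wrapped

@require_valid_session
def get_profile(request):
    # Handler body only runs once all guards have passed.
    return (200, f"profile of {request['user']}")
```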
I try to explain this to newer developers and they just don't get it, or give me eyerolls.
Maybe sending them this article will help.
The kind of psycho-bullshit that we should stay away from, and wouldn't happen if we respected each other. Coming from Microsoft is not surprising though.
Even simple examples like this get complicated in the real world.
If your code ever has the possibility of changing, your early wins by having no abstraction are quickly paid for, with interest, as you immediately find yourself refactoring to a higher abstraction in order to reason about higher-order concepts.
In this case, the abstraction is the simplicity, for the same reason that when I submit this comment, I don't have to include a dictionary or a definition of every single word I use. There is a reason that experienced programmers reach for abstractions from the beginning, experience has taught them the benefits of doing so.
The mark of an expert is knowing the appropriate level of abstraction for each task, and when to apply specific abstractions. This is also why abstractions can sometimes feel clumsy and indirect to less experienced engineers.
The auth example may not be. You may need to do validatePassword(user) for passwordCorrect(user) to be true, which then forces you to open up a hole in the abstraction that is userAuthorized(request) and peek inside. userAuthorized() has leaked out its logic; it has failed as an abstraction. It's a box with three walls and no roof that blocks visibility into important logic rather than hiding away the complexity.
A fact that you need to remember about code might use up more or less short-term memory in a human brain compared to a digit or a number, so don't be ashamed if your number is 3 instead of 4.
I also think my working memory was better when I was 20ish; now at 41 I feel that less fits in, and I forget it faster.
Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstraction/smart language constructs. Writing code, documents, comments, choosing the right design all depend upon who you think is going to interact with those artifacts, and being able to understand what their likely state of mind is when they interact with those artifacts i.e. theory of mind.
What is high cognitive load is very different, for e.g. a mixed junior-senior-principal high-churn engineering team versus a homogenous team who have worked in the same codebase and team for 10+ years.
I'd argue the examples from the article are not high-cognitive-load abstractions, but the wrong abstractions, which resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs: so we don't have to manually reimplement them. They also give a team using the language (a framework in its own right) a common vocabulary for the nouns and verbs related to those constructs. In essence, they reduce cognitive load once the initial learning phase of the language is done.
Reading through the examples in the article, what is likely wrong is that the decision to abstract/layer/framework is not chosen because of observation/emergent behavior, but rather because "it sounds cool" aka cargo cult programming or resume-driven programming.
If you notice a group of people fumble over the same things over and over again, and then try to introduce a new concept (abstraction/framework/function), and notice that it doesn't improve or makes it harder to understand after the initial learning period, then stop doing it! I know, sunk cost fallacy makes it difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)
One would imagine by now we would have some incredibly readable logical language to use with SQL in that context...
But instead we have people complaining that SQL is too foreign and insisting we beat it down until it becomes OOP.
To be fair, creating that language is really hard. But then, everybody seems to be focusing on destroying things more, not on constructing a good ecosystem.
I think the sentiment that we should use simpler languages comes from abuse of powerful features. Once we've metaprogrammed the entire program logic with a 12-layer-deep type tree or inheritance chain... we may realize we abused the tool in a way a simple language would have prevented.
But at the same time... checking errno after a function call just because the language lacks a result type or exception handling is clearly too simple. A minor increase in language complexity would have made the code much clearer.
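A small sketch of that contrast (the port-parsing example is invented): error-code style relies on the caller remembering to check a sentinel, while an exception cannot be silently ignored.

```python
# errno-style vs exception-style error handling, on an invented example.

def parse_port_errno(s: str):
    # C-style: return a sentinel and rely on the caller to check it.
    try:
        port = int(s)
    except ValueError:
        return -1  # easy for the caller to forget to test for
    if not 0 < port < 65536:
        return -1
    return port

def parse_port(s: str) -> int:
    # Exception style: the failure path can't be silently ignored.
    port = int(s)  # raises ValueError on garbage input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The first version is "simpler" in language features, but every call site now carries the cognitive load of remembering the -1 convention.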
I like what others would call complexity; I always have, and from very, very early on I've been mindful of that--to a fault, I think, since I no longer trust my intuition.
Is it good to try to turn wizards into bricklayers? Is there no other option?
I'm firmly in the "DI||GTFO" camp, so I don't mean to advocate for the Factory pattern, but saying that only the abstractions you like are OK starts to generate PR email threads.
I spent time at Microsoft as well, and one of the things I noticed was folks who spent time in different disciplines (e.g. dev, test, pgm) seemed to be especially great at tailoring these qualities to their needs. If you're working on optimizing a compiler, you probably need a bit more Einstein and Mort than Elvis. If you're working on a game engine you may need a different combination.
The quantities of each (or whether these are the correct archetypes) is certainly debatable, but understanding that you need all of them in different proportions over time is important, IMHO.
To be fair, the HTTP status line allows for arbitrary informational text, so something like “HTTP/1.1 401 JWT token expired” would be perfectly allowable.
I myself find it to be a term that's effectively used as a thought-terminating cliche, sometimes as a way to defend a critic's preferred coding style and organization.
However, if there is new information to be learned, there is also germane cognitive load [0]. It is a nice theory; practically, however, there is unfortunately no easy way to separate the loads and look at them totally independently.
[0] https://mcdreeamiemusings.com/blog/2019/10/15/the-good-the-b...
shipToAddress(getShippingAddress(getStreet()),calculateShipping(getShippingAddress(getStreet())))
because as soon as one needs to update the mechanism for getting the shipping address, congratulations, you're updating it everywhere--or not; nothing matters in this timeline.

The instructions are simple to remember; the ingredient quantities are not. Once I have read the recipe, I have a clear idea of what I need to do, but I may need to look up the quantities. Looking for those in a block of text is a waste of time and error prone. Having a recipe formatted like the 2nd example is only useful for someone inexperienced with baking.
It's also difficult to buy or bring out the ingredients if they're hidden in text, or to know whether I can bake something based on what I have.
If you've baked a really old recipe from a book, you may have found instructions like "Mix the ingredients, put in a pan, bake until brown on high heat", with a list of quantities and not even a picture to tell you what it is you're baking. An experienced baker will know the right order and tools, and how their oven behaves, so it's not written out at all.
If I look for recipes online, seeing the instructions written the way you have makes me reluctant to trust the author, and also makes me annoyed when trying to follow or plan out the process.
- Mort wants to climb the business ladder.
- Elvis wants earned social status.
- Einstein wants legacy with unique contributions.
- Amanda just wants group cohesion and minimizing future unpredictability.
I see the ideal as a combination of Mort and Einstein that want to keep it simple enough that it can be delivered (less abstraction, distilled requirements) while ensuring the code is sufficiently correct (not necessarily "elegant" mind you) that maintenance and support won't be a total nightmare.
IMO, seek out Morts and give them long term ownership of the project so they get a little Einstein-y when they realize they need to support that "pile of if statements".
As an aside, I'm finding coding agents to be a bit too much Mort at times (YOLO), when I'd prefer they were more Einstein. I'd rather be the Mort myself to keep it on track.
Something was bugging me after an interview with a potential hire, and now I can articulate that they were too much Einstein and not enough Mort for the role.
I think the article could have used a different term, or made a clearer declaration of what it specifically meant by the term, to resolve this issue. Though I don't think it was done intentionally to deceive, since the article makes no mention of the formal literature or theory of "cognitive load" to back its arguments.
https://s3.amazonaws.com/systemsandpapers/papers/bigballofmu...
If there is no inherent complexity, a Mort will come up with the simplest solution. If it's a complex problem needing trade-offs, the Mort will come up with the fastest and most business-centric solution.
Or would you rather see an Amanda refactoring a whole system to keep it simple above all else, whatever the deadlines and stakes?
I generally find that comments in code should explain why the code is doing non-obvious things. “This gets memoized because it’s actually important to maintain referential identity for reason X.”
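A sketch of that kind of "why" comment in context (the settings object and the caching scheme here are invented purely for illustration):

```javascript
// Cache the last result so repeated calls for the same user return
// the *same object*. This gets memoized because it's actually
// important to maintain referential identity: downstream consumers
// compare by reference (e.g. to skip recomputation), and a fresh
// but equal object would defeat that check.
let cachedUserId = null;
let cachedSettings = null;

function getSettings(userId) {
  if (userId !== cachedUserId) {
    cachedUserId = userId;
    cachedSettings = { userId, theme: "dark", locale: "en" };
  }
  return cachedSettings;
}
```

Without the "why" comment, a cleanup-minded reader might simplify the cache away and silently break every reference check downstream.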
Anyway, their sibling comment told me what I wanted to know, so in that way I'm wasting more of my time contributing to this
While the castle of cards of unfathomable complexity is praised for visibly hard work and celebrated with promotions.
Modularity.
Each component in your system should be a (relatively) simple composition of other (smaller) components, in such a way that each component can be understood as a black box, and is interchangeable with any other implementation of the same thing.
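A minimal sketch of that composition idea (all names invented): callers depend on a tiny contract, so any implementation satisfying it can be swapped in.

```javascript
// Two interchangeable implementations of one tiny contract: anything
// with a send(message) method. Callers treat the notifier as a black
// box and never look inside.
const recordingNotifier = {
  sent: [],
  send(message) { this.sent.push(message); },
};

const silentNotifier = {
  send(_message) { /* deliberately drops messages, e.g. for tests */ },
};

// A larger component composed from the smaller one. It works with
// either implementation, unchanged.
function notifyAll(notifier, messages) {
  for (const message of messages) notifier.send(message);
}
```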
Mort == maker
Elvis ==? hacker
Einstein == poet
My favourite frameworks are written by people smart enough to know they're not smart enough to build the eternal perfect abstraction layers and include 'escape hatches' (like getting direct references to html elements in a web UI framework etc) in their approach so you're not screwed when it turns out they didn't have perfect future-sight.
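A sketch of that escape-hatch shape (the wrapper and its API are invented, not any real framework): the abstraction covers the common cases, and raw() hands back the underlying object for everything it didn't foresee.

```javascript
// A wrapper around some underlying object (a stand-in here for, say,
// a DOM node). The happy-path API handles the common case; raw() is
// the escape hatch for needs the wrapper's authors didn't predict.
function wrapNode(initialValue) {
  const node = { value: initialValue, attrs: {} };
  return {
    setValue(v) { node.value = v; },
    raw() { return node; },   // direct reference: use with care
  };
}
```

The point isn't the wrapper itself; it's that raw() means you're not screwed when the abstraction turns out to lack perfect future-sight.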
Sometimes teams are quite stuck in their ways because they don’t have the capacity or desire to explore anything new.
For example, an Elvis would probably introduce containers which would eliminate a class of dependency and runtime environment related issues, alongside allowing CI to become easier and simpler, even though previously using SCP and Jenkins and deploying things into Tomcat mostly worked. Suddenly even the front end components can be containers, as can be testing and development databases, everyone can easily have the correct version locally and so on.
An unchecked Elvis will eventually introduce Kubernetes in the small shop to possibly messy results, though.
Once you commit a particular concept to long-term memory and it's not "leaky" (i.e., you don't have to think through its internal behavior/implementation details), you now have more tools and ways to describe a collective bunch of lower-level concepts.
That's the same feeling programmers used to more powerful languages have when they write in less powerful ones: instead of using one concept to describe what you want, you now have to use multiple things. It's only easier if you've not grokked the concept.
Then, still unsure, I go back and read the end of the sentence. "...into a large bowl."
Another large bowl? Wasn't there only one bowl? Was there another small bowl? Were the other eggs supposed to be in the small bowl? Is there another recipe listed separately where I'm supposed to crack the other egg (that might exist) into a small bowl? A single egg would only need a small bowl. Yes, there's probably another part of the recipe that is a sauce or filling or something that needs you to crack one egg into a small bowl. Let me reread everything on this webpage until I find something referred to that might be a necessary part of finishing this recipe, find the instructions to make that, then come back to this. Also, I have to reevaluate whether I even have enough eggs and bowls to make this recipe. I've read the entire page again, and I can't find what I'm missing. Maybe I'll google another, similar recipe, and they'll be a sauce or side that is mandatory for this dish that everybody already knows about but me, and it will be obvious. Ok, this is something similar, and it's served with some sort of a lemon sauce? I don't think I like lemon sauces. I'm not going to make this, I'm not even sure if I can. I hate trying to find recipes on the internet.
Project Manager: "Can we ship an order to multiple addresses? We need it in 2 weeks and Grey Beard didn't want to do it"
Eager Beaver: "Sure"
if (order && items.length > 1 && ...) {
  try {
    const shipmentInformation = callNewModule(order, items, ...)
    return shipmentInformation
  } catch (err) {
    // don't fail, don't know if error is handled elsewhere
    logger.error(err)
  }
} else {
  // old code by Grey Beard
}
It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster, it prevents them from learning the quirks of the codebase and eventually they'll lose the ability to write code on their own.
It's been well documented that LLMs collapse after a certain complexity level.
But since nobody has mentioned the alternative yet, the framework used by anyone in any scientific capacity is the Big Five: https://en.wikipedia.org/wiki/Big_Five_personality_traits
The link between programming and conscientiousness seems fairly straightforward. To fully translate Mort/Elvis/Einstein into some kind of OCEAN vector would take a little more effort.
It's writing something for me, not for itself.
My current org has a terrible case of not-invented-here syndrome, and it's so easy to pitch new projects that solve something that there's already an existing tool for, or building a new feature. We would love to spend time just working within our existing systems and fixing crap abstractions we made under the deadline-gun, but we're not "allowed" to.
> [...] humans like to build the new, and nobody likes to maintain the old
I think this is certainly true at organizational scale, but most of the people I've met are change-resistant overall.
I think the personas have some validity but I don't agree with the primary focus/mode.
For example, I tend to be a mort because what gets me up in the morning is solving problems for the enterprise and seeing that system in action and providing benefit. Bigger and more complex problems are more fun to solve than simpler ones.
MFC may have been a steaming pile of doodoo, but at least the tools for developing on the OS were generally free and had decent documentation
Of course Eager Beaver didn't learn from this experience, because they left the company a few months ago thinking their code was AWESOME and bragging about this one nicely scalable service they made for shipping to multiple addresses.
Meanwhile Grey Beard is the one putting out the fires, knowing that any attempt to tell Project Manager "finding and preventing situations like this was the reason for my estimate back then" would only be met with skepticism.
"Cognitive Load" is a buzzword which is abstract.
Cognitive Load is just one factor of projects, and not the main one.
Focus on solving problems, not Cognitive Load, or other abstract concepts.
Use the simple, direct, and effective method to solve problems.
Cognitive Load is relative: the same thing can be a high cognitive load for one person and a low one for another.
Case in point: Forth. It generally has a heavy cognitive load. However, Forth also enables a radical kind of simplicity. You need to be able to handle the load to access it.
The mind can be trained to handle a high cognitive load. It's a nice "muscle" to train.
Should we care about cognitive load? Absolutely. It's a finite budget. But I also think that there are legitimate reasons to accept a high cognitive load for a piece of code.
One might ask "what if you need to onboard mediocre developers into your project?". Hum, yeah, sure. In that case, this article is correct. But being forced to onboard mediocre developers highlights an organizational problem.
It's not that I don't believe you about the performance impact, as I have observed the same with e.g. Rust in some cases, but I don't think it has a lot to do with the compiler judging what's more likely, but rather more or less "random" optimization differences/bugs. At least in my case, the ordering had nothing to do with likelihood, or even had a reverse correlation.
I think in your example a compiler may or may not realize the code is semantically equivalent and all bets are off about what's going to happen optimization-wise.
I mean, in the end it doesn't matter for the commenting issue, as you are realistically not going to fix the compiler to have slightly more readable code.
For one, many studies of identical twins raised in separate households show they have the same personality type at a much higher rate than chance.
Two, there are incredibly strong correlations in the data. In different surveys of 100k+ people, the highest earning type has twice the salary of the lowest type. This is basically impossible by chance.
The letters (like ENTJ) correlate highly with the variables of Big 5, the personality system used by scientists. It's just that it's bucketed into 16 categories instead of 5 sliding scales.
Scientific studies are looking for variables that can be tracked over time reliably, so Big 5 is a better measure for that.
But for personal or organizational use, the category approach is a feature, not a bug. It is much more help as a mental toolkit than just getting a personality score on each of the 5 categories.
Your counter example assumes the people managing the code base are incompetent.
Wouldn't the rewrite fail for the exact same reason if the company only employs incompetent tech people?
In this case the data is bimodal: depending on the chosen input, two likely outcomes exist. Either no looping is needed, or much looping is. This seemingly confuses the branch predictor, since the same branch deals with both scenarios.
The author makes valid points but they are vacuous and do not provide concrete alternatives.
Many engineering articles disappoint me in this way: I get hyped by all the “don’t dos”, but the “do dos” never come.
Good, trusted unit tests are the difference between encapsulation reducing or increasing/complicating cognitive load. (And similar but between components for integration tests).
That being said, there will be rare times that the issue is due to something that is only an edge case due to an implementation detail several units deep, and so sometimes you do still need the full picture, but at least it lets you save doing that until you're stumped, which IMO is well worth it if the code is overall well-designed and tested.
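A tiny sketch of that (function and test invented): once a unit's contract is pinned down by a trusted test, you can lean on the contract instead of re-reading the implementation.

```javascript
// A small unit: strips everything but digits from a phone-ish string.
// With its contract pinned by a test, callers can treat it as a
// black box instead of re-deriving its behavior.
function digitsOnly(s) {
  return s.replace(/\D/g, "");
}

// The trusted unit test, in the plainest possible style.
function testDigitsOnly() {
  if (digitsOnly("(555) 123-4567") !== "5551234567") {
    throw new Error("digitsOnly broke its contract");
  }
}
```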
The fact is, despite all the process and pipelines and rituals we've invented to guide how software is made, the best thing leadership can do is to communicate incremental, unambiguous requirements and provide time and space for your engineers to solve the problem. If you don't do that, none of the other meetings and systems and processes and tools will matter.
Later, Xcode (or Project Builder) became pretty much free with the first release of MacOS X. You could buy a Mac and install all the tools to develop software. Very much in the spirit of NeXT. I am sure something similar happened for Microsoft around the same time.
And now, of course, all the tools, both native from vendors and a large selection of third-party tools, are basically free for all major platforms.
(Disregarding things like 'app store fees' or 'developer accounts' which exists for both Apple and Microsoft but are not 100% required to build stuff.)
I literally relaxed in my body when I read this. It was like a deep sigh of cool relief in my soul.
I don't like to generalize.
You are lucky then. I've definitely worked with super smart engineers who chose incredibly complicated solutions over simpler and more pragmatic ones. As a result the code was generally hard to maintain and especially difficult to understand.
It is a real thing. And it generally happens with "the smart ones" because people who don't know how to make things complicated generally stick with simpler solutions. In my experience.
Also effort: there are smart people who can't be bothered to reduce extraneous load for other people. They already took the effort to understand it, but they don't have the theory-of-mind to realize that it's not easy for others, or can't be bothered to act on it.
> I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal
Good rule of thumb I find is, did the new change make it harder or easier to reason about the change / topic?
If we go back to the concept of cognitive load, it's fine cognitive load goes up if the solution is necessarily complex. It's the extraneous bit that we should work to minimize, reduce if possible.
Elvis is not a persona - it is an inside baseball argument to management. It suffered a form of Goodhart’s law … it is a useful tool so people phrase their arguments in that form to win a biz fight and then the tool degrades.
Alan Cooper, who created VB, advocated personas. When used well, they are great.
The most important insight is your own PoV may be flawed. The way a scientist provides value via software is different than how a firmware developer provides value.
https://www.amazon.com/Inmates-Are-Running-Asylum/dp/0672316...
It completely removes the stress of doing things repeatedly. I recently had to do something I hadn't done in 2 years. Yep, the checklist/doc on it was 95% correct, but it was no problem fixing the 5%.
Elvis: A famous rock star
Einstein: A famous physicist
Amanda: ???
Mort, Elvis, and Einstein reference things I've heard of before. What is Amanda referencing? Is there some famous person named Amanda? Is it slang I'm unaware of?
I think anyone that thinks mudball is OK because business is messy has never seen true mudball code.
I've had to walk out of potential work because, after looking at what they had, I simply had to tell them: I cannot help you; you need a team and probably at minimum a year to make any meaningful progress. That is what mudballs lead to. What this paper describes is competent work that was pushed too quickly to clean up rough edges but has some sort of structure.
I've seen mudballs that required 6-12 months just to do discovery of all the pieces and parts. Hundreds of different versions of things, no central source control, and different deployment techniques depending on the person who coded it, even within the same project.
Read the fine print.
The way I've been thinking about it is in terms of organization. Organize code like we should organize our house. If you have a collection of pens, you shouldn't leave them scattered everywhere: in your closet, with your cutlery, in the bathroom :) You should set up somewhere to keep your pens, and other utensils, in a reasonably neat way. You don't need to spend months building a super-pen-organizer with a specially sculpted nook for the $0.50 pen you might lose or break next week. But you make it neat enough, according to factors like how likely it is to last, how stable your setup is, how frequently it is used, and so on. Organizing has several advantages: it makes it easier to find pens, shows you the breadth of options quickly, and keeps other places in your house tidier and so less cognitively messy as well. And it has downsides: you need to devote time and effort, and you might lose flexibility if you're too strict. Maybe you've been labeling stuff in the kitchen, or doing sketches in your living room, and you need a few pens there.
I don't like the point of view that messiness (and say cognitive load) is always bad. Messiness has real advantages sometimes! It gives you freedom to be more flexible and dynamic. I think children know this when living in a strict "super-tidy" parent house :) (they'd barely get the chance to play if everything needs to be perfectly organized all the time)
I believe in real life almost every solution and problem is strongly multifactorial. It's dangerous to think a single factor, say 'cognitive load', 'don't repeat yourself', 'lesser lines of code', and so on is going to be the single important factor you should consider. Projects have time constraints, cost, need for performance; expressing programs, the study of algorithms and abstractions itself is a very rich field. But those single factors help us improve a little on one significant facet of your craft if you're mindful about it.
Another factor I think is very important (and maybe underestimated) is beauty. Beauty for me has two senses: one, an intuitive sense that things are 'just right' (which captures a lot implicitly). The second, and important, one is that working and programming should, when possible, be nice; why not. The experience of coding should be fun and feel good in various ways when possible (obviously this competes with other demands...). When I make procedural art projects, I try to make the code at least a little artistic as well as the result; I think it contributes to the result as well.
[1] a few small projects, procedural art -- and perhaps a game coming soon :)
Certainly giving me some pause for thought, in my own work.
My current approach is creating something like a Gem on Gemini with custom instructions and the updated source code of the project as context.
I just discuss what I want, and it gives me the code to do it, then I write by hand, ask for clarifications and suggest changes until I feel like the current approach is actually a good one. So not really “vibe-coding”, though I guess a large number of software developers who care about keeping the project sane must be doing this.
It works very well for me.
"so, there's 3 boxes. no more, no less. why? i have a gut feeling. axis? on a case by case basis. am i willing to put my money where my mouth is? heallnaw!"
> All too often, we end up creating lots of shallow modules, following some vague "a module should be responsible for one, and only one, thing" principle.
This is what I'm talking about: this writing is too smart for me, because I can't take any simple answers from it like "modularity" without feeling another part of the article contradicts it with other smart sounding ways of saying don't listen to smart stuff.
Hopefully someone can learn from this before they spin a complex web that becomes a huge effort to untangle.
Why so? It is always in front of you; it reminds you what you need to do and does not drop out of sight, which helps keep the focus.
When you bury it or set it somewhere else, it is very easy to lose track of it.
I appreciate both for different reasons.
Am I missing a reference? If not, may I suggest “Ada”?
https://en.wikipedia.org/wiki/Ada_Lovelace
Or even better, “Grace”. Seems to fit your description better.
I have spent most of my career working with C++. As you all may know, C++ can be as complex and clever as you want.
Unless I really need it, and this is very few times, I always ask myself: will this code be easy to understand for others? And I avoid the clever way.
And let's face it, 95% of software isn't exactly novel.
> AI is unable to solve their problems.
You are contradicting yourself. AI works worse than humans in places where handling cognitive load is required, so it can't cross the cognitive-load boundary. And if it becomes better at managing cognitive load in the future, it doesn't matter anyway: you could just ask it to reduce the cognitive load in the code, and it would.
Getting to a better understanding of "cognitive load" does seem useful. Some things are "easier" to understand than others. Could things that are less efficient to understand be formulated in a way that is more efficient?
I have a notion that "cognitive load" is related to the human's ability to gain and maintain attention to mentally ingesting a solution (along with the problem the solution putatively solves). Interesting reads for this include McGilchrist's Master and His Emissary, and Carolyn Dicey Jennings' "I attend, therefore I am," [0], who was interviewed on the Rutt podcast [1].
0. https://aeon.co/essays/what-is-the-self-if-not-that-which-pa...
1. https://jimruttshow.blubrry.net/the-jim-rutt-show-transcript...
I don't see the problem. Okay, so we need to support multiple addresses for orders. We can add a relationship table between the Orders and ShippingAddresses tables, fix the parts of the API that need it so that it still works for all existing code like before using the updated data model, then publish a v2 of the api with updated endpoints that support creating orders with multiple addresses, adding shipping addresses, whatever you need.
Now whoever is dependent on your system can update their software to use the v2 endpoints when they're ready for it. If you've been foolish enough to let other applications connect to your DB directly then those guys are going to have a bad time, might want to fix that problem first if those apps are critical. Or you could try to coordinate the fix across all of them and deploy them together with the db update.
The problems occur when people don't do things properly, we have solutions for these problems. It's just that people love taking shortcuts and that leads to a terrible system full of workarounds rather than abstractions. Abstractions are malleable, you can change them to suit your needs. Use the abstractions that work for you, change them if they don't work any more. Design the code in such a way that changing them isn't a gargantuan task.
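The migration described above can be sketched like this (table and function names invented, with arrays standing in for the DB): the new many-to-many relation backs both versions, and v1 keeps its old single-address behavior.

```javascript
// In-memory stand-ins for the tables. orderAddresses is the new
// relationship table between Orders and ShippingAddresses.
const addresses = [
  { id: 10, street: "1 First St" },
  { id: 11, street: "2 Second St" },
];
const orderAddresses = [
  { orderId: 1, addressId: 10 },
  { orderId: 1, addressId: 11 },
];

// v2 endpoint logic: the real model, all addresses for an order.
function getShippingAddressesV2(orderId) {
  return orderAddresses
    .filter((oa) => oa.orderId === orderId)
    .map((oa) => addresses.find((a) => a.id === oa.addressId));
}

// v1 endpoint logic: preserved behavior. Existing callers still get
// a single address, now read through the updated data model.
function getShippingAddressV1(orderId) {
  return getShippingAddressesV2(orderId)[0] ?? null;
}
```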
(To anticipate the usual reaction when I point that out: if you're going to sputter with rage and say that compilers are deterministic while AI isn't, well... save it for a future argument with someone who can be convinced that it matters.)
If the abstraction doesn't fit a new problem, it should be easy to reassemble the components in a different way, or use an existing abstraction and replace some components with something that fits this one problem.
The developers shouldn't be forced to use the abstractions, they should voluntarily use them because it makes it easier for them.
Also the actual solution is proper team leadership/management. If you have morts, make sure that code quality requirements are a PART of the requirements their code must pass, and they’ll instead deliver decent work slightly slower. Got an elvis? Give more boundaries. Got Einsteins? Redefine the subtasks so they can’t refactor everything and give deadlines both in terms of time but also pragmatism.
Either way, I don’t love this approach, as it removes the complexity from the human condition, complexity which is most important to keep in mind.
And you can improve everything with a system. A team of morts forced into a framework where testers/qa/code review find and make them fix the problems along the way before the product is shipped is an incredibly powerful thing to behold.
Scientists, mathematicians, and software engineers are all really doing similar things: they want to understand something, be it a physical system, an abstract mathematical object, or a computer program. Then, they use some sort of language to describe that understanding, be it casual speech, formal mathematical rigor, scientific jargon -- or even code.
In fact, thinking about it, the code specifying a program is just a human-readable description (or "theory", perhaps) of the behavior of that program, precise and rigorous enough that a computer can convert the understanding embodied in that code into that actual behavior. But, crucially, it's human readable: the reason we don't program in machine code is to maximize our and other people's understanding of what exactly the program (or system) does.
From this perspective, when we write code, articles, etc., we should be highly focused on whether our intended audience would even understand what we are writing (at least, in the way that we, the writer, seem to). Thinking about cognitive load seems to be good, because it recognizes this ultimate objective. On the other hand, principles like DRY -- at least when divorced from their original context -- don't seem to implicitly recognize this goal, which is why they can seem unsatisfactory (to me at least). Why shouldn't I repeat myself? Sometimes it is better to repeat myself!? When should I repeat myself??
If you want to see an example of a fabulous mathematician expressing the same ideas in his field (with much better understanding and clarity than I could ever hope to achieve), I highly recommend Bill Thurston's article "On proof and progress in mathematics" <https://arxiv.org/abs/math/9404236>.
They fail when they meet reality. A lot of those "best" practices assume that someone understands the problem and knows what needs to be built. But that's never true. Building software is always an evolutionary process; it needs to change until it's right.
Try to build a side project that doesn't accept any external requirements, just your ideas. You will see that even your own ideas and requirements shift over time; a year (or two) later, your original assumptions won't be correct anymore.
And even though OneNote is an MS product and Evernote was the original that OneNote copied, OneNote is the better-engineered piece of software (I have tons of notes, a few of them very large documents), and OneNote rarely has problems.
Don’t follow trends and seek the “next best way to hack your productivity”. Most of those things are snake oil and a waste of time. Just use whatever you have available and build a process yourself. That’s what most people have done that are successful in applying this. They just use the tool they are comfortable with, and don’t over engineer for the sake of it
I’ve seen and created some pretty bad stuff. Point is not that it’s okay, but that that’s the job: managing, extending, and fixing the mess.
Yes a perfect codebase would be great, but the code is not perfect and there’s a job to do. You’re not gonna rebuild all of San Francisco just to upgrade the plumbing on one street.
Much of engineering is about building systems to keep the mess manageable, the errors contained, etc. And you have to do that while keeping the system running.
I think if I were to make three strawmen like this I would instead talk about them as maximizing utility, maintainability, and effectiveness. Utility because the "most business value" option doesn't always make the software more useful to people. (And I will tend to prioritize making the software better over making it better for the business.) Maintainability because the thing that solves the use case today might cause serious issues that makes the code not fit for purpose some time in the future. Effectiveness because the basket of if statements might be perfect in terms of solving the business problem as stated, but it might be dramatically slower or subtly incorrect relative to some other algorithm.
Mort is described as someone who prioritizes present business value with no regard to maintainability or usefulness.
Elvis is described as someone who prioritizes shiny things, he's totally a pejorative.
Einstein is described as someone who just wants fancy algorithms with no regard for maintainability or fitness to the task at hand. Unlike Elvis I think this one has some value, but I think it's a bit more interesting to talk about someone who is looking at the business value and putting in the extra effort to make the perfectly correct/performant/maintainable solution for the use case, rather than going with the easiest thing that works. It's still possible to overdo, but I think it makes the archetype more useful to steelman the perspective. Amanda sounds a bit more like this, but I think she might work better without the other three but with some better archetypes.
Yeah, if you go through this article and replace most of the places where it mentions "cognitive load" with "complexity," it still makes sense.
Yeah, this isn't a criticism of the article. In fact, there are important differences, like its greater focus on what the dev experiences while handling the complications of the system. But those really interested in the concept may want to learn about complexity too, as there is a lot of great info on it.
The Programmer's Brain: What every programmer needs to know about cognition. By Felienne Hermans
For those asking why author doesn't come up with their own new rules that can then be followed, this would just be trading a problem for the same problem. Absentmindedly following rules. Writing accessible code, past a few basic guidelines, becomes tacit knowledge. If you write and read code, you'll learn to love some and hate some. You'll also develop a feel for heavy handedness. Author said it best:
> It's not imagined, it's there and we can feel it.
We can feel it. Yes, having to make decisions while coding is an uncomfortable freedom. It requires you to be present. But you can get used to it if you try.
Basically, you should aim to minimise complexity in software design, but importantly, complexity is defined as "how difficult is it to make changes to it". "How difficult" is largely determined by the amount of cognitive load necessary to understand it.
Maybe I should make it more visible.
Reducing cognitive load comes from the code that you don't have to read. Boundaries between components with strong guarantees let you reason about a large amount of code without ever reading it. Making a change (which the article uses as a benchmark) is done in terms of these clear APIs instead of with all the degrees of freedom available in the codebase.
If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much. They are visible in the volume, but not in the surface area.
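A sketch of that surface-vs-volume distinction (module invented for illustration): the exported surface is two small functions, and whatever quirks live behind them stay out of every caller's head.

```javascript
// The "volume": internal details, free to be quirky, invisible
// from outside the boundary.
function normalizeKey(name) {
  return name.trim().toLowerCase().replace(/\s+/g, " ");
}

// The "surface area": a small crisp API. Callers reason about
// add/find, never about normalization rules.
function makeUserIndex() {
  const byKey = new Map();
  return {
    add(name, user) { byKey.set(normalizeKey(name), user); },
    find(name) { return byKey.get(normalizeKey(name)) ?? null; },
  };
}
```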
In particular, when the shit hits the fan, your max cognitive load tanks. Something people who grumble at the amount of foolproofing I prefer often only discover in a crisis. Because they’re used to looking at something the way they look at it while sipping their second coffee of the day. Not when the servers are down and customers are calling angry.
You’ll note that we only see how the control room at NASA functions in movies and TV when there’s a massive crisis going on, or intrigue. Because the rest of the time it’s so fucking boring nobody would watch it.
But the trick I found is that if you can extract a function for only the part of the code you're optimizing/improving, and then make your change in a single commit, two things happen. One, it's off the code path, so out of sight, out of mind. Two, people are more forgiving of code changes they don't like but can roll back by reverting a single commit. That breaks down a bit with PRs, since people tend to think of the code as a single commit. But the crisp boundaries still matter a lot.
I even tried calling the bookstore on his campus and they said try back at the beginning of a semester, they didn’t have any copies.
My local book store could not source me a copy, and neither IIRC could Powell’s.
Not quite. The human mind has evolved to interpret the sensory data collected by the senses and to trigger the necessary action. Some of that interpretation uses memory to correlate the perceived data with remembered data. That's pretty much it.
Overloading human memory with tons of data unrelated to the context in which the person lives can cause negative effects. I suspect it can also cause faster aging. New experiences and new information are like scales on a tree trunk: as you accumulate more, you age more.
When you’re dealing with perennial plants, there’s only so much control you actually have, and there’s a list of things you know you have to do with them but you cannot do them all at once. There is what you need to do now, what you need to do next year, and a theory of what you’ll do over the next five years. And two years into any five year plan, the five year plan has completely changed. You’re hedging your bets.
Traditional Formal English and French gardens try to “master” the plants. Force them to behave to an exacting standard. It’s only possible with both a high degree of skill and a vast pool of labor. They aren’t really about nature, or food. They’re displays of opulence. They are conspicuous consumption. They are keeping up with the Joneses. Some people love that about them. More practical people see it as pretentious bullshit.
I think we all know a few companies that make a bad idea work by sheer force of will and overwhelming resources.
I have always been a C+ student of rote memorization at best. Almost enough to be good at trivia, but not enough to do well in coursework. I am always trying to build a Theory of a System from practically word one, which is the fifth stage of learning, where rote is the first.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Balancing a cup on a tray isn't too hard. The skill comes in when you can balance 10 cups, and a tray on top of them, and then ten more cups, and another tray, and a vase on that... each step isn't difficult, but maintaining the structure is difficult. It's like that, but with ideas.
The comment you’re responding to mentioned pulling code into a function. As an example, if there’s a clever algorithm or technique that optimizes a particular calculation, it’s fine to write code more for the machine to be fast than the human to read as long as it’s tidy in a function that a dev using a debugger can just step over or out of.
There are code improvements that improve legibility, correctness, and performance. There are ones that improve two of those three qualities. You can use those pretty much anywhere and people will pick a reason to like the code you modified in such a manner.
But if you put “clever” code high in the call graph, get ready for grumbling from all corners. If it happens to be near where legitimate bugs tend to live, get ready for a lot of it.
I would probably also add that this advice goes hand in glove with Functional Core, Imperative Shell. The surface area for unintended consequences in pure code is much tighter, so people won’t have to scan as much code to narrow down the source of a strange interaction because there aren’t interactions to be strange. I don’t need to look at code that has a single responsibility that is orthogonal to the problem I’m researching. Until or unless I become desperate because nothing else has worked so far.
The people writing the complex code generally seem to think they're smart.
That was me, once. And I was smart, but I was also applying my smarts very, very poorly.
> The word entered English from the Louisiana French adapting a Quechua word brought in to New Orleans by the Spanish Creoles.
... I see.
This is unhelpful even if the design is a complete mess.
I think it's a pretty good compromise. I have tried in the past not to duplicate code at all, and it often ends up being more pain than gain. Allowing copy/paste when code is needed in two different places, but refactoring when it's needed in three or more, is a pretty good rule of thumb.
To quote OP: "None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives"
To me, it's the things that are specifically intended to behave the same should be kept DRY.
The issue is when the evolution is random and rife with special cases and rules that cannot be generalized... the unknown unknowns of reality, as you say.
Then, you just gotta patch with if elses.
Deliberately going earlier makes sense if experience teaches you there will eventually be 3+ of these. But the point where I'm going to pick "Decline" and write that you need to fix this first is when I see you've repeated something 4-5 times. That's too many; we have machines to do repetition for us, so have the machine do it.
An EnableEditor function? OK, meaningful name. EnablePublisher? Hmm, yes I understand the name scheme but I get a bad feeling. EnableCoAuthor? Approved with a stern note to reconsider, are we really never adding more of these, is there really some reason you can't factor this out? EnableAuditor. No. Stop, this function is named Enable and it takes a Role, do not copy-paste and change the names.
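A minimal sketch of the factored-out version the note asks for (the `Role` enum and the shape of `user` are hypothetical, invented for illustration):

```python
from enum import Enum

class Role(Enum):
    EDITOR = "editor"
    PUBLISHER = "publisher"
    CO_AUTHOR = "co_author"
    AUDITOR = "auditor"

def enable(user: dict, role: Role) -> None:
    # One function taking a Role, instead of EnableEditor, EnablePublisher,
    # EnableCoAuthor, EnableAuditor copy-pasted four times.
    user.setdefault("roles", set()).add(role)

user = {}
enable(user, Role.AUDITOR)
assert Role.AUDITOR in user["roles"]
```

Adding a fifth role then means adding one enum member, not a fifth copy-pasted function.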
If you're copying and pasting something, there probably isn't a good reason for that. (The best common reason I can think of is "the language / framework demands so much boilerplate to reuse this little bit of code that it's a net loss" — which is still a bad feeling.)
If you rewrite something without noticing that you're doing so, something has definitely gone wrong.
If a client's requirements change to the point where you can't accommodate them in the nicely refactored function (or to the point where doing so would create an abomination) — then you can make the separate, similar looking version.
To me this is the upside of the microservices concept. Of course, true microservices take it way too far. But once you tell two teams they can only talk to each other with APIs and make them use tooling that properly defines what those are (schemas etc) .... all of a sudden they are forced to draw those boundaries well and then stick to them. And they get really conservative about changing them and think hard about what the definitions should be up front. It's sort of perversely sticking technical friction in at the points where you want there to be natural conservatism around change.
You can't win architecture arguments.
I like the article but the people who need it won't understand it and the people who don't need it already know this. As we say, it's not a technical problem, it's always a people and culture problem. Architecture just follows people and culture. If you have Rob Pike and Google you'll get Go. You can't read some book and make Go. (whether you like it or not is a different question).
Over time I have come to prefer having two near copies that are each more concretely expressive of their task than a more abstract version that caters to both.
Life is all about learning, adapting and changing. Great leaders see the potential growth in people and are up for having hard conversations about how they can improve.
Even if people do have these personality traits as life long attributes, that doesn't define them or prevent them from learning aspects of the others over time.
Add to the fact that they're the professor of many software engineering courses and you start to see why so many new grads follow SOLID so dogmatically, which leads to codebases quickly decaying.
Even the most obvious of functions like sin() and cos() may in some circumstances warrant a specialized implementation. Sure, for most stuff you should not have 10 copies of those all over the place. But sometimes you might.
DRY is a bad rule. The more appropriate rule is to avoid duplicating code when not doing so results in something better. I.e. judgement always trumps rules.
I would embrace copying and pasting for functionality that I want to be identical in two places right now, but I’m not sure ought to be identical in the future.
Of course, the disadvantage is the exponential growth. 20 ifs means a million cases (usually less because the conditions aren't independent, but still).
Then I have a flat list of all possible cases, and I can reconstruct a minimal if tree if I really want to (or just keep it as a list of cases - much easier to understand that way, even if less efficient).
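One way to sketch that flat-list idea in Python (the conditions and the decision logic here are made up for illustration):

```python
from itertools import product

# Hypothetical boolean conditions feeding an access decision.
conditions = ["is_admin", "is_owner", "is_locked"]

def decide(is_admin, is_owner, is_locked):
    # The nested-if logic we want to flatten into a table.
    if is_locked and not is_admin:
        return "deny"
    if is_admin or is_owner:
        return "allow"
    return "deny"

# Flat list of all 2**3 cases: easy to read, diff, and test exhaustively.
table = {
    combo: decide(*combo)
    for combo in product([False, True], repeat=len(conditions))
}

assert table[(True, False, True)] == "allow"   # admin overrides lock
assert table[(False, True, True)] == "deny"    # owner blocked by lock
```

With 20 conditions the full table is a million rows, so in practice you'd only enumerate the combinations that can actually occur; but for a handful of interacting flags, the exhaustive table makes every case visible at once.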
Sometimes things are only the same temporarily and shouldn't be brought together.
This is a good article but the main thing that bugs me about it is that the author completely disregards germane overhead.
Germane overhead is about recognition and practice and, at scale, it matters just as much.
Intrinsic and extraneous overhead is about the information itself and how it’s presented.
Germane overhead is about the receiver so in order to make code accessibility a first-class citizen you can’t ignore it.
Ultimately, context, industries and teams vary so greatly that it doesn't make sense to quantify it.
What I've settled on instead is aiming for a balance between "mess" and "beauty" in my design. The hardest thing for me personally to grasp was that businesses are indeterministic whereas software is not, thus requirements always shift, and fitting this into the rigidity of computer systems is _difficult_.
These days, I only attempt to refactor when I start to feel the pain when I'm about to change the code.. and even then, I perform the bare minimum to clean up the code. Eventually multiple refactoring shapes a new pattern which can be pulled into an abstraction.
If you choose not to copy-paste the code, you'd better be damn sure the two places that use it are relying on the same concept, not just superficially similar code that's yet to diverge.
How real is this use case? Unless you switch projects really often, this is like a week per two years.
Perhaps we should focus on solving problems that are hard by nature, not by experience of a developer or other external factors.
At my current projects we drop all code comments except for some really tricky logic and very high level docs.
..we do?
Who created short stories as used in Tiktok/IG?
The first touch screen phone?
First social media app?
Was Google the first?
I mean I almost see the opposite of what you're saying..
There's no one rule. It takes experience and taste to make good guesses, and you'll often be wrong even so.
I've seen numerous places trying to hire someone to fix a 5-10 year mudball that has reached a point where progress is no longer possible without breaking something else which breaks something else and so on.
There is an endgame to the mudball and it does end in complete and total development stopping and systems that are constantly going offline and take weeks to get restarted. Most of the time the place will say: "Oh we've already had several consultants tell us the same thing" The same thing being the situation is hopeless and they are facing years of simply untangling the mess they made.
Usually the mudball is held together by a chain of increasingly shorter senior positions that keep jumping the sinking ship faster and faster. Finally they can no longer convince anyone sane to take on the ticking time bomb they have created and they turn to consultants.
Also my advice is often you should bring back person X that was at least familiar with the system at whatever salary they require. I am inevitably told that that person will literally not even take calls or emails from the company any more, every time. Thats how bad a real world mudball is.
That's true. One doesn't change one's mindset just from reading. Even after some mentorship the results are far from satisfying. Engineers can completely agree with you on the topic, only to go and do just the opposite.
It seems like the hardest thing to do is to build a feedback loop - "what decisions I made in past -> what it led to". Usually that loop takes a few years to complete, and most people forget that their architecture decisions led to a disaster. Or they just disassociate themselves.
1) As I mentioned, it has a lot of statistically significant correlations, including with all the Big 5 variables. Example: surveys show that the % of the overall population that is each type (like INFJ) is very consistent across time and populations.
2) Beyond that, you're right, there are a lot of personality systems with pros and cons. But Myers-Briggs has by far the most supporting materials, tools, ease of use, and so on. I think it's the quickest to make useful to the average person.
3) I've found it really helpful as a lens for self analysis in my own life.
Example: you have a config defined as Java/Go classes/structures. You want to check that the config file has the correct syntax. The non-DRY strategy is to describe its structure in an XSD schema (ok, ok, JSON Schema) and then validate the config. So you end up with two sources of truth, the schema and the Java/Go classes, which can drift apart and cause problems.
The DRY way is to generate the classes/structures that define the config from that schema.
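A toy sketch of that direction, with a plain dict standing in for the schema (a real setup would generate the Java/Go classes from a JSON Schema at build time; the keys here are invented):

```python
from dataclasses import make_dataclass

# Single source of truth: the schema (a stand-in for a real JSON Schema).
CONFIG_SCHEMA = {
    "host": str,
    "port": int,
    "debug": bool,
}

# Generate the config class from the schema instead of writing it by hand,
# so the class definition can never drift from the schema.
Config = make_dataclass("Config", CONFIG_SCHEMA.items())

def load_config(raw: dict) -> "Config":
    # Validate against the same schema that generated the class.
    for key, typ in CONFIG_SCHEMA.items():
        if key not in raw:
            raise ValueError(f"missing key: {key}")
        if not isinstance(raw[key], typ):
            raise TypeError(f"{key} must be {typ.__name__}")
    return Config(**raw)

cfg = load_config({"host": "localhost", "port": 8080, "debug": False})
assert cfg.port == 8080
```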
Doesn't need to be. The API tells me what it does, hopefully there is a test suite to assure me. I can add to the test suite if I have a question, or want to lock in a behavior.
When it's a third-party library, everyone assumes it's supposed to work; but when it's a system in the same repository, all of a sudden all bets are off, and you need to understand it fully to get anything done?
> It's good when things just work and you can use an API, but most of the time you have to dig whatever is under the rug.
If most changes require simultaneously changing multiple areas of the code, such that changing one system implies changing every other system with roughly equal probability, then it's not well designed.
I don't know what else to tell you. It's going to be hard to iterate or maintain that system, in part because it requires a high cognitive load. None of the code in such a system provides any cognitive leverage. You can't rule out, or infer behavior of a large amount of code, by reading a small amount.
If such a system is important, then part of the strategy has to be to improve the architecture.
Because of some quirk of the way my brain works, giant functions with thousands of lines of code don't really present a high cognitive load for me, while lots of smaller functions do. My "working memory" is very low (so I have trouble seeing the "big picture" while hopping from function to function), while "looking through a ton of text" comes relatively easily to me.
I have coworkers who tend to use functional programming, and even though it's been years now and I technically understand it, it always presents a ton of friction for me, where I have to stop and spend a while figuring out exactly what the code is saying (and "mentally translating" it into a form that makes more sense to me). I don't think this is necessarily because their code inherently presents a higher cognitive load - I think it's easier for them to mentally process it, while my brain has an easier time with looking at a lot of lines of code, provided the logic within is very simple.
It also depends how big the consequences to failure/bugs are. Sometimes bugs just aren't a huge deal, so it's a worthwhile trade-off to make development easier in change for potentially increasing the chance of them appearing.
Now there’s a new requirement that only applies to Europe and nowhere else, and it’s super easy and straightforward to change the infrastructure.
I don’t see how it was a poor choice to literally copy and paste configs that result in hundreds of thousands of lines of yaml and I have 25 yoe.
I remember using TCL in the 90s for my own projects as an embeddable command language. The main selling point was that it was a relatively widely understood scripting language with an easily embeddable off-the-shelf open source code base, perhaps one of the first of its kind (ignoring lisps.) Of course the limitations soon became clear. Only a few years later I had high hopes that Python would become a successor, but it went in a different direction and became significantly more difficult to embed in other applications than was TCL -- it just wasn't a primary use case for the core Python project. The modern TCL-equivalent is Lua, definitely a step up from TCL, but I think if EDA tools used Lua there would be plenty of hand-wringing too.
Just guessing, but I imagine that at the time TCL was adopted within EDA tools there were few alternatives. And once TCL was established it was going to be very hard to replace. Even if you ignore inertia at the EDA vendors, I can't imagine hardware engineers (or anyone with a job to do) wanting to switch languages every two to five years like some developers seem happy to do. It's a hard sell all around.
I reckon the best you can do is blame the vendors for (a) not choosing a more fit-for-purpose language at the outset, which probably means Scheme, or inventing their own, (b) or not ripping the bandaid off at some point and switching to a more fit-for-purpose language. Blaming (b) is tough though, even today selecting an embedded application language is vexed: you want something that has good affordances as a language, is widely used and documented, easily embedded, and long-term stable. Almost everything I can think of fails the long term stability test (Python, JavaScript, even Lua which does not maintain backward compatibility between releases).
Downsides with your approach are:
1. Now whenever you want to change something both in Europe and (assuming) USA you have to do it in 2 places. If the change is the same for both, in my system, you could just update the default/shared config. If the change is different for both it's equally easy, but faster, since the overrides are smaller files.
2. It's not clear what the difference is between Europe and USA if there is 1 line different amongst thousands. If there are more differences in the future, it becomes increasingly difficult to tell the difference easily.
3. If in the future you also need to add Africa, you just compounded the problems of 1. and 2.
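The default-plus-overrides idea from point 1 can be sketched with a simple dict merge (the region names and config keys are hypothetical):

```python
# Shared defaults plus small per-region overrides, instead of full copies.
BASE = {
    "replicas": 3,
    "log_level": "info",
    "data_residency": None,
}

OVERRIDES = {
    "europe": {"data_residency": "eu-west-1"},  # the only real difference
    "usa": {},
}

def config_for(region: str) -> dict:
    # Later dict wins on key conflicts, so overrides shadow the defaults.
    return {**BASE, **OVERRIDES[region]}

# A shared change is made once, in BASE; a regional change stays small,
# and the override file *is* the diff between regions.
assert config_for("europe")["data_residency"] == "eu-west-1"
assert config_for("usa")["replicas"] == config_for("europe")["replicas"]
```

Adding Africa is then one more small entry in `OVERRIDES`, not a third full copy.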
Sure, we could take the Foo, Bar, and Baz tables that share 80-90% of common logic and have them inherit from a common, shared, abstract component. We've discussed it in the past. Maybe it's the better solution, maybe not. But it would mean that instead of maintaining 3 component files and 3 test files, which are very similar, and when we need to change something it is often a copy-paste job, instead we'd have to maintain 2 additional files for the shared component, and when that has to change, it would require more work as we then have to add more to the other 3 files.
Such setups can often cause a cascade of tests that need updating and PRs with dozens of files changed.
Also, there are many parts of our project where things could be done much better if we were making them from scratch. But, 6 years of changing requirements and new features and this is what we have - and at this point, I'm not sure that having a shared component would actually make things easier unless we rewrite a huge amount of the codebase, for which there is no business reason.
https://www.youtube.com/watch?v=8bZh5LMaSmE
Worth watching in its entirety, but the quote is from ~13:59 in that video.
Unless it's a rule prohibiting complexity by removing technologies. Here's a set of rules I have in my head.
1. No multithreading. (See Mozilla's "You must be this high" sign)
2. No visitor pattern. (See grug oriented development)
3. No observer pattern. (See django when signals need to run in a particular order)
4. No custom DSL's. (I need to add a new operator, damnit, and I can't parse your badly written LALR(1) schema).
5. No XML. (Fight me, I have battle scars.)
That's not true. There's plenty of beginner programmers who will benefit from this.
But I'm not experienced enough to tell whether it is my inexperience that causes the difficulty, or whether it is indeed the unnecessary complexity that causes it.
Somewhat off-topic, that's one usual failure mode of "DRY" code. Code is de-duplicated at a visual level rather than in terms of relevant semantics, so that changes which should only affect one path either affect both or are very complicated to reason about because of the unnecessary coupling.
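A tiny illustration of that failure mode (the tax/fee example is invented):

```python
# Visually identical, semantically different: both "add 10%", but one is a
# tax rule and the other a business fee. Merging them into a single
# add_ten_percent() would couple two change paths that evolve separately:
# a change in the law would silently change the service fee too.

TAX_RATE = 0.10       # set by law; changes when the law changes
SERVICE_FEE = 0.10    # set by the business; changes with pricing strategy

def price_with_tax(price: float) -> float:
    return price * (1 + TAX_RATE)

def price_with_service_fee(price: float) -> float:
    return price * (1 + SERVICE_FEE)

# Today they agree, but only by coincidence.
assert price_with_tax(100) == price_with_service_fee(100)
```

The duplication here is visual, not semantic, so "drying" it out creates exactly the unnecessary coupling the parent comment describes.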
This one is my particular pet-peeve. But I often think that the reason is because I suck. I'm going to read "grug".
I also hate one-liner functions.
From an operational perspective it’s much more important to ensure the code is clear and readable during an incident.
Overrides are like inheritance. They are themselves complex and add unnecessary cognitive load.
Composition is better for the common pieces that never change across regions. Think of an import statement of a common package into both the Europe and North America folders.
I easily see the one line diff among hundreds of thousands using… diff.
Regarding Africa, we’ve established 1 is a feature and 2 is a non-issue, so I’d copy it again.
This approach scales both as the team scales and as the infrastructure scales. Teammates can read and comprehend much more easily than hierarchies of overrides, and changes are naturally scoped to pieces of the whole.
Sometimes things need to be complex -- well that's okay. The real trick is to not put complexity into places it doesn't belong.
I am afraid the cat is out of the bag, and there is no turning back with GenAI and coding – juniors have got a taste of GenAI-assisted coding and will persevere. The best we can do is educate them on how to use it correctly and responsibly.
The approach I have taken involves small group huddles where we talk to each other as equals, and where I emphasise the importance of understanding the problem space, the importance of the depth and breadth of knowledge, i.e. going across the problem domain – as opposed to focusing on a narrow part of it. I do not discourage the junior engineers from using GenAI, but I stress the liability factor and the cost: «if you use GenAI to write code, and the code falls apart in production, you will have a hard time supporting it if you do not understand the generated code, so choose your options wisely». I also highlight the importance of simplicity over complexity of the design and implementation, and that simplicity is hard, although it is something we should strive for as an aspiration and as a delivery target.
I reflect on and adjust the approach based on new observations, a feedback loop (commits) and other indirect signs and metrics – this area is still new, and the GenAI-assisted coding framework is still fledgling.
I think Big Orgs need to develop younger promising talent by letting them build small green fields projects. Essentially fostering startups inside the organisation proper. Let them build and learn from mistakes (while providing the necessary knowledge; you can actually learn most of this from books, but experience is the ultimate teacher). Otherwise you end up with 5 year experienced people who cannot design themselves out of a paper bag.
Watch ‘Simple made Easy’ by Rich Hickey; a classic from our industry. The battle against complexity is ever ongoing. https://youtu.be/SxdOUGdseq4?feature=shared
* Our coding standards require that functions have a fairly low cyclomatic complexity. The goal is to ensure that we never have a function which is really hard to understand.
* We also require a properly descriptive header comment for each function and one of the main emphases in our code reviews is to evaluate the legibility and sensibility of each function signature very carefully. My thinking is the comment sort of describes "developer's intent" whereas the naming of everything in the signature should give you a strong indication of what the function really does.
Now is this going to buy you good architecture for free, of course not.
But what it does seem to do is keep the cognitive load manageable, pretty much all of the time these rules are followed. Understanding a particular bit of the codebase means reading one simple function, and perhaps 1-2 that are related to it.
Granted we are building websites and web applications which are at most medium fancy, not solving NASA problems, but I can say from working with certain parts of the codebase before and after these standards, it's like night and day.
One "sin" this set of rules encourages is that when the logic is unavoidably complex, people are forced to write a function which calls several other functions that are not used anywhere else; it's basically do_thing_a(); do_thing_b(); do_thing_c();. I actually find this to be great because it's easy to notice and tells us what parts of the code are sufficiently complex or awkward as to merit more careful review. Plus, I don't really care that people will say "that's not the right purpose for functions," the reality is that with proper signatures it reads like an easy "cliffs notes" in fairly plain English of exactly what's about to happen, making the code even easier to understand.
Sometimes you do have the domain expertise to make the judgment call.
A recent example that comes to mind is a payment calculation. You can go ahead and tie that up in a nice reusable function from the get go - if you've ever dealt with a bug where payment calculations appeared different in some places and it somehow made it in front of a customer you're well aware of how painful this can be. For some things having a single source of truth outweighs any negatives associated with refactoring.
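For instance, a single hypothetical `payment_total` function that every surface (invoice, checkout page, email receipt) calls, so they can never disagree:

```python
# One source of truth for the payment calculation. The formula and
# parameter names are invented for illustration.
def payment_total(subtotal: float, tax_rate: float, discount: float) -> float:
    return round((subtotal - discount) * (1 + tax_rate), 2)

# Every surface computes the total the same way, by construction.
invoice_total = payment_total(100.0, 0.08, 10.0)
checkout_total = payment_total(100.0, 0.08, 10.0)
assert invoice_total == checkout_total == 97.2
```

If a bug exists, it exists once and is fixed once, instead of appearing as a discrepancy a customer notices between two screens.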
It explains so much of what has been bothering me about what I work on at work, and now I understand why and some of what to do about it.
The originating example for an Amanda is someone who used her brain to recognize that the existing code was clumsily modeling a state machine and clarified the code by reframing it in terms of well-known vocabulary. It's technically an abstraction but because every dev is taught in advance how they work it's see-through and reduces cognitive load even when you must peel back the abstraction to make changes.
Junior programmers too often make the mistake of thinking the code they write is intended for consumption by the machine.
Coding is an exercise in communication. Either to your future self, or some other schmuck down the line who inherits your work.
When I practice the craft, I want to make sure years down the line when I inevitably need to crack the code back open again, I'll understand what's going on. When you solve a problem, create a framework, build a system... there's context you construct in your head as you work the project or needle out the shape of the solution. Strive to clearly convey intent (with a minimum of cognitive load), and where things get more complicated, make it as painless as possible for the next person to take the context that was in your head and reconstruct it in their own. Taking the patterns in your brain and recreating them in someone else's brain is in fact the essence of communication. In practice, this could mean including meaningful inline comments or accompanying documentation (eg. approach summary, drawings, flowcharts, state change diagrams, etc). Whatever means you have to efficiently achieve that aim. If it helps, think of yourself as a teacher trying to teach a student how your invention works.
More coders are needed than those to whom these things are “simple”, I understand. But if you have problems with these, I would definitely try to pivot to something else, like managerial positions, especially with AI upon us. Of course, if you are fine being an “organic robot”, then it’s fine, but you’ll never really get why this profession is awesome. You’ll never have the leverage.
All work represents a social entity (person/persons) and when you're the one calling out issues, pushing for proactive measures, and pushing against bad practices/complexity you're typically taking issue with _someone's_ work along the way. This is often seen as a "squeaky wheel" or "noisy Nancy" - or hell, outright antisocial. Most of the time it is not in your best interest to be this person.
The people who keep their nose down + mouth shut, those who prioritize marketing their work, and the sycophants are the ones who have longevity and upward trajectory - this is corporate America work culture.
There will be people who look at pure Green and pure Blue and ask for an Emerald color, to get RGBE instead, but that's not how the RGB framework works. And I can't get rid of the feeling that Amanda is that Emerald color people are clamoring for.
I also kinda get why Microsoft got rid of the system for something more abstract.
I'd only lean towards intermediate variables if a) there's lots of smaller conditionals being aggregated up into bigger conditionals which makes line-by-line comments insufficient or b) I'm reusing the same conditional a lot (this is mostly to draw the reader's attention to the fact that the condition is being re-used).
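Case (a) might look like this (the checkout rule and field names are invented):

```python
def can_checkout(cart: dict, user: dict) -> bool:
    # Intermediate variables name the sub-conditions, so the final
    # expression reads like the business rule it encodes.
    has_items = len(cart["items"]) > 0
    within_credit = cart["total"] <= user["credit_limit"]
    account_ok = not user["suspended"]
    return has_items and within_credit and account_ok

user = {"credit_limit": 100, "suspended": False}
assert can_checkout({"items": ["book"], "total": 30}, user)
assert not can_checkout({"items": [], "total": 0}, user)
```

Inlining all three conditions into one boolean expression would need either a long line or line-by-line comments; the named intermediates carry the same information without them.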
"A single page on Doordash can make upward of 1000 gRPC calls (see the interview). For many engineers, upward of a thousand network calls nicely illustrate the chaos and inefficiency unleashed by microservices. Engineers implicitly diff 1000+ gRPC calls with the orders of magnitude fewer calls made by a system designed by an architect looking at the problem afresh today. A 1000+ gRPC calls also seem like a perfect recipe for blowing up latency. There are more items in the debit column. Microservices can also increase the costs of monitoring, debugging, and deployment (and hence cause greater downtime and worse performance)."
Otherwise I mostly agree.
These individual functions are easier to reason about since they have specific use cases, you don't have to remember which combinations of conditions happen together while reading the code, they simplify control flow (i.e. you don't have to hack around carrying data from one if block to the next), and it uses no "abstraction" (interfaces) just simple functions.
It's obviously a balance, you'll still have some if statements, but getting rid of mutually exclusive conditions is basically a guaranteed improvement.
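A small sketch of splitting mutually exclusive branches into purpose-specific functions (the order shapes and field names are made up):

```python
# Each mutually exclusive case gets its own simple function, so there is
# no state to carry from one if block to the next.
def ship_digital(order: dict) -> str:
    return f"emailed license to {order['email']}"

def ship_physical(order: dict) -> str:
    return f"mailed package to {order['address']}"

def ship_order(order: dict) -> str:
    # The only remaining conditional is the dispatch itself.
    if order["kind"] == "digital":
        return ship_digital(order)
    if order["kind"] == "physical":
        return ship_physical(order)
    raise ValueError(f"unknown order kind: {order['kind']}")

assert ship_order({"kind": "digital", "email": "a@b.c"}) == "emailed license to a@b.c"
```

Each function can now be read, tested, and changed without reasoning about the other case at all.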
Finding flow while coding is a juggling act to keep things in the Goldilocks zone: not too hard, not too easy.
This is tricky on an individual level and even trickier for a team / project.
Coding is communicating how to solve a problem to yourself, your team, stakeholders and lastly the computer.
The Empathic Programmer?
In order to get a sense of what code is harder to understand you will do better to read code and have others read your code. A good takeaway is to keep this in mind (amongst many other factors) and to understand code needs to be maintained, extended, adapted etc.
The ideas are still useful. The danger is blindly applying rules. As long as the reader knows not to apply any of the suggestions if they don't understand why and have relevant experience ;)
https://github.com/fzipp/gocyclo
> * We also require a properly descriptive header comment for each function and one of the main emphases in our code reviews is to evaluate the legibility and sensibility of each function signature very carefully. My thinking is the comment sort of describes "developer's intent" whereas the naming of everything in the signature should give you a strong indication of what the function really does.
https://github.com/mgechev/revive
> Now is this going to buy you good architecture for free, of course not.
It's not architecture to tell people to comment on their functions.
Also FTR, people confuse high cyclomatic complexity with code automatically being confusing. The weirdest example I have ever had to deal with: a team had unilaterally decided that the 'else' keyword could never be used in code.
Elvis and Einstein joined powers to create 14 new javascript package managers over a handful of years while Mort tore his hair out.
What made your team decide on that rule? Could your team decide to drop it since it hinders improving the design of your code?
Not weird at all:
https://medium.com/@matryer/line-of-sight-in-code-186dd7cdea...
Not everything is complicated, most functions don't need comments, why require it? Just fix complexity when it arises. Don't mandate that you can't make any complexity.
https://eslint.org/docs/latest/rules/no-else-return#rule-det...
But never using it is crazy.
Maybe one day we will abstract it away like the goto keyword (goto is a keyword in Go, and other languages still, but I have only seen it used in the wild once or twice in my 7 or 8 years of writing Go)
Goto is still used in almost every language, but it's abstracted away, hidden in loops and conditionals (which Dijkstra said was a perfectly acceptable use of goto), presumably to discourage its direct use to jump to arbitrary points in the code.
So in software development there may be an argument for always structuring projects the same way. Standards are good, even when they're bad, because one of their main benefits is familiarity.
Edit: sometimes comments are the least of all evils, and you should use them to explain the constraints that led to the code; they just shouldn't be mandatory.
This is really a key takeaway here: Always keep your audience in mind. When programming, you have two audiences: the machine executing the code, and fellow programmers maintaining the code. Both are important, but the latter is often neglected and is what the article is about. Optimize for your human audience. What will make it easier for the next person to understand this? Do that.
Like public speaking or writing an article. A great talk or article happens when the speaker or author knows exactly how the audience will perceive it.
While the logic behind it sounds reasonable, REST does the exact opposite with the same goal: simplicity and ease of learning, i.e. reduced mental load. I know there are other reasons for REST/SOAP/GraphQL, etc. It still makes mental load a somewhat subjective matter to me.
What is this bug in software people's brains that keeps thinking "I can come up with a perfect idea that is never wrong" ? Can a psychologist explain this to me please?
Like, scientists know this is dumb. The only way something can be perceived as right, scientifically, is if lots of people independently test an idea, over and over and over and over again, and get the same result. And even then, they just say it's true so far.
But software people over here like "If I spend 15 minutes thinking about an idea, I can come up with a fundamental principle of everything that is always true forever." And sadly the whole "fundamental principle" is based in ignorance. Somebody heard an interesting-sounding term, never actually learned what it meant, but decided to make up their own meaning for it, and find anything else in their sphere (software) that backs up their theory.
If they'd at least quoted any of the academic research on cognitive load from the past 35 years, maybe I'd concede I'm blowing this out of proportion. But nope. This is literally just a clickbait rant, based on vibes, backed up by quotes from blogs. The author doesn't seem to understand cognitive load at all, and their descriptions of what it is, and what you should do in relation to it, are all wrong. The article doesn't even mention the three types of cognitive load. And one of the latest papers on the subject (Orru G., Longo L. (2019)) basically came to the conclusion that 1) the whole thing is very complex, and 2) all the previous research might be bunk, or at least need brand-new measurement methods, so... why is anyone taking this as if it's fact?
But I'm not really bothered by the ignorance. It's the ego that kills me. The idea that these random people who know nothing about a subject are rushing to debate this, as if this idea, or these people's contributions, have merit, just because they think they're really smart.
The reason REST largely succeeded (or, rather, what I like to refer to as "REST-lite") is because people who wanted to build stuff quickly on the web realized "Hey, I don't need all this protocol complexity (see: SOAP), I can just make simple, human-readable API calls over the same HTTP layer my browser uses anyway".
There is other stuff in "official REST" that I think has some value, like the noun/verb structure of API routes, but shoehorning API-level error codes into HTTP status codes has been a disaster IMO. Every time I've seen this done I've seen the same issues come up again and again and new developers constantly have to rediscover solutions and problem spots. Does "404" mean the API endpoint doesn't exist, or that particular resource doesn't exist? How do I map my very specific API error to rather generic HTTP status codes? Does a status code error mean a problem with the networking or the application?
Doing something "better" here would actually not bring any value, because it would mean that developers have to remember that this one thing is done differently.
That's a trap I would say many mid-level devs fall into: they learn how to do things better, but they increase cognitive load for the rest of the developers just by doing things differently.
Replacing this standard with custom strings in the response body is terrible advice, even if we might all wish that HTTP status codes had been human-readable strings rather than numbers. Augmenting the standard response with additional custom information is still something you can and should do, as a cherry on top, or when you have many conditions falling under the same standard code. Just don't shoehorn something custom into 418 I'm a teapot because it happened to be unused.
(I sometimes "ask" questions for something it took me a few back and forths through code to get so they'd think about how it could be made clearer)
Unfortunately, most people focus on explaining their own frame of mind (insecurity?) instead of thinking about how they can be the best "teacher".
I feel this in my soul. But I'm starting to understand this and accept it. Acceptance seems to lessen my frustration when discussing with architects who seemingly always take the opposite stance to me. There is no right or wrong, just different trade-offs depending on which rule or constraint you are prioritizing in your mind.
For instance, the article itself suggests using early/premature returns, while those are sometimes compared to "goto", making the control flow less obvious/predictable (as paxcoder mentioned here). Intermediate variables, just like small functions, can easily complicate reading the code (in the example from the article, one would have to look up what "isSecure" means, while "(condition4 && !condition5)" would have shown it at once, and an "is secure" comment could be used to assist skimming). As for HTTP codes, those are standardized and not dependent on the content, unlike custom JSON codes: most developers working with HTTP would recognize them without additional documentation. And it goes on and on: people view different things as good practices and as simpler, depending (at least in part) on their backgrounds. If one considers simplicity, perhaps it is best to also consider it subjective, taking into account to whom it is supposed to look simple. I think sometimes we try to view "simple" as something more objective than "easy", but unless it is actually measured with something like Kolmogorov complexity, the objectivity does not seem to be there.
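The intermediate-variable trade-off mentioned above fits in a few lines; a sketch in Go, with hasTLS/isDeprecated as made-up stand-ins for the article's condition4/condition5:

```go
package main

import "fmt"

func main() {
	hasTLS, isDeprecated := true, false

	// Inline: the full condition is visible at the use site,
	// but the reader must infer the intent.
	if hasTLS && !isDeprecated {
		fmt.Println("inline: secure")
	}

	// Explaining variable: the intent is named,
	// but the detail is one lookup away.
	isSecure := hasTLS && !isDeprecated
	if isSecure {
		fmt.Println("named: secure")
	}
}
```

Both branches are equivalent; which one is "simpler" depends on whether the reader needs the intent or the mechanics at that moment, which is the commenter's point.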
Why do they insist on A over B? What trade offs were considered? Why are these trade offs less threatening than other trade offs? What previous failures or difficulties led them to put such weight on this problem over others?
Sometimes it's just ego or stubbornness or routine¹. That can and should be dismissed IMO. Even if through these misguided reasons they choose the "right" architecture, even if the outcome turns out good, that way of working is toxic and bad for any long term project.
More often, though, there are good, solid reasons behind choices. Backed by data or even science. Things I didn't know, or see differently, or have data and scientific papers that "prove" the exact opposite of. But it doesn't matter that much: as long as we all understand what we are prioritizing, what the trade-offs are, and how we mitigate the risks of those trade-offs, it's fine.
¹ The worst, IMO, is the "we've always done it like this" trench. An ego can be softened or taken off the team. But unwillingness to learn and change, instilled in team culture, is an almost guaranteed recipe for disaster.
In trade-off engineering, long-term maintainability is one of many variables to optimize, and finite resources need to be allotted to it.
When I read this article, I get the feeling it's more likely that he is obsessing over long-term maintainability while his app has a user count of zero. This malady usually comes from the perspective of being a user: one finds that the experience of writing some code is a "bad experience", so they strive to improve it or learn how to build a good "coder experience". The right answer is to understand that one is stepping into the shoes of the plumber, and it will be shitty; just gotta roll up your sleeves.
Don't get me wrong, there's a lot of wisdom here, but to the extent that there is, it's super derivative and well established; it's just the kind of stuff a developer learns in their first years of software by surfing the web and picking up DRY, KISS and the rest of the folklore. To some extent this stuff is useful, but there are diminishing returns, and at some point you have to ship and focus on the product instead of obsessing over the code.
It's an important distinction in terms of priorities. I personally think the experience of the user is orders of magnitude more important than engineer cognitive load.