In a way, experience is knowing what things will cause friction and having solutions ready to overcome them. Friction reduction.
As a Dutch ex-Navy officer, I can say we just called this "friction", since everyone had read von Clausewitz during officer training and was familiar with the nuances of the term. Militaries overwhelmingly address this problem by increasing redundancy, so that there are as few single points of failure as possible. It is very rare to encounter a role that can only be filled by a single person; a well-designed military organization will always have a plan for replacing any single individual should they unexpectedly die.
But it is not a case of "oh, we have solved friction". It trades the "combat" friction of having to wait for orders (possibly compounded by the weather, comms jamming, your courier stepping on a mine, etc.) for the "strategy" friction of subordinates taking initiatives they shouldn't have taken. But I'd argue (as modern armies do) that the tradeoff is worth it, and the strategic level has more resources for overcoming its friction than combat troops engaged in a fight. It wasn't always seen that way, though (cue the famous tale of the Roman consul Manlius [0], who executed his own son for disobeying orders, even though the son was victorious).
[0] https://www.rijksmuseum.nl/en/collection/SK-A-613 https://www.heritage-history.com/index.php?c=read&author=haa...
Viewing friction through the lens of increasing entropy helps.
You can think of a graph whose nodes are the states of various systems (humans, software services, databases, etc.) and whose edges are dependencies between them. Reducing the states directly reduces the entropy. Reducing the dependencies reduces the rate at which entropy increases in the system.
This leads directly to various software principles: separation of concerns, parse-don't-validate, denormalisation, state reduction, narrow interfaces with deep implementations, KISS, readability, etc. All of these reduce friction.
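As a toy illustration of the entropy view (all names and numbers here are invented): treat each component's reachable states as multiplying into the system's total state space, so the "entropy" is the sum of the per-component logs, and every dependency is a channel through which one component's state changes can invalidate another's.

    import math

    # Hypothetical system: states per component, plus who depends on whom.
    nodes = {"ui": 4, "api": 8, "db": 16}
    deps = [("ui", "api"), ("api", "db")]

    # Worst case, the state space is the product of the parts, so the
    # entropy (log of the state count) is the sum of the parts' logs.
    entropy = sum(math.log2(s) for s in nodes.values())
    print(f"{entropy:.1f} bits")  # 2 + 3 + 4 = 9.0 bits

    # State reduction / KISS: shrinking any one component lowers the total.
    nodes["db"] = 4
    print(f"{sum(math.log2(s) for s in nodes.values()):.1f} bits")  # 7.0 bits

Removing an edge from deps doesn't change this worst-case count, but it removes a path along which one component's state churn spreads to another, which is the "rate of increase" point above.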
As such, I find the "Addressing friction" section in the article lacking, though it does highlight some useful points.
"I can make a brigadier general in five minutes, but it is not easy to replace a hundred and ten horses" -- attr. Lincoln (exact words vary by source)
It's noticeable how few computer wargames simulate any of this, instead allowing for frictionless, high-speed micromanagement.
[0] https://quoteinvestigator.com/2011/11/21/graveyards-full/?am...
If the military used Microsoft, America would be in ruins.
In military Command and Staff Training (e.g. for training large HQs), the solution to this is that the trainees don't use the simulations themselves. Instead they issue commands over emulated C2 systems to role players ('lower controllers') pretending to be subordinate forces, who then execute the orders using the sim and report back what has happened, as far as they can tell. This generates quite a lot of useful friction.

Another group of role players ('higher controllers') represents the HQ superior to the trainees' HQ and in turn issues them orders. The role players and the opposing force also follow direction from exercise control (EXCON) and can easily be used to dial up the pressure on the trainees. There is a small industry (e.g. [0]) supporting the exercise-management systems that keep track of the various 'injects' fed to the trainees via the role players, by simulated ISR assets, etc.
Friction is simulated in many computer games, the problem is that taking it too far would make them unenjoyable or too niche. Remember they are games first and simulations second (with exceptions; precisely the ones that are too niche).
Friction in computer games is simulated in multiple ways:
- The most obvious one: randomized results. Your units do not do a set damage nor do they always succeed, but instead the PRNG plays a role (e.g. most combat rolls in most wargames, but also whether a missile launched within parameters tracks or fails to in DCS).
- Fog of war: many wargames do not show areas you haven't explored or where you do not have scout units.
- Morale: many wargames simulate morale; units may break if sufficiently scared (e.g. the Total War games do this) and some may even rush to charge without your command, jeopardizing your plans (e.g. Total War, Warhammer: Dark Omen). In the Close Combat series, your soldiers may become demoralized or even refuse to follow orders if you order them to walk through enemy fire or they take too many casualties.
- Some have external unpredictable hazards jeopardizing your unit (e.g. sandworms in Dune II).
And many others. So wargames do attempt to model friction; the problem is that if you make this too extreme, the game stops being fun for the majority of players. The illusion of control is an important aspect of gameplay.
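A minimal sketch of how the first and third mechanisms in the list above often look in code (the numbers and names are invented, not taken from any particular game):

    import random

    def resolve_attack(attack: int, defence: int, morale: float) -> str:
        # Randomized results: the roll, not just the stats, decides it.
        roll = random.randint(1, 6)
        if attack + roll <= defence:
            return "attack repelled"
        # Morale: a shaken defender may break instead of holding.
        if random.random() > morale:
            return "defender breaks and routs"
        return "defender takes losses but holds"

    random.seed(42)
    for _ in range(3):
        print(resolve_attack(attack=3, defence=6, morale=0.5))

The player can learn the odds and plan around them, which preserves mastery; fully deterministic outcomes would remove the friction, and fully opaque ones would remove the fun.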
Games work on a tight gameplay loop where the player can have feelings of agency (they can influence what happens at all) and mastery (they can get better at influencing what happens with practice). For this you need a relatively predictable relationship between actions and outcomes. Having the game randomly lose the orders you give to a unit, without any feedback, is kinda the opposite of that.
As far as battling it goes, my experience is that you can get a lot of mileage by just spending an extra minute or three making something a little cleaner, more readable, less prone to failure, etc.
Ender is able to see the full battlefield (modulo fog of war) because of ubiquitous faster-than-light sensor technology. But he doesn't control any ships directly. Instead, he issues orders to his subordinates who "directly" control squads of ships.
I've always wondered if anyone's ever made something like this: a co-op war simulation game with instant visibility but divided, friction-laden actions. Nothing about it would be technically difficult. It would probably be socially difficult to find enough players.
A commander who can place buildings or resources, and ping locations, and has a birds eye view, and then grunts on the ground trying to do what's actually needed.
Agile supports some uncertainty, but often a mile is taken when an inch is given.
You have to take it in stride though. Developers stay employed for shipping bad or happy-path-only code all the time.
>Is friction important to individuals? Do I benefit from thinking about friction on a project, even if nobody else on my team does?
Even if you were to eliminate a lot of friction, the profit would go to the business anyway.
The military has different incentives, of course.
It's nice to see a more general theory out there.
See my other comment - lots of real military command training involves the trainees issuing orders to subordinates (role players) who interact with the simulation.
> It would probably be socially difficult to find enough players.
Military training finds them by using real soldiers as role-players (understanding how to handle an order is a useful secondary training effect) and there are also loads of ex-soldiers who will happily (for a small consultancy fee) support an exercise for a few days.
I think the tradeoff is practically mandatory for modern armies. The high mobility they require just to avoid artillery strikes and engagements with armor makes top-down command impossible to implement in a symmetric conflict.
It’s like John Lennon said:
“Life is what happens while you are busy making other plans.”
Instead of trying to eliminate or stigmatize it, it can be more productive to think of it as a creative input into your static system which can be harnessed for unexpected good.
(And personally I would rather live by the worldview of Lennon than a 19th century German general.)
Thriving as a SWE in a medium-to-big company is not about algorithms and data structures, it is about coping with and recovering from environment breakages, and having the skills to diagnose and fix the environments that you were forced to adopt last quarter and by this quarter are deprecated.
Our solution was that at least once a month we had a story to upgrade deps. But as each new person got the assignment they would immediately ask the question, “but upgrade what?” I didn’t have enough information at that point to care, so I told them to just pick something. Our dep pool wasn’t that big and any forward progress was reducing the total backlog so I figured they would pick something they cared about, and with enough eyeballs we would collectively care about quite a bit of the project.
Now, part of the reason this warranted a story is that we were concerned about supply-chain attacks on this project, so it wasn’t just a matter of downloading a new binary and testing the code. You also had to verify the signature of the library and update a document, and that was a process only a couple of us had previously used.
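For what it's worth, that verification step can itself become code rather than a document. A minimal sketch, using a pinned checksum in place of a full signature check for brevity (the constant and filename are placeholders, not real values):

    import hashlib
    import sys

    # Hypothetical: digest recorded in the repo when the dep was last reviewed.
    PINNED_SHA256 = "0000...placeholder"

    def verify(path: str) -> None:
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        if digest != PINNED_SHA256:
            sys.exit(f"checksum mismatch for {path}: got {digest}")
        print(f"{path} OK")

    # verify("library-1.2.3.tar.gz")  # hypothetical artifact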
Whether an inch or a mile, over time and with only local information it has you running around in an ant-mill pattern[0]. I've seen my share of such projects going nowhere fast.
> Friction matters more over large time horizons and large scopes, simply because more things can go wrong.
“Scope” is doing a lot of heavy lifting here, and I have met so many people who don’t get it that I find it dangerous to sum things up thusly. There’s a nonlinear feedback loop here that is the proverbial snake in the grass. Many people in your org think of incidents per unit of time, not per unit of work. If you have a hundred developers, the frequency of a 1% failure mode becomes a political hot potato.
When managers or QA complain that something is happening “all the time”, they mean it figuratively. I’ve seen multiple people say it about an event that happens on average once a month but one time happened twice in a week; that seems to be where the confirmation bias starts.
If you have a big enough drive array you will need to order new disks “all the time” and someone will question if you’ve picked a shitty brand because they personally have only experienced one dead drive in their professional life. It’s because humans are terrible at statistics.
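A back-of-envelope example of the per-unit-of-time versus per-unit-of-work gap (all numbers invented for illustration):

    # The same 1% failure mode, viewed per unit of time instead of
    # per unit of work.
    failure_rate = 0.01              # per deploy
    devs = 100
    deploys_per_dev_per_week = 5

    incidents_per_week = failure_rate * devs * deploys_per_dev_per_week
    print(incidents_per_week)        # 5.0 - "all the time", politically

One incident per hundred units of work becomes five incidents a week once a hundred people are doing the work, and nobody in the room is dividing by the denominator.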
Now, as to the friction of someone leaving to go home (“don’t deploy on Friday”): this is also a psychological problem, not friction.
The problem isn’t people going home. The problem is people rationalizing that it’s safe to leave or skip a check. They are deluding themselves and their coworkers that everything is fine and their evening plans don’t need to be interrupted. You can have the same problem on a Tuesday morning if there’s a going away lunch for a beloved coworker. Time constraints create pressure to finish, and pressure creates sloppiness, complacency. (It’s one of the reasons I prefer Kanban to Scrum.)
Don’t start things you can’t finish or undo at your leisure, because you will trick yourself into thinking you have met your standards of quality when you have not. As Feynman said, the first principle is that you must not fool yourself, and you are the easiest person to fool.
I have to dig a lot, or try to bring a problem into N.A. office hours, before I see how much rote work is required to do a task, and it’s almost always shockingly high. We write software for a living. Nobody should be running a static runbook. We should be converting it to code.
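The conversion doesn’t have to be clever. A minimal sketch of a runbook-as-code, where each manual step becomes a checked command (the commands are placeholders, not a real procedure):

    import subprocess

    # Each runbook step becomes a command that must succeed before
    # the next one runs, so the rote work executes unattended.
    RUNBOOK = [
        ["git", "pull", "--ff-only"],
        ["make", "test"],
        ["make", "deploy"],
    ]

    for step in RUNBOOK:
        print("->", " ".join(step))
        subprocess.run(step, check=True)  # stop at the first failing step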
I’ve often suspected it’s a job security thing.
Games are entertainment, and as with a novel or film, the authors pick and choose to get the level of verisimilitude they think the player/reader/viewer might want. Who wants to take an in-game break to go to the bathroom? When you pick something up (extra ammo) it's instantaneous -- and how come there are so many prefilled magazines lying around anyway? And when you get shot your shoulder doesn't hurt and you don't spend any time in the hospital.
The responsible thing to do is to create a new service and then write wrappers that emulate the old service's interface and business logic, before finally turning off the old service at some point in the distant future.
But it's more profitable to make a shiny new service and end support for the old one. Capture the profits and pass the costs on as externalities to developers.
This may seem like a small inconvenience, but I have watched basically all of the software that I have ever written over a career become unrunnable without significant modification. The friction of keeping up with deprecation has expanded to take up almost all of my time now. In other words, the rate of deprecation determines the minimum team size. Where once 1-3 people could run startups, now it's perhaps 5-10 or more. It's taken the fun out of it. And if it's not fun anymore, seriously, what is the point.
Take the following sentences for example.
> If people have enough autonomy to make locally smart decisions, then they can recover from friction more easily.
Having autonomy has no relationship to recovering from friction more easily. And why would autonomy cause one to make locally smart decisions? The person with the autonomy might be the one causing the friction in the first place, and might also be the one making bad decisions.
> Friction compounds with itself: two setbacks are more than twice as bad as one setback. This is because most systems are at least somewhat resilient and can adjust itself around some problem, but that makes the next issue harder to deal with.
Why would being resilient to one type of problem cause not being resilient to another type of problem? And why would this cause the friction to compound itself?
Incidentally, ChatGPT does produce an equally (if not more) plausible article when I ask it to produce an article on software friction.
I'm curious. Can you give a few examples of such Apple services?
In my experience, it is simplification that reduces friction: accepting constraints and limitations, focusing code and architecture. Removing degrees of freedom reduces execution risk exposure.
The main feature is the working state: how to keep that accurate, working, and replicable. Avoid long transactions, splayed-out invariants, backup intervals, complex restart procedures - any exposure to being outside the working state, or in some provisional or pending state.
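A small sketch of the "avoid long transactions" point, using sqlite3 for concreteness (the schema is invented): process one row per transaction, so the database is in a working state at every instant and a crash loses at most one step.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)")
    conn.executemany("INSERT INTO jobs (state) VALUES (?)", [("pending",)] * 3)
    conn.commit()

    # One small transaction per job, instead of one long transaction
    # holding every row in a provisional state until the very end.
    for (job_id,) in conn.execute("SELECT id FROM jobs").fetchall():
        with conn:  # commits on success, rolls back on error
            conn.execute("UPDATE jobs SET state = 'done' WHERE id = ?", (job_id,))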
Memoir '44 [1] divides the board into three segments (a center and two flanks), and the cards you use to issue orders always apply to a specific segment (e.g. the right flank). Lacking the cards in your hand to issue the orders you might want simulates those orders not making it to the front lines.
Undaunted [2] explicitly has Fog of War cards which you can't do anything with. They gum up your deck and simulate that same friction of imperfect comms.
Atlantic Chase [3], a more complex game, uses a system of trajectories to obscure the exact position of ships and force you to reason about where they might be on any given turn. The Hunt [4] is a more accessible take on the same scenario (the Hunt for the Graf Spee) that uses hidden movement for its friction.
I don't know how many of these ideas leap across to computer games, but designing friction into the experience has been a part of tabletop wargames for a long time.
[1]: https://boardgamegeek.com/boardgame/10630/memoir-44
[2]: https://boardgamegeek.com/boardgame/268864/undaunted-normand...
[3]: https://boardgamegeek.com/boardgame/251747/atlantic-chase-th...
>Even if you were to eliminate a lot of friction, the profit would go to the business anyway.
At the intersection of software development, the military, and whether or not friction is important to individuals... I'm reminded of the USDS, and IIRC some of their work to improve workflows and discoverability around (specific) VA benefits.
If you've ever listened to vets talking about the VA, they're rarely complimentary about it, frequently complaining about how hard it is to find the entry point to get the benefits they need, and how hard navigating the process is after the entry point is found.
Reducing that friction means more benefits are exercised, at a higher cost to the government. OTOH, maybe the overall cost is lower if fewer phone calls are answered explaining how to do a thing, and fewer forms are filled out justifying a thing.
You shouldn't even be able to watch the action in detail, Total War style, as you might have a hill, some messengers, and low-power binoculars. Games have attempted to copy this, but it's a curiosity, not something that brings sales.
My surprise is that neither discussion really leans in on the metaphor. Friction, as a metaphor, is really strong, because the way you deal with things changes vastly as a technology matures. Consider how much extra lubricant was necessary in early combustion engines compared to modern electric motors.
What's more, since you cannot always hope to eliminate the external cause of friction, you can design around it in different ways: either by controlling which parts of a system are replaced more frequently, or by eliminating the need for them entirely, if and when you can. You can trace the different parts of a modern car to see how this has progressed.
Indeed, the modern car is a wonderful exploration, as we had some technologies that came and went entirely to deal with operator involvement. Manual transmissions were largely replaced by automatics for a variety of reasons, not the least of which was wear and tear on the clutch. And transmissions seem to be going away entirely, since electric motors are so different from combustion engines.
I've learned about this term from the economics side rather than the military side. It's all the hidden factors that make things more expensive. Transaction costs. I do think this is a good analogy for "drag" in software development, something along the lines of "technical debt".
Also, a lot of automatic transmission designs use a clutch behind the scenes, at least in older models. But I'm nitpicking the analogy's transfer to the clutch system here.
I fully agree otherwise that friction is the best term for describing what happens across the system and within social interactions.
* Don't promise release dates.
* Stick with known tools/libraries/frameworks where practical.
* Automate (with comments/documentation) what you can. Checklists for everything else.
* Prioritise fixing bugs over adding new features.
The original Prussian Kriegsspiel involved opposing players being in different rooms, having information conveyed to them by an umpire (it must have been a lot of work for the umpire).
The wargames used in the Western Approaches to train WWII convoy commanders made players look through slots to restrict what they could see.
Computer wargames like 'Pike and Shot' often won't show you units unless they are visible to your troops. Also your control over units is limited (cavalry will often charge after defeated enemies of their own accord).
There is no time of day when you can hold a meeting that doesn't piss absolutely everyone off. There are only times when you piss one group off more than the other.
My assertion doesn't lean on "bad architecture", as I feel there are just different choices and tradeoffs. I do think you should oftentimes look for improvements in the tech you are working with instead of replacements. Replacing tech can, of course, be the correct choice. Often, it is a death knell for the people being replaced. We solve that at the societal level in ways that are different from how you solve it at the local, technical-choice level.
https://developer.apple.com/documentation/assetslibrary/alas...
Looks like maybe they've slowed down on their deprecations a bit:
https://developer.apple.com/documentation/ios-ipados-release...
https://developer.apple.com/documentation/ios-ipados-release...
It was rough 20 years ago with the Carbon to Cocoa transition, Objective-C to Swift, CoreGraphics to OpenGL to Metal, etc. Always a moving target, never cross-platform. It's all so opposite to how I would do things. I remember when the US national debt was $3 trillion in the 80s, now that's Apple's market cap. Makes it hard to focus and make rent these days when so many other people are rolling in dough.
For anyone curious, with AWS I get notices every few months about some core service or configuration option that's getting discontinued in 6 months. Last year it was launch configurations for ECS (maybe the EC2 portion) that had to be migrated to launch templates (can't recall exactly). A deploy failed before I could finish a large feature I had been working on, which caused me to drop everything, which led to overlapping merges in git and associated headaches that set us back weeks. I should have been ahead of that, but it couldn't have come at a worse time.
In some of them you have a single person who acts as commander of the whole fight and can give orders to each squad. But since only one person can be that, and many people want to be, I don't think they kept that feature for long.
But it's the kind of game where I think if you had a big group of friends with a chain of command and good communication you could easily win any match against an otherwise unorganized enemy, even if their individuals are better players.
It's one reason why I stopped playing: it's the kind of metagame I can't get into without dedicating tons of time and communicating with others. I just want to fly my ship and go brrt without fearing other players or having to cooperate with them.
The problem with real friction is that even if you did everything perfectly, orders may still not make it to the unit that has to execute them, or the unit may do something else for reasons neither of you foresaw, or the enemy forces you saw on the minimap are only half of what is actually there. Imagine if you were playing some shooter and a random 25% of the time your controller did not respond to inputs at all, and another random 25% of the time the inputs got reversed. That would be a super frustrating game to play.
I'd enjoy hearing any comments about this--true, false, true-but, etc.!
At least that's how I remember it...
https://sinovi.uk/articles/amazon-ec2-launch-configurations-...
My complaint is that all service providers should provide a migration wizard to convert to and from configurations that do basically the same thing.
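In the absence of a wizard, you end up writing the conversion yourself. A hedged sketch of the launch-configuration-to-launch-template case with boto3 (the resource names are hypothetical and the field mapping is deliberately partial; a real migration would also need user data, block devices, the IAM profile, etc.):

    import boto3

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")

    # Read the old launch configuration...
    lc = asg.describe_launch_configurations(
        LaunchConfigurationNames=["my-old-config"]  # hypothetical name
    )["LaunchConfigurations"][0]

    # ...and copy the overlapping fields into a new launch template.
    ec2.create_launch_template(
        LaunchTemplateName="my-new-template",       # hypothetical name
        LaunchTemplateData={
            "ImageId": lc["ImageId"],
            "InstanceType": lc["InstanceType"],
            "KeyName": lc["KeyName"],
            "SecurityGroupIds": lc["SecurityGroups"],  # group ids, in a VPC
        },
    )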
Frameworks like Laravel emphasize the importance of migrations for database schemas:
https://laravel.com/docs/migrations
I use that concept constantly in my work for backwards compatibility, but I basically never see it from service providers, which I find sad and somewhat irresponsible, or at least discourteous.
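The underlying idea is framework-independent: every schema change ships with an up() and a down(), so the database can be rolled forward and backward in step with the code that uses it. A minimal sketch with sqlite3 (the table and column are invented; DROP COLUMN needs SQLite 3.35+):

    import sqlite3

    def up(conn: sqlite3.Connection) -> None:
        conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

    def down(conn: sqlite3.Connection) -> None:
        # Older engines would need a table rebuild instead.
        conn.execute("ALTER TABLE users DROP COLUMN email")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
    up(conn)     # deploy the new code path
    down(conn)   # roll back without losing the working state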