I don't find this argument compelling:
> We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
I don't buy the idea that (1) we can always easily identify unnecessary complexity and separate it from necessary complexity; (2) we can reduce the system to just the necessary complexity; and (3) the remaining necessary complexity is always low and easy to understand (i.e. not "incomprehensible to human reasoning", with few or no "implicit behaviors", so that we don't "need automated tools just to verify that" it "won't crash in obvious ways").
The idea that we can implement arbitrary systems to handle our complex requirements, while keeping the implementation complexity capped below a constant simply by not admitting any "unnecessary complexity", is not believable.
This is simply not a mature argument about software engineering and therefore not an effective argument against type checking.
> The problem isn’t that software is inherently more complex than hardware.
It positively is. The built electronic circuit closely mimics the schematic, and the behavior of the electronic circuit consists of local states that are confined to the circuit components. You have a capacitor and a neighboring resistor as symbols in the schematic? Then in the manufactured circuit you have the same: a capacitor component connected to a neighboring resistor component. Those things have a run-time behavior in response to the changing applied voltage, but that behavior is locally contained in those components. All we need in order to picture it is to take the changing quantities (the capacitor's charge, the current through it, the voltage drop across the resistor) and superimpose them on the components.
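To make "locally contained" concrete: for that RC pair driven by a step voltage V_s, the textbook first-order response (a standard result, my own illustration, not from the article) is:

```latex
V_C(t) = V_s\,(1 - e^{-t/RC}), \qquad i(t) = \frac{V_s}{R}\, e^{-t/RC}
```

Everything about the run-time behavior hangs off the two components and their values R and C; nothing elsewhere in the circuit needs to be consulted.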
Software source code is a schematic not for the compiled code, but for the run-time execution, and that execution has an explosive state space. What is one function in the source code can be invoked recursively and concurrently, so that there are thousands of simultaneous activations of it. That's like a schematic in which one resistor stands for a circuit with 50,000 resistors, and that circuit is constantly changing: first we have to work out what the circuit looks like at any given time t, then think about the values of the components, and only then about their currents and voltages. You might understand next to nothing about what is going on at run time from reading the code.
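A toy sketch of that multiplication, in shell (my illustration; the depth is arbitrary): one function definition whose concurrent recursion produces dozens of live activations at once:

```sh
# One function in the source "schematic"...
fanout() {
    if [ "$1" -lt 5 ]; then
        fanout $(($1 + 1)) &   # ...spawns two concurrent copies of itself,
        fanout $(($1 + 1)) &   # so at depth 5 roughly 60 activations
        wait                   # of the same function are alive at once.
    fi
}
fanout 0
```

Bump the depth and the same single definition becomes thousands of simultaneous activations; the source text never changed.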
Circuits have no loops, no recursion, no dynamic set data structures with changing contents, no multiple instantiation of anything. The closest thing you see to a loop in a schematic is something like dozens of connections annotated as a 'bus' and drawn as one wire. Some hardware is designed with programming-like languages like VHDL and whatnot that allow circuits with many repeating parts to be condensed; it's still nothing like software.
I mean, show me a compiler, web browser, or air traffic control system implemented entirely in hardware.
> The evidence for this alternative view is hiding in plain sight. UNIX pipelines routinely compose dozens of programs into complex workflows, yet they require no type checking at the transport layer.
Obligatory: where is the web browser, mobile OS, video game, heart-lung machine, CNC router, ... written with shell scripts and Unix pipes?
Even within the domain where they are applicable, Unix pipes do not have a great record of scalability, performance and reliability.
In some narrowly defined, self-contained data-processing application they can shine, especially against older languages; e.g. Doug McIlroy writing a tiny shell script to solve the same problem as Knuth's verbose Pascal solution (sketched below).
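For reference, McIlroy's pipeline (from his 1986 review of Knuth's literate word-count program; quoted from memory, so treat the details as approximate) was essentially:

```sh
tr -cs A-Za-z '\n' |
tr A-Z a-z |
sort |
uniq -c |
sort -rn |
sed ${1}q
```

Six filters in place of pages of Pascal. But note how narrowly scoped the problem is: text in, the $1 most frequent words out.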