Against Best Practices

(www.arp242.net)
279 points by ingve | 2 comments
chthonicdaemon ◴[] No.42172014[source]
Rules travel further than reasons.

The problem is that a lot of true things in the world are counter-intuitive. So insisting that all the rules "make sense" in an immediate way is clearly a non-starter. In the safety industry there are many examples of best practices that are bred from experience but end up being counter-intuitive to some. For instance, it might not make intuitive sense that a pilot who has gone through a take-off procedure thousands of times needs a checklist to remember all the steps, but we know that it actually helps.

It's hard because there is usually some information loss in summarisation, but we also have limited memory, so we can't really expect people to remember every case study that led to the distilled advice.

As a chemical engineer by training, though, I have constantly been amazed at how resistant software people are to the idea that their industry could benefit from the kind of standardisation that has improved my industry so much.

replies(2): >>42173230 #>>42176334 #
1. burnt-resistor ◴[] No.42173230[source]
It will never happen outside of limited industries because it would appear to be a loss of "freedom". I think the current situation creates an illusory, anarchic freedom of informality that sometimes leads to proprietary lock-in, vulnerabilities, bugs, incompatibility churn, poorly prioritized feature development, and a tyranny of chaos and tech debt.

There are too many languages, too many tools, too many (conflicting) conventions (especially ones designed by committee), and too many options.

Systems, tools, and components that rarely break compatibility and are formally verifiable far beyond the rigor of seL4, to the point of being (basically) free of (implementation) errors, would be far more valuable than tools that lack even basic testing or self-tests and lack digital signatures to prove chain-of-custody. Being able to model and prove a program or library so thoroughly that its behavior can be checked in depth by both "whitebox" and "blackbox" methods would let some code stand the test of time. Choosing a smaller number of standard languages, tools, and components would make it cheaper and easier to attempt this.
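
(Not from the thread, but as a rough illustration of the "blackbox" behavioral checking mentioned above: property-based testing treats a handful of properties as an executable specification and generates many inputs to check them against. The dedupe function, its properties, and the use of the hypothesis library below are illustrative assumptions, a minimal sketch rather than anything prescribed in the comment.)

    # Minimal sketch: properties as an executable "blackbox" specification,
    # checked with the hypothesis property-based testing library.
    from hypothesis import given, strategies as st

    def dedupe(items: list[int]) -> list[int]:
        """Remove duplicates while preserving first-seen order (toy example)."""
        seen: set[int] = set()
        out: list[int] = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    # Each property is checked against many generated inputs: a lightweight
    # stand-in for the much deeper formal verification the comment imagines.
    @given(st.lists(st.integers()))
    def test_output_has_no_duplicates(xs):
        out = dedupe(xs)
        assert len(out) == len(set(out))

    @given(st.lists(st.integers()))
    def test_idempotent(xs):
        assert dedupe(dedupe(xs)) == dedupe(xs)

    @given(st.lists(st.integers()))
    def test_preserves_membership(xs):
        assert set(dedupe(xs)) == set(xs)

(Run with pytest; if a property fails, hypothesis shrinks the generated input to a minimal counterexample.)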

Maybe in 100 years, out of necessity, there will be essentially one programming language for humans that dominates all others (a power-law distribution), and it will be some sort of formal behavioral model specification language from which an LLM generates the tests and machine code to implement, manage, and test against.

replies(1): >>42175042 #
2. BeFlatXIII ◴[] No.42175042[source]
I disagree slightly here. There may be one (1) dominant formal language that's used as the glue code that gets run on machines and verified, but it will have numerous front-end languages that compile into it, for ease of typing and abstraction/domain fit.
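
(Not from the thread: a minimal, hypothetical sketch of the "many front-ends, one verified core" idea above. Two toy surface syntaxes lower to the same tiny core expression form, so only the single core evaluator would need to be verified. The core language, syntaxes, and function names are invented for illustration.)

    # Minimal sketch: two front-end syntaxes compiling to one shared core.
    from dataclasses import dataclass

    # The shared core language: integer literals and binary addition only.
    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: "Lit | Add"
        right: "Lit | Add"

    def eval_core(expr) -> int:
        """The single evaluator that would be the target of verification."""
        if isinstance(expr, Lit):
            return expr.value
        return eval_core(expr.left) + eval_core(expr.right)

    # Front-end A: infix strings such as "1 + 2 + 3".
    def compile_infix(src: str):
        terms = [Lit(int(tok)) for tok in src.split("+")]
        expr = terms[0]
        for t in terms[1:]:
            expr = Add(expr, t)
        return expr

    # Front-end B: prefix lists such as ["+", 1, 2, 3].
    def compile_prefix(src: list):
        assert src[0] == "+"
        expr = Lit(src[1])
        for v in src[2:]:
            expr = Add(expr, Lit(v))
        return expr

    # Both front-ends lower to the same core, so verifying eval_core once
    # covers every surface language that targets it.
    assert eval_core(compile_infix("1 + 2 + 3")) == 6
    assert eval_core(compile_prefix(["+", 1, 2, 3])) == 6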