Honestly I think that's probably the correct way to write high reliability code.
Do you have any evidence for "probably"?
It is impossible for the author of a Simulink model to accidentally type `i > 0` when they meant `i >= 0`, for example. Any human who tells you they have never made this mistake is a liar.
Unless there was a second uncommanded acceleration problem with Toyotas, my understanding is that it was caused by poor mechanical design of the accelerator pedal, which allowed it to get stuck on floor mats.
In any case, when we're talking about safety critical control systems like avionics, it's better to abstract away the actual act of typing code into an editor, because it eliminates a potential source of errors. You verify the model at a higher level, and the code is produced in a deterministic manner.
See https://www.safetyresearch.net/toyota-unintended-acceleratio...
"I know for a fact that Italian cooks generate spaghetti, and the deceased's last meal contained spaghetti, therefore an Italian chef must have poisoned him"
The Simulink Coder tool is a piece of software. It is designed and implemented by humans. It will have bugs.
Autogenerated code is different from human-written code. It hits soft spots in C/C++ compilers.
For example, autogenerated code can have really huge switch statements. You know, larger than the 15-bit branch offset the compiler implementer thought was big enough to handle any switch statement any sane human would ever write? So now the jump lands backwards instead of on the correct case when the offset overflows.
I'm not saying that Simulink Coder + a C/C++ compiler is bad. It might be better than the "manual coding" options available. But it's not 100% bug free either.
Nobody said it was bug free, and this is a straw man argument of your own construction.
Using Autocode completely eliminates certain types of errors that human C programmers have continued to make for more than half a century.
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with autogeneration.
50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen push you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up.

The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand-written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs. "human-written code, human-written bugs, but an overall much simpler system."

And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems from the past decade that sit right on that edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.
That's a classic bias: comparing A and B, you show that B doesn't have some of A's flaws. If they are different systems, of course that's true. But it's equally true that A doesn't have some of B's flaws. That is, what flaws does Autocode have that humans don't?
The fantasy that machines are infallible - another (implicit) argument in this thread - is plain ignorance coming from any professional in technology.
The main flaw of autocode is that a human can't easily read and validate it, so you can't really use it as source code. In my experience, this is one of the biggest flaws of these types of systems. You have to version control the file for whatever proprietary graphical programming software generated the code in the first place, and as much as we like to complain about git, it looks like a miracle by comparison.
It's an interesting question and point, but those are two different things and there is no reason to think you'll get the same results. Why not compile from natural language, if that theory is true?
From https://news.ycombinator.com/item?id=45562815 :
> awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
From "Safe C++ proposal is not being continued" (2025) https://news.ycombinator.com/item?id=45237019 :
> Safe C++ draft: https://safecpp.org/draft.html
Also there are efforts to standardize safe Rust; rust-lang/fls, rustfoundation/safety-critical-rust-consortium
> How does what FLS enables compare to these [unfortunately discontinued] Safe C++ proposals?
I admit that's mostly philosophical. But I think saying 'C can autogenerate reliable assembly, therefore a specification can autogenerate reliable C' is also about two different problems.
Therein lies the clue. They wrote software that was simply unmaintainable. Autogenerated code isn't any better.