It frustrates me more than it should, I admit, that people always mention Rust when they talk about safety, but never Ada / SPARK. You want formal verification? Use Ada / SPARK. It has been battle-tested. It has been used for critical systems for a really long time now.
(And a compiler being formally verified and being able to write formally verified code are two different things.)
For example, if someone on GitHub sees that a project is written in Rust, they are automatically going to assume it is safe, incorrectly so. I do not blame them, though.
The concept of the borrow checker has been formally proven sound for a simplified version of Rust (the RustBelt project): https://people.mpi-sws.org/~dreyer/papers/rustbelt/paper.pdf - work has continued steadily in this area (e.g. see Tree Borrows)
There's a variety of tools that take Rust code, translate it into something a proof system understands, and then check that it matches a specification. AWS is leading a project that uses these to verify the standard library: https://aws.amazon.com/blogs/opensource/verify-the-safety-of...
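To give a feel for what that looks like in practice, here is a minimal sketch of a proof harness in the style of Kani, AWS's Rust model checker and one of the tools used for this kind of work. The function and the property below are my own illustration, not something taken from the standard-library verification project:

    /// A tiny function we want to verify: clamp a value into [lo, hi].
    /// (Hypothetical example for illustration only.)
    fn clamp_u32(x: u32, lo: u32, hi: u32) -> u32 {
        if x < lo { lo } else if x > hi { hi } else { x }
    }

    #[cfg(kani)]
    mod proofs {
        use super::*;

        // Kani explores all values of the nondeterministic inputs,
        // so a passing harness is a proof, not a test.
        #[kani::proof]
        fn clamp_stays_in_range() {
            let x: u32 = kani::any();
            let lo: u32 = kani::any();
            let hi: u32 = kani::any();
            kani::assume(lo <= hi); // precondition from the spec
            let y = clamp_u32(x, lo, hi);
            assert!(lo <= y && y <= hi);
        }
    }

Run with `cargo kani`; the assertion is checked for every possible input, which is exactly the "check that it matches a specification" step described above.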
That seems excessive and tedious.
Concretely: spatial and temporal memory safety are good things, and Rust achieves both. It’s not unique in this regard, nor is it unique in not having a formal definition.
https://stackoverflow.com/questions/75743030/is-there-a-spec...
It also strikes me as extraordinarily unlikely that any formal verification effort will use the existing specification, and not build their own (using their own formal language) as they go.
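To make the spatial/temporal distinction a couple of comments up concrete, here is a small Rust sketch of my own: spatial safety means out-of-bounds accesses are caught instead of reading adjacent memory, and temporal safety means you cannot use memory after it is gone.

    fn main() {
        let v = vec![1, 2, 3];

        // Spatial safety: an out-of-bounds access is caught (checked access
        // returns None; indexing would panic) rather than silently reading
        // whatever happens to sit next to the buffer, as in C.
        assert_eq!(v.get(10), None);
        // let _ = v[10]; // would panic, not corrupt memory

        // Temporal safety: the borrow checker rejects use-after-free at
        // compile time. The commented lines below do not compile:
        // let r = &v[0];
        // drop(v);
        // println!("{}", r); // error[E0505]: cannot move out of `v` because it is borrowed
    }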
Then the point is hypocritical.
Runtimes for safe programming languages have been implemented in unsafe languages since the dawn of safe programming languages, basically.
EDIT: I see now that you are the cigarette warning guy. In that case I don’t understand what this coy “I think” speculation is about when you made such a bizarre proclamation on this topic.
Formal verification is verifying a mathematical model of the program correct, according to certain mathematical correctness properties. That model may not actually represent the real world, or the correctness properties may not actually be the properties that you want to guarantee. Famously, there was the formally verified Java LinkedList implementation that was buggy due to the formal model not taking into account integer overflow.
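The same failure mode is easy to show directly. This is the classic midpoint-overflow bug rather than the LinkedList case itself, but it illustrates how a proof over mathematical integers can miss machine-integer overflow:

    // Over mathematical integers, (lo + hi) / 2 obviously lies in [lo, hi].
    // Over i32, the intermediate sum can overflow, so the "proved" property
    // does not hold for the real program.
    fn midpoint_naive(lo: i32, hi: i32) -> i32 {
        (lo + hi) / 2 // overflows (panics in debug, wraps in release) for large lo, hi
    }

    fn midpoint_safe(lo: i32, hi: i32) -> i32 {
        lo + (hi - lo) / 2 // no intermediate overflow when lo <= hi
    }

    fn main() {
        let lo = 1_500_000_000;
        let hi = 2_000_000_000;
        println!("{}", midpoint_safe(lo, hi)); // 1750000000
        // midpoint_naive(lo, hi) would not give a midpoint at all.
    }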
There are parts of the Rust language and standard library that have been formally verified. That helps provide confidence that the basic model, of having safety properties upheld by carefully written unsafe core functions, is sound.
But the big advantage of Rust, especially for everyday open source code that the world widely depends on, is that it strikes a good balance between being a language that people actually want to use and one that provides good practical safety guarantees.
Rust is far, far safer than C and C++. Most other memory-safe languages are too, but they trade that safety for lower performance, while Rust provides a level of performance similar to C and C++. And on top of that, it's a language that people actually want to use for writing a lot of core infrastructure.
As far as I know, I have never used any open source software written in Ada/SPARK. People just don't write it. The only reason they generally do is that they need to for functional safety reasons, in a high-criticality system, where they find it easier to get their software certified by writing it that way.
And even in safety-critical systems, the gold standard of verification is generally considered to be a restrictive subset of C, static checkers to enforce its rules, code review, and extensive testing done by independent teams on real hardware, with requirements-based tests that achieve 100% coverage at the MC/DC level.
Formal verification cannot possibly provide some of the safety guarantees that testing on real hardware, based on actual system-level requirements, can provide.
Formal verification is one tool in a toolbox. It is useful for providing certain guarantees. But it's less important than many other factors for the actual improvements in safety that Rust can provide; by actually being used in critical software that is exposed to potential network-based exploitation, a memory-safe language like Rust makes a much better foundation than C or C++, and more people will actually use it in such a context than Ada/SPARK.
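A tiny illustration of that model, purely my own sketch: a safe API whose invariant is maintained by the safe code around it, with the unsafe confined to one audited spot. Verifying the standard library largely means proving that this kind of SAFETY comment actually holds.

    /// A fixed-capacity stack; illustrative sketch only.
    pub struct TinyStack {
        buf: [u32; 8],
        len: usize, // invariant: len <= 8
    }

    impl TinyStack {
        pub fn new() -> Self {
            TinyStack { buf: [0; 8], len: 0 }
        }

        /// Safe API: the bounds check here is what upholds the invariant
        /// that the unsafe block below relies on.
        pub fn push(&mut self, x: u32) -> Result<(), u32> {
            if self.len == self.buf.len() {
                return Err(x);
            }
            // SAFETY: len < buf.len() was just checked, so the write is in bounds.
            unsafe { *self.buf.get_unchecked_mut(self.len) = x; }
            self.len += 1;
            Ok(())
        }
    }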
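For readers unfamiliar with MC/DC: it requires showing that each individual condition independently affects the outcome of a decision, which needs more test vectors than plain branch coverage but far fewer than exhaustive testing. A small sketch of my own (not taken from any standard):

    // Decision with three conditions: a && (b || c)
    fn decision(a: bool, b: bool, c: bool) -> bool {
        a && (b || c)
    }

    fn main() {
        // Branch coverage only needs the decision to be true once and false once.
        // MC/DC additionally needs, for each condition, a pair of tests where only
        // that condition changes and the outcome flips. Four vectors (instead of
        // all eight) are enough here:
        assert_eq!(decision(true,  true,  false), true);  // baseline
        assert_eq!(decision(false, true,  false), false); // only a changed vs. baseline: a matters
        assert_eq!(decision(true,  false, false), false); // only b changed vs. baseline: b matters
        assert_eq!(decision(true,  false, true),  true);  // only c changed vs. previous row: c matters
    }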
Maybe you should wear a sign on your torso: will make irrational points and then complain about persecution when that is pointed out. I don’t know. It’s just one more idea for when we decide to put all the anti-Rust people in camps.
Isn't that what overhype means? Also, no one is saying that Rust is unique in being overhyped. It is true of almost any language worth writing in, including C, Lisp, Python, Haskell, TypeScript, etc.
(If there's nothing unique here, it doesn't make sense to single any particular language out. But each language does have unique properties: Python is a great rapid development language, Rust offers zero-cost abstractions for memory safety, etc.)
That is like saying "I don't see much overhype about AI from machine learning engineers." I am an ML engineer, and like me, the great majority of ML engineers will tell you that there is certainly overhype about the field, while not engaging in it themselves. Which is not to say that the field isn't producing some really cool results.
> I've yet to see people (incorrectly) extend this to a claim that Rust solves all security problems, which would meet the definition of overhype.
I've seen plenty of questionable Rust rewrites justified as solving security problems.
> Python is a great rapid development language
Saying to a Lisper that Python is a great rapid development language is like selling Rust safety to an Ada person :)
Now that Rust is becoming more popular, it would be nice to be able to re-use these formally verified parsers in Rust, where your entire language is memory safe. The formally verified parsers can still be helpful, because the formal verification can ensure that you also won't crash (in safe Rust, you can still crash; you just won't be subject to arbitrary memory corruption).
But just using the C libraries from Rust is unsatisfactory: now you need to go through an unsafe interface, which introduces a potential place for new bugs. And there are existing C-to-Rust translators, but they generate unsafe Rust.
So this demonstrates a way to translate from C to safe Rust, though with constraints on the existing C code. It's useful both because it means you can translate some of these already formally verified libraries to safe Rust, and because the research could feed into later work on a more general tool, one that could translate more C code to safe Rust while falling back to unsafe in the cases it can't reason about.
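For example, even a trivially small C parser pulled in through FFI forces an unsafe block and a raw-pointer contract on the Rust side. The names below are hypothetical, just to show the shape of the boundary:

    use std::os::raw::{c_int, c_uchar};

    // Hypothetical verified C parser exposed over FFI.
    extern "C" {
        // int parse_header(const unsigned char *buf, int len);
        fn parse_header(buf: *const c_uchar, len: c_int) -> c_int;
    }

    fn parse(input: &[u8]) -> i32 {
        // SAFETY: we must *promise* that buf/len describe valid memory and that
        // the C code never reads out of bounds. The compiler cannot check any
        // of this; the guarantee comes only from the C library's own verification.
        unsafe { parse_header(input.as_ptr(), input.len() as c_int) }
    }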
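Roughly the kind of translation involved: C's pointer-plus-length idiom becomes a Rust slice, so the bounds invariant the C proof had to establish is now carried by the type. A hand-written before/after sketch, not output from the actual tool:

    // C original (shown as a comment):
    //
    //   // caller guarantees len is the size of buf
    //   uint32_t sum(const uint8_t *buf, size_t len) {
    //       uint32_t acc = 0;
    //       for (size_t i = 0; i < len; i++) acc += buf[i];
    //       return acc;
    //   }
    //
    // Safe Rust translation: the pointer/length pair becomes a slice, so
    // "every access stays in bounds" is enforced by the language rather than
    // by a proof obligation on the caller.
    fn sum(buf: &[u8]) -> u32 {
        buf.iter().map(|&b| u32::from(b)).sum()
    }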
Anyhow, not all academic work like this ends up being used practically in the real world, but some of it can be, or some of it can be put into practice by other tools later on. Rust came about that way; much of its reason for existence is trying to put decades of academic research into a practical, real-world usable language, since there was lots of academic work that had never really been used in industry as much as it should be.