
205 points | michidk | 8 comments
1. dextrous
I am a C/C++ dev learning Rust on my own, and enjoying it. I am finally starting to enjoy the jiu jitsu match with the compiler/borrow-checker and the warm “my code is safe” afterglow … but I have a question for the more experienced Rust devs out there, particularly in light of the OP’s observation about “lots of unsafe” in the Rust embedded realm (which makes sense).

If your Rust project leans heavily on unsafe code and/or many libraries that use lots of unsafe, then aren't you fooling yourself to some degree, i.e., trusting that the unsafe code you write, or that written by the 10 other people whose unsafe libs you're using, is OK? Seems like that tosses some cold water on the warm afterglow.

2. danhau
Yes, safe Rust is only as safe as the underlying unsafe code is.

The power of unsafe is that it's opt-in, making the surface area of "dangerous" code smaller, more visible, and easier to reason about.

As long as the unsafe parts are safe, you can rest assured that the safe parts will be safe too.
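A rough sketch of what that opt-in looks like in practice (the function here is invented for illustration):

    // A safe wrapper: the unsafe block is confined to one spot, and the
    // surrounding check is what makes its precondition hold.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: we just checked that `bytes` is non-empty,
            // so index 0 is in bounds.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }

Callers only ever see the safe signature; auditing for memory bugs means auditing this one block, not the whole call graph.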

3. Ygg2
> If your Rust project leans heavily on unsafe code and/or many libraries that use lots of unsafe, then aren’t you fooling yourself to some degree

That's why every unsafe block needs a SAFETY comment.

Is using vec.get_unchecked(6) safe? No. Is it safe for a vector that will, under all circumstances (i.e., as an invariant), have exactly 64 elements? Yes.

As long as your SAFETY comment's invariant holds for all possible inputs to the safe function, that code is considered sound.
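Sketched out, with a made-up Block type carrying that invariant:

    /// Invariant: `data` always has exactly 64 elements.
    struct Block {
        data: Vec<u8>,
    }

    impl Block {
        fn new() -> Self {
            Block { data: vec![0; 64] }
        }

        fn seventh(&self) -> u8 {
            // SAFETY: `data` has exactly 64 elements by this type's
            // invariant, and 6 < 64, so the access is in bounds.
            unsafe { *self.data.get_unchecked(6) }
        }
    }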

4. iTokio
Another way to see the benefit of this approach is that if you have a memory violation, then you only have to look in the unsafe blocks.

So, yes: the fewer of them there are, the more you gain from it.

5. throwawaymaths
> As long as the unsafe parts are safe, you can rest assured that the safe parts will be safe too.

That is not true. It is possible to have two pieces of validated unsafe code that are "safe" in isolation but that create something unsafe when used in the same codebase. This is especially true in embedded contexts, where you are often writing code that touches fixed memory offsets and other shared globals like peripherals.
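A contrived, embedded-flavored sketch of that failure mode (the register address and module names are invented):

    // Hypothetical memory-mapped control register.
    const CTRL_REG: *mut u32 = 0x4000_0000 as *mut u32;

    mod driver_a {
        // SAFETY argument, in isolation: "we are the only writer to CTRL_REG".
        pub fn enable() {
            unsafe { super::CTRL_REG.write_volatile(0b01) }
        }
    }

    mod driver_b {
        // SAFETY argument, in isolation: "we are the only writer to CTRL_REG".
        pub fn enable() {
            unsafe { super::CTRL_REG.write_volatile(0b10) }
        }
    }

Each module's reasoning is fine on its own; linking both into one program silently breaks the "only writer" assumption of each.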

6. tialaramex
In some cases you might have the excuse that, well, the unsafe element did say on the tin not to do this. For example, if I use Bob's "I need exclusive control of GPIOs 2, 3 and 6" unsafe code and also Kate's "I need exclusive control of GPIOs 1, 2 and 4" unsafe code, then it's my fault: they both told me their requirements, and the requirements clash.

But in general this is specifically a bug in the unsafe code. The Rustonomicon is very clear that it's not the safe code's fault that your unsafe code doesn't work. In the scenario with conflicting libraries, I guess it's the fault of whoever linked the conflicting libraries, but it's definitely never the safe code.

7. inahga
> Another way to see the benefit of this approach is that if you have a memory violation, then you only have to look in the unsafe blocks.

Not really. Safety is non-local. It is possible to break unsafe code by feeding it inputs from safe Rust that don't uphold the invariants that make the unsafe code safe. So it's not enough to look in the unsafe blocks--you have to consider all the contexts that invoke the unsafe code.
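A minimal made-up sketch of that non-locality, where the bug that causes the out-of-bounds read contains no unsafe at all:

    pub struct Buf {
        data: Vec<u8>,
        // Invariant the unsafe block relies on: `idx < data.len()`.
        idx: usize,
    }

    impl Buf {
        pub fn current(&self) -> u8 {
            // SAFETY: relies on the invariant `idx < data.len()`.
            unsafe { *self.data.get_unchecked(self.idx) }
        }

        // Entirely safe code, yet it can break the invariant and
        // make `current` read out of bounds.
        pub fn set_idx(&mut self, idx: usize) {
            self.idx = idx; // missing bounds check -- this is the bug
        }
    }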

See https://doc.rust-lang.org/nomicon/working-with-unsafe.html, and https://notgull.net/cautionary-unsafe-tale/ for a practical example.

8. dannymi
>If your Rust project leans heavily on unsafe code and/or many libraries that use lots of unsafe, then aren't you fooling yourself to some degree, i.e., trusting that the unsafe code you write, or that written by the 10 other people whose unsafe libs you're using, is OK? Seems like that tosses some cold water on the warm afterglow.

It's true that you have to trust your dependencies (unsafe or not). Not needing to trust at all that developers know what they are doing was never something a programming language could provide. We can only carve out some specific properties that we can machine-check in a limited way.

There are limits on what a type system can do (Rice's theorem, Gödel's incompleteness theorem), and in addition there are limits on what a non-dependent type system can do.

Therefore, you either need unsafe (something that adds operations that the type system doesn't model) or you can't write some perfectly OK programs.
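The classic example is the standard library's slice::split_at_mut: the two halves really are disjoint, so the function is perfectly OK, but the borrow checker can't prove it. A simplified version of the (sound) implementation:

    use std::slice;

    fn split_at_mut(s: &mut [u32], mid: usize) -> (&mut [u32], &mut [u32]) {
        let len = s.len();
        let ptr = s.as_mut_ptr();
        assert!(mid <= len);
        // SAFETY: the ranges [0, mid) and [mid, len) don't overlap, so
        // handing out a &mut to each is sound -- but the type system
        // can't see that, hence the unsafe block.
        unsafe {
            (
                slice::from_raw_parts_mut(ptr, mid),
                slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }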

Basically, the Rust type system is a toy model of your computer's abilities and the domain you want to model. And so is any other type system. The type systems of systems languages at least have some inkling of the actual machine--which is not necessarily the case in non-systems languages.

Ask a computer engineer what he thinks about this toy model's misconceptions: that reading and writing the same location via the memory bus affect the same thing; that reading the same memory location twice in a row, when there's only one CPU, is guaranteed to give you the same value; that reading a memory location can't change it; that writing to some memory location can't automatically change some other aliased memory location; that writing a memory location from CPU 1 means CPU 2 can immediately read that new value; etc. I could go on (memory barriers, cache coherency, paging, ...).
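Concretely, the "reading the same location twice gives the same value" misconception is why MMIO code has to reach for volatile operations. A sketch (the status-register address here is invented):

    use core::ptr;

    // Hypothetical memory-mapped status register.
    const STATUS: *const u32 = 0x4000_0004 as *const u32;

    fn wait_ready() {
        loop {
            // Two reads of the "same" location can differ: the hardware
            // behind this address changes on its own. read_volatile stops
            // the compiler from assuming otherwise (e.g. hoisting the
            // load out of the loop).
            let status = unsafe { ptr::read_volatile(STATUS) };
            if status & 1 != 0 {
                break;
            }
        }
    }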

This is not specific to Rust.

I'm not sure why we are having new "unsafe" discussions lately. Java and .NET have unsafe as well. Didn't we already have that discussion around the year 2000, and didn't everyone make their peace with it? What changed? Are there new arguments?

If you want to test empirically whether the unsafe blocks are broken, run your program under Miri.
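For example, Miri catches this out-of-bounds read, which compiles fine and may even appear to "work" natively:

    fn main() {
        let v = vec![1u8];
        // UB: index 1 is out of bounds; Miri aborts with an error here.
        let x = unsafe { *v.get_unchecked(1) };
        println!("{x}");
    }

(Install with `rustup +nightly component add miri`, then run `cargo +nightly miri run`.)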

Now you could say that you could just make better and better type systems that encompass everything as it really is. To that I say: (1) you can't do that in principle; (2) if you could, humans wouldn't be able to use it practically anymore; and (3) it would be too much effort for something that only a tiny minority of programs needs in some places. The toy model is pretty good 95% of the time!