
169 points | signa11
smodo ◴[] No.41875908[source]
I’m not very well versed in kernel development. But I am a Rust dev and have observed the discussion about Rust in Linux with interest… Having said that, this part of the article has me baffled:

>> implementing these features for a smart-pointer type with a malicious or broken Deref (the trait that lets a programmer dereference a value) implementation could break the guarantees Rust relies on to determine when objects can be moved in memory. (…) [In] keeping with Rust's commitment to ensuring safe code cannot cause memory-safety problems, the RFC also requires programmers to use unsafe (specifically, implementing an unsafe marker trait) as a promise that they've read the relevant documentation and are not going to break Pin.

To the uninformed this seems like crossing the very boundary that you wanted Rust to uphold? Yes it’s only an impl Trait but still… I can hear the C devs now. ‘We pinky promise to clean up after our mallocs too!’
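For anyone who hasn't run into an unsafe marker trait before, here's a rough sketch of the shape of the thing (the trait and pointer names below are made up for illustration, not the actual ones from the RFC). The trait has no methods, so implementing it does nothing except record a promise, and the `unsafe` keyword forces the author to make that promise explicitly:

    use std::ops::Deref;

    // Hypothetical marker trait in the spirit of the RFC: no methods,
    // just a contract about how Deref behaves.
    unsafe trait WellBehavedDeref: Deref {}

    struct MyBox<T>(Box<T>);

    impl<T> Deref for MyBox<T> {
        type Target = T;
        fn deref(&self) -> &T {
            &self.0 // no side effects, always the same referent
        }
    }

    // SAFETY: MyBox::deref is a plain field access, so code relying on
    // Pin's guarantees can trust it not to move the pointee around.
    unsafe impl<T> WellBehavedDeref for MyBox<T> {}

The compiler can't check the promise itself, but it can make sure somebody wrote it down.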

replies(7): >>41875965 #>>41876037 #>>41876088 #>>41876177 #>>41876213 #>>41876426 #>>41877004 #
foundry27 ◴[] No.41875965[source]
Rust’s whole premise of guaranteed memory safety through compile-time checks has always been undermined when confronted with the reality that certain foundational operations must still be implemented using unsafe. Inevitably folks concede that lower-level libraries will have these unsafe blocks and still expect higher-level code to trust them, and at that point we’ve essentially recreated the core paradigm of C: trust in the programmer’s diligence. Yeah, Rust makes this trust visible, but it doesn’t actually eliminate it in “hard” code.

The punchline here, so to speak, is that for all Rust’s claims to revolutionize safety, it simply(!) formalizes the same unwritten social contract C developers have been meandering along with for decades. The uniqueness boils down to “we still trust the devs, but at least now we’ve made them swear on it in writing”.
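To be clear about what that “swearing on it in writing” looks like in practice, here's a toy example (the names are mine, nothing kernel-specific): the unsafe block and its justification sit in one spot, everything above it is ordinary checked code, and the debate is over how much that actually buys you.

    // Safe API over an unsafe implementation detail. Callers never
    // write `unsafe`; the soundness argument lives next to the block.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: we just checked the slice is non-empty, so index 0
        // is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first_byte(b"hi"), Some(b'h'));
        assert_eq!(first_byte(b""), None);
    }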

replies(10): >>41876016 #>>41876042 #>>41876122 #>>41876128 #>>41876303 #>>41876330 #>>41876352 #>>41876459 #>>41876891 #>>41877732 #
Ar-Curunir ◴[] No.41876459[source]
This is nonsense. Just because some small parts of the code must be annotated as unsafe doesn’t mean that we’re suddenly back in C land. In comparison, with C the entire codebase is basically wrapped in one big unsafe. That difference is important, because in Rust you can focus your auditing and formal verification efforts on just those small unsafe blocks, whereas with C everything requires that same attention.

Furthermore, Rust doesn’t turn off all checks in unsafe, only certain ones.
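For instance (a quick sketch, nothing to do with the kernel code in question): the borrow checker and type checker keep running inside an unsafe block; what the block unlocks is a short list of extra operations, like dereferencing raw pointers or calling unsafe functions.

    fn main() {
        let mut v = vec![1, 2, 3];
        v.push(4);

        unsafe {
            // Still rejected if uncommented: the borrow checker keeps
            // running inside `unsafe`, so overlapping mutable borrows
            // are a compile error here just as they would be outside.
            // let a = &mut v;
            // let b = &mut v;
            // a.push(b[0]);

            // What `unsafe` actually unlocks in this block:
            // dereferencing a raw pointer.
            let p: *const i32 = v.as_ptr();
            println!("first element: {}", *p);
        }
    }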


replies(1): >>41876544 #
uecker ◴[] No.41876544{3}[source]
Only if you think correctness is just about memory safety can you "focus your auditing and formal verification efforts on just those small unsafe blocks". And this is a core problem with Rust: people think they can do this.
replies(2): >>41876573 #>>41877228 #
erik_seaberg ◴[] No.41876573{4}[source]
Memory and concurrency safety need to be the first steps, because how can you analyze results when the computer might not have executed your code correctly as written?