A less baity title might be "Rust pitfalls: Runtime correctness beyond memory safety."
This regularly drives C++ programmers mad: the statement "C++ is all unsafe" is taken as some kind of hyperbole, attack or dogma, while the intent may well be to factually point out the lack of statically checked guarantees.
It is subtle but not inconsistent that strong static checks ("safe Rust") may still leave the possibility of runtime errors. So there is a legitimate, useful broader notion of "safety" where Rust's static checking is not enough. That's a bit hard to express in a title - "correctness" is not bad, but maybe a bit too strong.
You might be talking about "correct", and that's true: Rust generally favors correctness more than most other languages (e.g. being obstinate about turning a byte array into a file path, because not all file paths are made of byte arrays, or the myriad string types that denote their semantics).
[1] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html
So I agree with the above comment that the title could be better, but I also understand why the author gave it this title.
Containers like std::vector and smart pointers like std::unique_ptr seem to offer all of the same statically checked guarantees that Rust does.
I just do not see how Rust is a superior language compared to modern C++
There’s a great talk by Louis Brandy called “Curiously Recurring C++ Bugs at Facebook” [0] that covers this really well, along with std::map’s operator[] and some more tricky bugs. An interesting question to ask if you try to watch that talk is: How does Rust design around those bugs, and what trade offs does it make?
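To make the std::map point concrete (my own sketch, not code from the talk): C++'s operator[] silently inserts a default-constructed value for a missing key, whereas Rust's HashMap makes the missing case a value you have to handle, and default-insertion is an explicit opt-in via the entry API.

    use std::collections::HashMap;

    fn main() {
        let mut scores: HashMap<String, i32> = HashMap::new();

        // get() returns Option<&i32>; a missing key is a case you must handle,
        // not a silently inserted default.
        match scores.get("alice") {
            Some(v) => println!("alice: {v}"),
            None => println!("no entry for alice"),
        }

        // Explicit opt-in to "insert a default if missing", via the entry API.
        *scores.entry("alice".to_string()).or_insert(0) += 1;
        assert_eq!(scores["alice"], 1);
    }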
This results in an ecosystem where safety is opt-in, which means in practice most implementations are largely unsafe. Even if an individual developer wants to be proactive about safety, the ecosystem isn't there to support them to the same extent as in Rust. By contrast, safety is the defining feature of the Rust ecosystem. You can write code and the language and ecosystem support you in doing so, rather than being a barrier you have to fight back against.
In C++ (and C#, Java, Go and many other “memory safe languages”), it’s very easy to mess up multithreaded code. Bugs from multithreading are often insanely difficult to reproduce and debug. Rust’s safety guardrails make many of these bugs impossible.
This is also great for performance. C++ libraries have to decide whether it's better to be thread-safe (at a cost in performance) or to be thread-unsafe but faster. Lots of libraries are thread-safe "just in case", and you pay for this even when your program / variable is single-threaded. In Rust, because the compiler prevents these bugs, libraries are free to be non-thread-safe for better performance if they want, without worrying about downstream bugs.
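A minimal sketch of that "pay only when you need it" point (my example, not the parent's): Rc uses cheap non-atomic reference counting and the compiler refuses to send it across threads, while Arc opts in to the atomic, thread-safe version.

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Rc uses non-atomic reference counting: cheaper, but not thread-safe.
        let local = Rc::new(vec![1, 2, 3]);
        let _also_local = Rc::clone(&local); // fine on a single thread

        // Uncommenting this does not compile:
        // `Rc<Vec<i32>>` cannot be sent between threads safely (it is not `Send`).
        // thread::spawn(move || println!("{:?}", local));

        // Arc pays for atomic reference counting, and in exchange is Send + Sync.
        let shared = Arc::new(vec![1, 2, 3]);
        let handle = thread::spawn({
            let shared = Arc::clone(&shared);
            move || println!("{:?}", shared)
        });
        handle.join().unwrap();
    }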
It seems the bug you are flagging here is a null reference bug - I know Rust has Optional as a workaround for “null”
Are there any pitfalls in Rust when Optional does not return anything? Or does Optional close this bug altogether? I saw Optional pop up in Java to quiet down complaints about null pointer bugs, but I remained skeptical about whether it was better to design around the fact that something might never come into existence when it should have been initialized.
But this can be used with other enums too, and in those cases, having a zero NonZero would essentially transmute the enum into an unexpected variant, which may cause an invariant to break, thus potentially causing memory unsafety in whatever required that invariant.
Most often, the comments come from people who don’t even write much Rust. They either know just enough to be dangerous or they write other languages and feel like it’s a “gotcha” they can use against Rust.
By that standard anything and everything might be tainted as "unsafe", which is precisely GP's point. Whether the unsafety should be blamed on the outside code that's allowed to create a 0-valued NonZero<…> or on the code that requires this purported invariant in the first place is ultimately a matter of judgment, that people may freely disagree about.
The problem, rather, is that there's no implementation of checked iterators that's fast enough for release builds. That's largely a culture issue in C++ land; it could totally be done.
This is an issue with the C++ standardization process as much as with the language itself. AIUI when std::optional (and std::variant, which has similar issues) were defined, there was a push to get new syntax into the language itself that would’ve been similar to Rust’s match statement.
However, that never made it through the standardization process, so we ended up with “library variants” that are not safe in all circumstances.
Here’s one of the papers from that time, though there are many others arguing different sides: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p00...
The issue is that this could potentially allow creating a struct whose invariants are broken in safe rust. This breaks encapsulation, which means modules which use unsafe code (like `std::vec`) have no way to stop safe code from calling them with the invariants they rely on for safety broken. Let me give an example starting with an enum definition:
    // Assume std::vec has this definition
    struct Vec<T> {
        capacity: usize,
        length: usize,
        arena: *mut T,
    }

    enum Example {
        First {
            capacity: usize,
            length: usize,
            arena: usize,
            discriminator: NonZero<u8>,
        },
        Second {
            vec: Vec<u8>,
        },
    }
Now assume the compiler has used niche optimization so that if the byte corresponding to `discriminator` is 0, then the enum is `Example::Second`, while if the byte corresponding to `discriminator` is not 0, then the enum is `Example::First` with `discriminator` equal to its given non-zero value. Furthermore, assume that `Example::First`'s `capacity`, `length`, and `arena` fields are in the same position as the fields of the same name for `Example::Second.vec`. If we allow `fn NonZero::new_unchecked(u8) -> NonZero<u8>` to be a safe function, we can create an invalid Vec:

    fn main() {
        let evil = NonZero::new_unchecked(0);
        // We write this as an Example::First,
        // but it is read as an Example::Second
        // because discriminator == 0 and niche optimization
        let first = Example::First {
            capacity: 9001,
            length: 9001,
            arena: 0x20202020,
            discriminator: evil,
        };
        if let Example::Second { vec: mut bad_vec } = first {
            // If the layout of Example is as I described,
            // and no optimizations occur, we should end up in here.
            // This writes 255 to address 0x20202020
            bad_vec[0] = 255;
        }
    }
So if we allowed new_unchecked to be safe, then it would be impossible to write a sound definition of Vec.

I think Rust delivers on its safety promise.
For example if I write a C# function which takes a Goose, specifically a Goose, not a Goose? or similar - well, too bad the CLR says my C# function can be called by this obsolete BASIC code which has no idea what a Goose is, but it's OK because it passed null. If my code can't cope with a null? Too bad, runtime exception.
In real C# apps written by an in-house team this isn't an issue, Ollie may not be the world's best programmer but he's not going to figure out how to explicitly call this API with a null, he's going to be stopped by the C# compiler diagnostic saying it needs a Goose, and worst case he says "Hey tialaramex, why do I need a Goose?". But if you make stuff that's used by people you've never met it can be an issue.
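For contrast, a sketch of the Rust side (Goose is of course a made-up type here): a function taking &Goose has no null that safe code can smuggle in, and if absence is a real possibility the signature has to spell it Option<&Goose>.

    struct Goose {
        name: String,
    }

    // A &Goose is statically guaranteed to point at a Goose; there is no
    // null to smuggle in from safe code.
    fn honk(goose: &Goose) {
        println!("{} says honk", goose.name);
    }

    // If absence is a real possibility, the signature has to say so.
    fn maybe_honk(goose: Option<&Goose>) {
        match goose {
            Some(g) => honk(g),
            None => println!("no goose, no honk"),
        }
    }

    fn main() {
        let gertie = Goose { name: "Gertie".to_string() };
        honk(&gertie);
        maybe_honk(None);
    }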
I feel people took the comparison of Rust to C and extrapolated to C++, which is blatantly disingenuous.
That's actually still no different in Rust; if you try, you can pass a 0 value to a function that only accepts a reference (i.e. a non-zero pointer), be it via unsafe, assembly, or whatever.
Disagreeing with another comment on this thread, this isn't a matter of judgement around "whose bug is it? Should the callee check for null, or the caller?". Rust's win is in clearly articulating that the API takes non-zero, so the caller is buggy.
As you mention, it can still be an issue, but there should be no uncertainty around whose mistake it is.
(Basically, the 'newer' C++ features do help a little with memory safety, but it's still fairly easy to trip up even if you restrict your own code from 'dangerous' operations. It's not at all obvious that a useful memory-safe subset of C++ exists. Even if you were to re-write the standard library to correct previous mistakes, it seems likely you would still need something like the borrow checker once you step beyond the surface level).
However, when speaking about the Rust language in general, all half-decent Rust developers specify that it's about memory safety. Even the Rust language homepage has only two instances of the word - 'memory-safety' and 'thread-safety'. The accusations of sleaziness and false claims are disingenuous at best.
The Rust developers I meet are more interested in showing off their creations than in evangelizing the language. Even those on dedicated Rust forums are generally very receptive to other languages - you can see that in action on topics like goreleaser or Zig's comptime.
And while you have already dismissed the other commenter's experience of finding Rust nicer than C++ to program in, I would like to add that I share their experience. I have nothing against C++, and I would like to relearn it so that I can contribute to some projects I like. But the reason why I started with Rust in 2013 was the memory-safety issues I was facing with C++. There are features in Rust that I find surprisingly pleasant, even with 6 additional years of experience in Python. Your opinion that Rust is unpleasant to the programmer is not universal, and its detractions are not nonsense.
I appreciate the difficulty of learning Rust - especially getting past the stage of fighting the borrow checker. That's the reason why I don't promote Rust for immediate projects. However, I feel that the knowledge required to get past that stage is essential even for correct C and C++. Rust was easy for me to get started in because of my background in digital electronics, C and C++. But once you get past that peak, Rust is full of very elegant abstractions that are similar to what's seen in Python. I know it works because I have trained JS and Python developers in Rust, and their feedback corroborates those assumptions about learning Rust.
> ... with all the other features that create the "if it compiles it probably works" experience
While it's true that Rust's core safety feature is almost exclusively about memory safety, I think it contributes more to the overall safety of the program.
My professional background is more in electronics than in software. So when the Rust borrow checker complains, I tend to map its complaints to nuances of the hardware and seek work-arounds for those problems. Those work-arounds often turn out to be better restructurings of the code, with proper data isolation. While that may seem like hard work in the beginning, it's often better in the end because of the clarity and modularity it contributes to the code.
Rust won't eliminate logical bugs or runtime bugs from careless coding. But it does encourage better coding practices. In addition, the strict, but expressive type system eliminates more bugs by encoding some extra constraints that are verified at compile time. (Yes, there are other languages that do this better).
And while it is not guaranteed, I find Rust programs just work once they compile more often than in the other languages I know. And the memory-safety system has a huge role in that experience.
Make those threads access external resources simultaneously, or memory mapped to external writers, and there is no support from the Rust type system.
You can still create a mess from logical race conditions, deadlocks and similar bugs, but you won't get segfaults because after the tenth iteration you forgot to diligently manage the mutex.
Personally I feel that in Rust I can mostly reason locally, compared to, say, Go, where I need to understand a global context whenever I touch multithreaded code.
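A small sketch of what "can't forget the mutex" looks like in practice (my example): the data lives inside the Mutex, so the only way to reach it is through lock().

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // The counter lives *inside* the mutex; there is no way to reach it
        // without going through lock().
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        // The guard unlocks automatically when it goes out of scope.
                        *counter.lock().unwrap() += 1;
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4_000);
    }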
Managing something like that is a design decision of the software being implemented, not a responsibility of the language itself.
At least that's my understanding of the situation. Happy to be corrected though.
It's not, though. NonZero<T> has an invariant that a zero value is undefined behavior. Therefore, any API which allows for the ability to create one must be unsafe. This is a very straightforward case.
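Concretely, this is how the standard library draws that line for the NonZero types (shown here with NonZeroU8): the checked constructor is safe and returns an Option, while the unchecked one is unsafe precisely because a zero value is undefined behavior.

    use std::num::NonZeroU8;

    fn main() {
        // Safe constructor: the zero case is handled by returning None.
        assert!(NonZeroU8::new(0).is_none());
        let five = NonZeroU8::new(5).unwrap();
        assert_eq!(five.get(), 5);

        // Unchecked constructor: the caller promises the value is non-zero,
        // so it must be called inside an unsafe block.
        let six = unsafe { NonZeroU8::new_unchecked(6) };
        assert_eq!(six.get(), 6);
    }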
    #include <iostream>
    #include <memory>

    int main() {
        std::unique_ptr<int> null_ptr;
        std::cout << *null_ptr << std::endl; // Undefined behavior
    }
Clang 20 compiles this code with `-std=c++23 -Wall -Werror`. If you add -fsanitize=undefined, it will print

    ==1==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55589736d8ea bp 0x7ffe04a94920 sp 0x7ffe04a948d0 T1)

or similar.

So this is actually why "no null, but optional types" is such a nice spot in the programming language design space. Because by default, you are making sure it "should have been initialized," that is, in Rust:
    struct Point {
        x: i32,
        y: i32,
    }

You know that x and y can never be null. You can't construct a Point without those numbers existing. By contrast, here's a Point where they could be:
    struct Point {
        x: Option<i32>,
        y: Option<i32>,
    }

You know by looking at the type if it's ever possible for something to be missing or not.

> Are there any pitfalls in Rust when Optional does not return anything?
So, Rust will require you to handle both cases. For example:
    let x: Option<i32> = Some(5); // adding the type for clarity
    dbg!(x + 7); // try to debug print the result
This will give you a compile-time error:

    error[E0369]: cannot add `{integer}` to `Option<i32>`
     --> src/main.rs:4:12
      |
    4 |     dbg!(x + 7); // try to debug print the result
      |          - ^ - {integer}
      |          |
      |          Option<i32>
      |
    note: the foreign item type `Option<i32>` doesn't implement `Add<{integer}>`
It's not so much "pitfalls" exactly, but you can choose to do the same thing you'd get in a language with null: you can choose not to handle that case:

    let x: Option<i32> = Some(5); // adding the type for clarity
    let result = match x {
        Some(num) => num + 7,
        None => panic!("we don't have a number"),
    };
    dbg!(result); // try to debug print the result
This will successfully print, but if we change `x` to `None`, we'll get a panic, and our current thread dies.

Because this pattern is useful, there's a method on Option called `unwrap()` that does this:

    let result = x.unwrap();
And so, you can argue that Rust doesn't truly force you to do something different here. It forces you to make an active choice, to handle it or not to handle it, and in what way. Another option, for example, is to return a default value. Here it is written out, and then with the convenience method:

    let result = match x {
        Some(num) => num + 7,
        None => 0,
    };

    let result = x.unwrap_or(0);

And you have other choices, too. These are just two examples.

--------------
But to go back to the type thing for a bit, knowing statically you don't have any nulls allows you to do what some dynamic language fans call "confident coding," that is, you don't always need to be checking if something is null: you already know it isn't! This makes code more clear, and more robust.
If you take this strategy to its logical end, you arrive at "parse, don't validate," which uses Haskell examples but applies here too: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
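A tiny Rust rendition of that idea (my sketch, not code from the linked post): validate once at the boundary, encode the result in a type, and the rest of the code never has to re-check.

    // A newtype whose only constructor enforces the invariant "non-empty".
    struct NonEmptyName(String);

    impl NonEmptyName {
        fn parse(input: &str) -> Option<NonEmptyName> {
            let trimmed = input.trim();
            if trimmed.is_empty() {
                None
            } else {
                Some(NonEmptyName(trimmed.to_string()))
            }
        }
    }

    // Downstream code takes the parsed type and never re-validates.
    fn greet(name: &NonEmptyName) {
        println!("hello, {}", name.0);
    }

    fn main() {
        match NonEmptyName::parse("  Ada  ") {
            Some(name) => greet(&name),
            None => eprintln!("a name is required"),
        }
    }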
OTOH with Rust you'd have to violate its safety guarantees, which if I understand correctly triggers UB.
Yes, your parent's example would be UB, and require unsafe.
I don’t think that’s true.
External thread-unsafe resources like that are similar in a way to external C libraries: they’re sort of unsafe by default. It’s possible to misuse them to violate rust’s safe memory guarantees. But it’s usually also possible to create safe struct / API wrappers around them which prevent misuse from safe code. If you model an external, thread-unsafe resource as a struct that isn’t Send / Sync then you’re forced to use the appropriate threading primitives to interact with the resource from multiple threads. When you use it like that, the type system can be a great help. I think the same trick can often be done for memory mapped resources - but it might come down to the specifics.
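A hedged sketch of that wrapping trick (the resource and handle names are made up): a raw handle wrapped in a type that is deliberately not Send/Sync cannot be moved to or shared with another thread from safe code.

    use std::marker::PhantomData;

    // Pretend this came from a thread-unsafe C library.
    type RawHandle = *mut std::ffi::c_void;

    // The PhantomData<*mut ()> makes this type neither Send nor Sync,
    // so safe code cannot move or share it across threads.
    struct DeviceHandle {
        raw: RawHandle,
        _not_thread_safe: PhantomData<*mut ()>,
    }

    impl DeviceHandle {
        fn poke(&mut self) {
            // ... call into the C library with self.raw ...
            let _ = self.raw;
        }
    }

    fn main() {
        let mut dev = DeviceHandle {
            raw: std::ptr::null_mut(),
            _not_thread_safe: PhantomData,
        };
        dev.poke();

        // This would not compile: `*mut ()` (and hence DeviceHandle) is not Send.
        // std::thread::spawn(move || dev.poke());
    }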
If you disagree, I’d love to see an example.
You can control safety as much as you like on the Rust side; there is no way to validate that the data coming into the process's memory doesn't get corrupted by the other side while it is being read from the Rust side.
Unless access is built in a way that all parties accessing the resource have to play by the same validation rules before writing into it, via OS IPC resources like shared mutexes, semaphores, and critical sections.
The kind of typical readers-writers algorithms in distributed computing.