
Pitfalls of Safe Rust

(corrode.dev)
168 points by pjmlp | 65 comments
nerdile ◴[] No.43603402[source]
Title is slightly misleading but the content is good. It's the "Safe Rust" in the title that's weird to me. These apply to Rust altogether; you don't avoid them by writing unsafe Rust code. They also aren't unique to Rust.

A less baity title might be "Rust pitfalls: Runtime correctness beyond memory safety."

replies(1): >>43603739 #
1. burakemir ◴[] No.43603739[source]
It is consistent with the way the Rust community uses "safe": as "passes static checks and thus protects from many runtime errors."

This regularly drives C++ programmers mad: the statement "C++ is all unsafe" is taken as some kind of hyperbole, attack or dogma, while the intent may well be to factually point out the lack of statically checked guarantees.

It is subtle but not inconsistent that strong static checks ("safe Rust") may still leave the possibility of runtime errors. So there is a legitimate, useful broader notion of "safety" where Rust's static checking is not enough. That's a bit hard to express in a title - "correctness" is not bad, but maybe a bit too strong.

replies(5): >>43603865 #>>43603876 #>>43603929 #>>43604918 #>>43605986 #
2. whytevuhuni ◴[] No.43603865[source]
No, the Rust community almost universally understands "safe" as referring to memory safety, as per Rust's documentation, and especially the unsafe book, aka Rustonomicon [1]. In that regard, Safe Rust is safe, Unsafe Rust is unsafe, and C++ is also unsafe. I don't think anyone is saying "C++ is all unsafe."

You might be talking about "correct", and that's true, Rust generally favors correctness more than most other languages (e.g. Rust being obstinate about turning a byte array into a file path, because not all file paths are made of byte arrays, or e.g. the myriad string types to denote their semantics).

[1] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html
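That correctness bias is visible right in the standard library: converting a `Path` to `&str` is fallible, because paths are not guaranteed to be valid UTF-8. A minimal sketch (my own example, not from the comment):

```rust
use std::path::Path;

fn main() {
    let p = Path::new("notes.txt");
    // A Path need not be valid UTF-8, so to_str() returns Option<&str>,
    // forcing the caller to handle the non-UTF-8 case explicitly.
    match p.to_str() {
        Some(s) => println!("valid UTF-8 path: {s}"),
        None => println!("path is not valid UTF-8"),
    }
}
```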

replies(3): >>43604067 #>>43604190 #>>43604779 #
3. quotemstr ◴[] No.43603876[source]
Safe Rust code doesn't have accidental remote code execution. C++ often does. C++ people need to stop pretending that "safety" is some nebulous and ill-defined thing. Everyone, even C++ people, knows perfectly damn well what it means. C++ people are just miffed that Rust built it while they slept.
replies(2): >>43604117 #>>43604960 #
4. NoTeslaThrow ◴[] No.43603929[source]
If English had static checks, this kind of runtime pedantry would be unnecessary. Sometimes it's nice to devote part of your brain to productivity rather than checking coherence.
replies(1): >>43604060 #
5. pjmlp ◴[] No.43604067[source]
Mostly, there is a subculture that promotes tainting everything that could be used incorrectly as unsafe, instead of only memory-safety-related operations.
replies(2): >>43604325 #>>43605297 #
6. surajrmal ◴[] No.43604117[source]
Accidental remote code execution isn't limited to just memory safety bugs. I'm a huge rust fan but it's not good to oversell things. It's okay to be humble.
replies(1): >>43604340 #
7. brundolf ◴[] No.43604190[source]
Formally the team/docs are very clear, but I think many users of Rust miss that nuance and lump memory safety together with all the other features that create the "if it compiles it probably works" experience

So I agree with the above comment that the title could be better, but I also understand why the author gave it this title

replies(1): >>43608258 #
8. dymk ◴[] No.43604325{3}[source]
That subculture is called “people who haven’t read the docs”, and I don’t see why anyone would give a whole lot of weight to their opinion on what technical terms mean
replies(3): >>43604715 #>>43605171 #>>43606488 #
9. dymk ◴[] No.43604340{3}[source]
RCEs are almost exclusively due to buffer overruns; sure, there are examples where that's not the case, but it's not really an exaggeration or hyperbole when you're comparing it to C/C++.
replies(1): >>43604711 #
10. thayne ◴[] No.43604711{4}[source]
Almost exclusively isn't the same as exclusively.

Notably the log4shell[1] vulnerability wasn't due to buffer overruns, and happened in a memory safe language.

[1]: https://en.m.wikipedia.org/wiki/Log4Shell

replies(2): >>43605126 #>>43605170 #
11. arccy ◴[] No.43604715{4}[source]
I don't see why people would drop the "memory" part of "memory safe" and just promote the false advertising of "safe rust"
replies(1): >>43605178 #
12. ampere22 ◴[] No.43604779[source]
If a C++ developer decides to use purely containers and smart pointers when starting a new project, how are they going to develop unsafe code?

Containers like std::vector and smart pointers like std::unique_ptr seem to offer all of the same statically checked guarantees that Rust does.

I just do not see how Rust is a superior language compared to modern C++

replies(5): >>43604855 #>>43604887 #>>43604895 #>>43607240 #>>43612736 #
13. criddell ◴[] No.43604855{3}[source]
C++ devs need to understand the difference between:

   Vec1[0];
   Vec1.at(0);
Even the at method isn’t statically checked. If you want static checking, you probably need to use std::array.
replies(1): >>43608672 #
14. ddulaney ◴[] No.43604887{3}[source]
Unfortunately, operator[] on std::vector is inherently unsafe. You can potentially try to ban it (using at() instead), but that has its own problems.

There’s a great talk by Louis Brandy called “Curiously Recurring C++ Bugs at Facebook” [0] that covers this really well, along with std::map’s operator[] and some more tricky bugs. An interesting question to ask if you try to watch that talk is: How does Rust design around those bugs, and what trade offs does it make?

[0]: https://m.youtube.com/watch?v=lkgszkPnV8g
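For comparison, a sketch (my example, not from the talk) of how Rust designs around those two bugs: out-of-range `Vec` access is a checked, defined failure, and `HashMap` lookups never insert implicitly the way `std::map::operator[]` does.

```rust
use std::collections::HashMap;

fn main() {
    let v = vec![1, 2, 3];
    // Out-of-bounds access is a defined, checked failure, not UB:
    // v[10] would panic; get() returns an Option instead.
    assert_eq!(v.get(10), None);

    let mut m: HashMap<&str, i32> = HashMap::new();
    // Unlike std::map::operator[], a read-only lookup never inserts:
    assert_eq!(m.get("missing"), None);
    assert!(m.is_empty());
    // Insertion must be explicit:
    m.insert("present", 1);
    assert_eq!(m["present"], 1);
}
```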

replies(1): >>43605244 #
15. phoenk ◴[] No.43604895{3}[source]
The commonly given response to this question is two-fold, and both parts have a similar root cause: smart pointers and "safety" being bolted-on features developed decades after the fact. The first part is the standard library itself. You can put your data in a vec for instance, but if you want to iterate, the standard library gives you back a regular pointer that can be dereferenced unchecked, and can be invalidated while still held in the event of a mutation. The second part is third party libraries. You may be diligent about managing memory with smart pointers, but odds are any library you might use probably wants a dumb pointer, and whether or not it assumes responsibility for freeing that pointer later is at best documented in natural language.

This results in an ecosystem where safety is opt-in, which means in practice most implementations are largely unsafe. Even if an individual developer wants to be proactive about safety, the ecosystem isn't there to support them to the same extent as in Rust. By contrast, safety is the defining feature of the Rust ecosystem. You can write code and the language and ecosystem support you in doing so, rather than being a barrier you have to fight against.

replies(2): >>43604997 #>>43605386 #
16. bigstrat2003 ◴[] No.43604918[source]
The problem with the title is that the phrase "pitfalls of safe rust" implies that these pitfalls are unique to, or made worse by, safe rust. But they aren't. They are challenges in any programming language, which are no worse in rust than elsewhere.

It's like if I wrote an article "pitfalls of Kevlar vests" which talked about how they don't protect you from being shot in the head. It's technically correct, but misleading.

17. yjftsjthsd-h ◴[] No.43604960[source]
Research I've seen seems to say that 70-80% of vulnerabilities come from memory safety problems[0]. Eliminating those is of course a huge improvement, but is rust doing something to kill the other 20-30%? Or is there something about RCE that makes it the exclusive domain of memory safety problems?

[0] For some reason I'm having trouble finding primary sources, but it's at least referenced in ex. https://security.googleblog.com/2024/09/eliminating-memory-s...

replies(1): >>43605450 #
18. josephg ◴[] No.43604997{4}[source]
Yep. Safe rust also protects you from UB resulting from incorrect multi-threaded code.

In C++ (and C#, Java, Go and many other “memory safe languages”), it’s very easy to mess up multithreaded code. Bugs from multithreading are often insanely difficult to reproduce and debug. Rust’s safety guardrails make many of these bugs impossible.

This is also great for performance. C++ libraries have to decide whether it’s better to be thread safe (at a cost of performance) or to be thread-unsafe but faster. Lots of libraries are thread safe “just in case”. And you pay for this even when your program / variable is single threaded. In rust, because the compiler prevents these bugs, libraries are free to be non-threadsafe for better performance if they want - without worrying about downstream bugs.
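A minimal sketch of what those guardrails look like in practice (my example, not the commenter's): shared mutable state has to go through a synchronization type such as `Mutex` before the compiler will let it cross threads.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A plain `&mut i32` shared across threads would not compile; the
    // compiler demands a thread-safe wrapper such as Arc<Mutex<_>>.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    println!("final count: {total}"); // always 8000, never a torn update
    assert_eq!(total, 8000);
}
```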

replies(1): >>43606061 #
19. josephg ◴[] No.43605126{5}[source]
The recent PostgreSQL SQL injection bug was similar. It happened because nobody was checking whether a UTF-8 string was valid. Postgres's protections against SQL injection assumed that whatever software passed it a query string had already checked that the string was valid UTF-8, but in some languages, this check was never being performed.

This sort of bug is still possible in rust. (Although this particular bug is probably impossible - since safe rust checks UTF8 string validity at the point of creation).

This is one article about it - there was a better write up somewhere but I can’t find it now: https://www.rapid7.com/blog/post/2025/02/13/cve-2025-1094-po...

Rust’s static memory protection does still protect you against most RCE bugs. Most is not all. But that’s still a massive reduction in security vulnerabilities compared to C or C++.
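For illustration (my sketch, not the write-up's code): safe Rust performs that validity check at the point a `String` is constructed from raw bytes.

```rust
fn main() {
    // 0xC3 opens a two-byte UTF-8 sequence, but 0x28 is not a valid
    // continuation byte, so this byte sequence is invalid UTF-8.
    let bytes = vec![0xC3, 0x28];
    match String::from_utf8(bytes) {
        Ok(s) => println!("valid: {s}"),
        Err(e) => println!("rejected at construction: {e}"),
    }
}
```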

20. FreakLegion ◴[] No.43605170{5}[source]
In fact "exclusively" doesn't belong in the statement at all. A very small number of successful RCE attacks use exploits at all, and of those, most target (often simple command) injection vulnerabilities like Log4Shell.

If you think back to the big breaches over the last five years, though -- SolarWinds, Colonial Pipeline, Uber, Okta (and through them Cloudflare), Change Healthcare, etc. -- all of these were basic account takeovers.

To the extent that anyone has to choose between investing in "safe" code and investing in IT hygiene, the correct answer today is IT hygiene.

replies(1): >>43605676 #
21. pkhuong ◴[] No.43605171{4}[source]
Someone tell that to the standard library. No memory safety involved in non-zero numbers https://doc.rust-lang.org/std/num/struct.NonZero.html#tymeth...
replies(1): >>43605264 #
22. an_ko ◴[] No.43605178{5}[source]
It sounds like you should read the docs. It's just a subject-specific abbreviation, not an advertising trick.
replies(1): >>43605564 #
23. ampere22 ◴[] No.43605244{4}[source]
Thank you for sharing. Seems I still have more to learn!

It seems the bug you are flagging here is a null reference bug - I know Rust has Optional as a workaround for “null”

Are there any pitfalls in Rust when Optional does not return anything? Or does Optional close this bug altogether? I saw Optional pop up in Java to quiet down complaints on null pointer bugs but remained skeptical whether or not it was better to design around the fact that there could be the absence of “something” coming into existence when it should have been initialized

replies(3): >>43605404 #>>43606020 #>>43612881 #
24. whytevuhuni ◴[] No.43605264{5}[source]
There is, since the zero is used as a niche value optimisation for enums, so that Option<NonZero<u32>> occupies the same amount of memory as u32.

But this can be used with other enums too, and in those cases, having a zero NonZero would essentially transmute the enum into an unexpected variant, which may cause an invariant to break, thus potentially causing memory unsafety in whatever required that invariant.
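The layout claim is easy to verify (a small sketch of my own):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // The forbidden zero value is reused as the discriminant ("niche"),
    // so Option<NonZeroU32> needs no extra tag byte.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
    // A plain Option<u32> has no niche and must store a separate tag.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
}
```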

replies(1): >>43605313 #
25. Aurornis ◴[] No.43605297{3}[source]
I see this subculture far more in online forums than with fellow Rust developers.

Most often, the comments come from people who don’t even write much Rust. They either know just enough to be dangerous or they write other languages and feel like it’s a “gotcha” they can use against Rust.

26. zozbot234 ◴[] No.43605313{6}[source]
> which may cause an invariant to break, thus potentially causing memory unsafety in whatever required that invariant

By that standard anything and everything might be tainted as "unsafe", which is precisely GP's point. Whether the unsafety should be blamed on the outside code that's allowed to create a 0-valued NonZero<…> or on the code that requires this purported invariant in the first place is ultimately a matter of judgment, that people may freely disagree about.

replies(3): >>43606286 #>>43607183 #>>43612667 #
27. int_19h ◴[] No.43605386{4}[source]
The standard library doesn't give you a regular pointer, though (unless you specifically ask for that). It gives you an iterator, which is pointer-like, but exists precisely so that other behaviors can be layered. There's no reason why such an iterator can't do bounds checking etc, and, indeed, in most C++ implementations around, iterators do make such checks in debug builds.

The problem, rather, is that there's no implementation of checked iterators that's fast enough for release build. That's largely a culture issue in C++ land; it could totally be done.

replies(1): >>43608657 #
28. int_19h ◴[] No.43605404{5}[source]
It's not so much Optional that deals with the bug. It's the fact that you can't just use a value that could possibly be null in a way that would break at runtime if it is null - the type system won't allow you, forcing an explicit check. Different languages do this in different ways - e.g. in C# and TypeScript you still have null, but references are designated as nullable or non-nullable - and an explicit comparison to null changes the type of the corresponding variable to indicate that it's not null.
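A minimal sketch of that forced check in Rust (my example):

```rust
fn main() {
    let maybe: Option<i32> = Some(5);
    // The type system won't let you use the i32 directly; both variants
    // must be handled before the value can be touched. Omitting the
    // None arm is a compile error, not a runtime surprise.
    match maybe {
        Some(n) => println!("got {n}"),
        None => println!("nothing here"),
    }
}
```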
replies(1): >>43606485 #
29. Xylakant ◴[] No.43605450{3}[source]
Rust also provides guarantees that go beyond mere memory safety. You get data-race safety as well, which avoids certain kinds of concurrency issues. You also get type safety, which is a step up when it comes to parsing untrusted input, at least compared to C for example. If untrusted input can be parsed into your expected type system, it's more likely not to cause harm by confusing the program about what's in the variables. Rust doesn't straight up eliminate all sources of error, but it makes major strides forward in areas that go beyond mere memory safety.
30. arccy ◴[] No.43605564{6}[source]
but it is false advertising when it's used all over the internet with: rust is safe! telling the whole world to rtfm for your co-opting of the generic word "safe" is like advertisers telling you to read the fine print: a sleazy tactic.
replies(1): >>43607855 #
31. surajrmal ◴[] No.43605676{6}[source]
Can you back up your "very small number" with some data? I don't think it lines up with my own experience here. It's really not an either-or matter. Good security requires a multifaceted approach. Memory safety is definitely a worthwhile investment.
replies(1): >>43606732 #
32. antonvs ◴[] No.43605986[source]
> This regularly drives C++ programmers mad

I thought the C++ language did that.

replies(1): >>43606483 #
33. ddulaney ◴[] No.43606020{5}[source]
Rust’s Optional does close this altogether, yes. All (non-unsafe) users of Optional are required to have some defined behavior in both cases. This is enforced by the language in the match statement, and most of the “member functions” on Optional use match under the hood.

This is an issue with the C++ standardization process as much as with the language itself. AIUI when std::optional (and std::variant, which has similar issues) were defined, there was a push to get new syntax into the language itself that would’ve been similar to Rust’s match statement.

However, that never made it through the standardization process, so we ended up with “library variants” that are not safe in all circumstances.

Here’s one of the papers from that time, though there are many others arguing different sides: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p00...

34. spookie ◴[] No.43606061{5}[source]
I've written some multithreaded rust and I've gotta say, this does not reflect my experience. It's just as easy to make a mess, as in any other language.
replies(2): >>43606447 #>>43609320 #
35. genrilz ◴[] No.43606286{7}[source]
EDIT: A summary of this is that it is impossible to write a sound std::vec::Vec implementation if NonZero::new_unchecked is a safe function. This is specifically because creating a NonZero value which is 0 is undefined behavior, which is exploited by niche optimization. If you created your own `struct MyNonZero(u8)`, then you wouldn't need to mark MyNonZero::new_unchecked as unsafe, because creating MyNonZero(0) is a "valid" value which doesn't trigger undefined behavior.

The issue is that this could potentially allow creating a struct whose invariants are broken in safe rust. This breaks encapsulation, which means modules which use unsafe code (like `std::vec`) have no way to stop safe code from calling them with the invariants they rely on for safety broken. Let me give an example starting with an enum definition:

  // Assume std::vec has this definition
  struct Vec<T> {
    capacity: usize,
    length:   usize,
    arena:    *mut T
  }
  
  enum Example {
    First {
      capacity: usize,
      length:   usize,
      arena:    usize,
      discriminator: NonZero<u8>
    },
    Second {
      vec: Vec<u8>
    }
  }
Now assume the compiler has used niche optimization so that if the byte corresponding to `discriminator` is 0, then the enum is `Example::Second`, while if the byte corresponding to `discriminator` is not 0, then the enum is `Example::First` with discriminator being equal to its given non-zero value. Furthermore, assume that `Example::First`'s `capacity`, `length`, and `arena` fields are in the same position as the fields of the same name for `Example::Second.vec`. If we allow `fn NonZero::new_unchecked(u8) -> NonZero<u8>` to be a safe function, we can create an invalid Vec:

  fn main() {
    let evil = NonZero::new_unchecked(0);
  
    // We write as an Example::First,
    // but this is read as an Example::Second
    // because discriminator == 0 and niche optimization
    let first = Example::First {
      capacity: 9001, length: 9001,
      arena: 0x20202020,
      discriminator: evil
  };

    if let Example::Second{ vec: bad_vec } = first {
      // If the layout of Example is as I described,
      // and no optimizations occur, we should end up in here.

      // This writes 255 to address 0x20202020
      bad_vec[0] = 255;
    }
  }
So if we allowed new_unchecked to be safe, then it would be impossible to write a sound definition of Vec.
36. josephg ◴[] No.43606447{6}[source]
Me too. I agree that it's not a bed of roses - and all the memory safety guarantees in the world don't stop you from making a huge mess. But I haven't run into any of the impossible-to-debug crashes / heisenbugs in my multithreaded rust code that I have in C/C++.

I think rust delivers on its safety promise.

replies(1): >>43608646 #
37. felbane ◴[] No.43606483[source]
It certainly used to, but tbh C++ since 17 has been pretty decent and continually improving.

That said, I still prefer to use it only where necessary.

38. tialaramex ◴[] No.43606485{6}[source]
I think sum types in general, and Option<T> in particular, are nicer. But the reason C# has nullability isn't that they disagree with me; it's that fundamentally the CLR has the same model as Java: all these types can be null. Even though in the modern C# language you can say "no, null is never OK", at runtime on the CLR, too bad, maybe it's null anyway.

For example if I write a C# function which takes a Goose, specifically a Goose, not a Goose? or similar - well, too bad the CLR says my C# function can be called by this obsolete BASIC code which has no idea what a Goose is, but it's OK because it passed null. If my code can't cope with a null? Too bad, runtime exception.

In real C# apps written by an in-house team this isn't an issue, Ollie may not be the world's best programmer but he's not going to figure out how to explicity call this API with a null, he's going to be stopped by the C# compiler diagnostic saying it needs a Goose, and worst case he says "Hey tialaramex, why do I need a Goose?". But if you make stuff that's used by people you've never met it can be an issue.

replies(1): >>43607058 #
39. Guthur ◴[] No.43606488{4}[source]
Because of the cult-like belief structures growing up around Rust, it's clear as day for us on the outside. I see it from the evangelists in the company I work for: "Rust is faster and safer to develop with when compared to C++". I'm no C++ fan, but it's obviously nonsense.

I feel people took the comparison of rust to c and extrapolated to c++ which is blatantly disingenuous.

replies(2): >>43607202 #>>43608104 #
40. FreakLegion ◴[] No.43606732{7}[source]
What do you count as data? I can keep naming big breaches that didn't involve exploits, like the Caesars and MGM ransomware attacks, or Russia getting deep into Microsoft. There aren't good public data sets, though.

As an example of a bad data set for this conversation, the vast majority of published CVEs have never been used by an attacker. CISA's KEVs give a rough gauge of this, with a little north of 1300 since 2021, and that includes older CVEs that are still in use, like EternalBlue. Some people point to the cardinality of CVE databases as evidence of something, but that doesn't hold up to scrutiny of actual attacks. And this is all before filtering down to memory safety RCE CVEs.

Probably the closest thing to a usable data set here would be reports from incident response teams like Verizon's, but their data is of course heavily biased towards the kinds of incidents that require calling in incident response teams. Last year they tagged something like 15% of breaches as using exploits, and even that is a wild overestimate.

> Memory safety is definitely a worthwhile investment.

In a vacuum, sure, but Python, Java, Go, C#, and most other popular languages are already memory safe. How much software is actively being written in unsafe languages? Back in atmosphere, there's way more value in first making sure all of your VPNs have MFA enabled, nobody's using weak or pwned passwords, employee accounts are deactivated when they leave the company, your help desk has processes to prevent being social engineered, and so on.

replies(1): >>43607918 #
41. dwattttt ◴[] No.43607058{7}[source]
> For example if I write a C# function which takes a Goose, specifically a Goose, not a Goose? or similar - well, too bad the CLR says my C# function can be called by this obsolete BASIC code which has no idea what a Goose is, but it's OK because it passed null. If my code can't cope with a null? Too bad, runtime exception.

That's actually no different to Rust still; if you try, you can pass a 0 value to a function that only accepts a reference (i.e. a non-zero pointer), be it by unsafe, or by assembly, or whatever.

Disagreeing with another comment on this thread, this isn't a matter of judgement around "whose bug is it? Should the callee check for null, or the caller?". Rust's win is by clearly articulating that the API takes non-zero, so the caller is buggy.

As you mention it can still be an issue, but there should be no uncertainty around whose mistake it is.

replies(2): >>43607931 #>>43613809 #
42. rcxdude ◴[] No.43607183{7}[source]
Yeah, anything can (and should) be marked unsafe if it could lead to memory safety problems. And so if it potentially breaks an invariant which is relied on for memory safety, it should be marked unsafe (conversely, code should not rely on an unchecked, safe condition for memory safety). That's basically how it works, Rust has the concept of unsafe functions so that libraries can communicate to users about what can and can't be relied on to keep memory safety without manual checking. This requires a common definition of 'safe', but it then means there isn't any argument about where the bug is: if the invariant isn't enforced by the compiler in safe code, then other code should not rely on it. If it is, then the bug is in the unsafe code that broke the invariant.
43. rcxdude ◴[] No.43607202{5}[source]
Care to explain the obvious, then? Rust is quite a lot nicer to write than C++ in my experience (and in fact, it seems like rust is most attractive to people who were already writing C++: people who still prefer C are a lot less likely to like Rust).
replies(1): >>43607234 #
44. Guthur ◴[] No.43607234{6}[source]
There is nothing attractive about c++ or rust, I really don't understand how anyone can think so, it has to be some sort of Stockholm syndrome. Think about it, before you started programming what about your experiences would make you appreciate the syntax soup of rust and c++?
replies(1): >>43607274 #
45. rcxdude ◴[] No.43607240{3}[source]
To add on another pitfall: iterator invalidation. In C++ you generally aren't allowed to modify a container while you're iterating through it, because it may re-allocate the memory and leave dangling pointers in the iterator, but the compiler doesn't check this. Rust's lifetime analysis closes this particular issue.

(Basically, the 'newer' C++ features do help a little with memory safety, but it's still fairly easy to trip up even if you restrict your own code from 'dangerous' operations. It's not at all obvious that a useful memory-safe subset of C++ exists. Even if you were to re-write the standard library to correct previous mistakes, it seems likely you would still need something like the borrow checker once you step beyond the surface level).
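As a sketch of the contrast (my example): the borrow checker rejects mutation during iteration at compile time, and the checked alternative for remove-while-iterating is an API like `retain`:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // `for x in &v { v.push(*x); }` would not compile: the shared borrow
    // held by the iterator forbids the mutable borrow needed by push().
    // The safe idiom for removing elements while traversing is retain():
    v.retain(|x| x % 2 == 0);
    assert_eq!(v, vec![2, 4]);
}
```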

46. rcxdude ◴[] No.43607274{7}[source]
I dunno, there's not much about my previous experience that would indicate much one way or the other. I have found, though, that I tend to prefer slightly denser, heterogeneous code and syntax than average. Low-syntax languages like Haskell and Lisps make my head hurt because the code is so formless it becomes hard for me to parse, while languages with more syntax and symbols are easier (though, there is a limit, APL,k, etc, are a little far I find)
47. goku12 ◴[] No.43607855{7}[source]
It's not that either, and you are validating the GP's point. Rust has a very specific 'unsafe' keyword that every Rust developer interprets implicitly and instinctively as 'potentially memory-unsafe'. Consequently, 'safe' is interpreted as the opposite: 'guaranteed memory-safe'. Using that word as an abbreviation among Rust developers is therefore not uncommon.

However, while speaking about the Rust language in general, all half-decent Rust developers specify that it's about memory safety. Even the Rust language homepage has only two instances of the word - 'memory-safety' and 'thread-safety'. The accusations of sleaziness and false advertising are disingenuous at best.

48. thayne ◴[] No.43607918{8}[source]
> How much software is actively being written in unsafe languages?

Well, let's see. Most major operating system kernels for starters. Web browsers. OpenSSL. Web servers/proxies like Apache, Nginx, HAProxy, IIS, etc. GUI frameworks like Gtk, Qt, parts of Flutter. And so on.

49. ◴[] No.43607931{8}[source]
50. goku12 ◴[] No.43608104{5}[source]
The cult that I see growing online a lot are those who are invested in attacking Rust for some reason, though their arguments often indicate that they haven't even tried it. I believe that we're focusing so much on Rust evangelists that we're neglecting the other end of the zealotry spectrum - the irrational haters.

The Rust developers I meet are more interested in showing off their creations than in evangelizing the language. Even those on dedicated Rust forums are generally very receptive to other languages - you can see that in action on topics like goreleaser or Zig's comptime.

And while you have already dismissed the other commenter's experience of finding Rust nicer than C++ to program in, I would like to add that I share their experience. I have nothing against C++, and I would like to relearn it so that I can contribute to some projects I like. But the reason why I started with Rust in 2013 was the memory-safety issues I was facing with C++. There are features in Rust that I find surprisingly pleasant, even with 6 additional years of experience in Python. Your opinion that Rust is unpleasant to the programmer is not universal, and disagreeing with it is not nonsense.

I appreciate the difficulty in learning Rust - especially getting past the stage of fighting the borrow checker. That's the reason why I don't promote Rust for immediate projects. However, I feel that the knowledge required to get past that stage is essential even for correct C and C++. Rust was easy for me to get started in, because of my background in digital electronics, C and C++. But once you get past that peak, Rust is full of very elegant abstractions that are similar to what's seen in Python. I know it works because I have trained js and python developers in Rust. And their feedback corroborates those assumptions about learning Rust.

51. goku12 ◴[] No.43608258{3}[source]
I agree with most of your assertions.

> ... with all the other features that create the "if it compiles it probably works" experience

While it's true that Rust's core safety feature is almost exclusively about memory safety, I think it contributes more to the overall safety of the program.

My professional background is more in electronics than in software. So when the Rust borrow checker complains, I tend to map them to nuances of the hardware and seek work-arounds for those problems. Those work-arounds often tend to be better restructuring of the code, with proper data isolation. While that may seem like hard work in the beginning, it's often better towards the end because of clarity and modularity it contributes to the code.

Rust won't eliminate logical bugs or runtime bugs from careless coding. But it does encourage better coding practices. In addition, the strict, but expressive type system eliminates more bugs by encoding some extra constraints that are verified at compile time. (Yes, there are other languages that do this better).

And while it is not guaranteed, I find Rust programs to just work if it compiles, more often than in the other languages I know. And the memory-safety system has a huge role in that experience.
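A small sketch of that kind of compile-time constraint encoding (a hypothetical newtype of my own, not from the comment):

```rust
// Hypothetical newtype: the only public constructor validates the
// invariant, so every Percentage in the program is known to be in range.
struct Percentage(u8);

impl Percentage {
    fn new(value: u8) -> Option<Percentage> {
        if value <= 100 { Some(Percentage(value)) } else { None }
    }

    fn get(&self) -> u8 {
        self.0
    }
}

fn main() {
    let p = Percentage::new(42).expect("in range");
    println!("valid percentage: {}", p.get());
    // Out-of-range values cannot be constructed at all:
    assert!(Percentage::new(150).is_none());
}
```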

52. pjmlp ◴[] No.43608646{7}[source]
Most likely because all that multi-threaded code accesses in-memory data structures internal to the process, which is the only multi-threaded scenario that Rust has some support for.

Make those threads access external resources simultaneously, or memory mapped to external writers, and there is no support from Rust's type system.

replies(2): >>43609957 #>>43616250 #
53. pjmlp ◴[] No.43608657{5}[source]
VC++ checked iterators are fast enough for my use cases, not everyone is trying to win a F1 race when having to deal with C++ written code.
54. pjmlp ◴[] No.43608672{4}[source]
Many also need to learn that there are configuration settings on their compilers that make those two cases the same, enabling bounds checking on operator[]().
replies(1): >>43610249 #
55. ViewTrick1002 ◴[] No.43609320{6}[source]
Safe rust prevents you from writing data races. All concurrent access is forced to be guarded by synchronization primitives. Eliminating an entire class of bugs.

You can still create a mess from logical race conditions, deadlocks and similar bugs, but you won't get segfaults because, after the tenth iteration, you forgot to diligently manage the mutex.

Personally I feel that in rust I can mostly reason locally, compared to say Go when I need to understand a global context whenever I touch multithreaded code.
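As a sketch of what that guarding looks like in practice: safe Rust only lets multiple threads mutate shared state through a synchronization wrapper such as `Arc<Mutex<T>>`, so "forgetting the mutex" is a compile error rather than a latent data race:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The shared counter is only reachable through the Mutex;
    // handing a bare `&mut u32` to several threads would not compile.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..10)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Locking is mandatory: the data has no other access path.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Prints 10: no data race is expressible here in safe code.
    println!("{}", *counter.lock().unwrap());
}
```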

56. sksxihve ◴[] No.43609957{8}[source]
What mainstream language has type system features that make multi-threaded access to external resources safe?

Managing something like that is a design decision of the software being implemented not a responsibility of the language itself.

replies(1): >>43610456 #
57. criddell ◴[] No.43610249{5}[source]
Sure, but at() is guaranteed to throw an exception, while going out of bounds with operator[] is undefined behavior by default; an implementation may add its own checks, but it isn't required to throw. C++26 is tweaking this, but it's still going to differ implementation to implementation.

At least that's my understanding of the situation. Happy to be corrected though.

58. pjmlp ◴[] No.43610456{9}[source]
None, however the fearless concurrency sales pitch usually leaves that scenario as a footnote.
59. steveklabnik ◴[] No.43612667{7}[source]
> Whether the unsafety should be blamed on the outside code that's allowed to create a 0-valued NonZero<…> or on the code that requires this purported invariant in the first place is ultimately a matter of judgment, that people may freely disagree about.

It's not, though. NonZero<T> has an invariant that a zero value is undefined behavior. Therefore, any API which allows for the ability to create one must be unsafe. This is a very straightforward case.
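The standard library draws exactly this line: the checked constructor is safe and returns an Option, while the unchecked one is `unsafe` precisely because a zero value would be undefined behavior (sketch using NonZeroU32):

```rust
use std::num::NonZeroU32;

fn main() {
    // The safe constructor enforces the invariant and returns Option.
    assert!(NonZeroU32::new(0).is_none());
    assert_eq!(NonZeroU32::new(5).map(NonZeroU32::get), Some(5));

    // The unchecked constructor exists, but the caller must promise
    // the value is nonzero; passing 0 here would be UB.
    let n = unsafe { NonZeroU32::new_unchecked(5) };
    assert_eq!(n.get(), 5);
    println!("ok");
}
```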

60. steveklabnik ◴[] No.43612736{3}[source]
Here's a program that uses only std::unique_ptr:

  #include<iostream>
  #include<memory>
  
  int main() {

      std::unique_ptr<int> null_ptr;
    
      std::cout << *null_ptr << std::endl; // Undefined behavior
  }
Clang 20 compiles this code with `-std=c++23 -Wall -Werror`. If you add -fsanitize=undefined, it will print

  ==1==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55589736d8ea bp 0x7ffe04a94920 sp 0x7ffe04a948d0 T1)
or similar.
61. steveklabnik ◴[] No.43612881{5}[source]
> whether or not it was better to design around the fact that there could be the absence of “something” coming into existence when it should have been initialized

So this is actually why "no null, but optional types" is such a nice spot in the programming language design space. Because by default, you are making sure it "should have been initialized," that is, in Rust:

  struct Point {
      x: i32,
      y: i32,
  }
You know that x and y can never be null. You can't construct a Point without those numbers existing.

By contrast, here's a point where they could be:

  struct Point {
      x: Option<i32>,
      y: Option<i32>,
  }
You know by looking at the type if it's ever possible for something to be missing or not.

> Are there any pitfalls in Rust when Optional does not return anything?

So, Rust will require you to handle both cases. For example:

    let x: Option<i32> = Some(5); // adding the type for clarity

    dbg!(x + 7); // try to debug print the result
This will give you a compile-time error:

     error[E0369]: cannot add `{integer}` to `Option<i32>`
       --> src/main.rs:4:12
        |
    4   |     dbg!(x + 7); // try to debug print the result
        |          - ^ - {integer}
        |          |
        |          Option<i32>
        |
    note: the foreign item type `Option<i32>` doesn't implement `Add<{integer}>`
It's not so much "pitfalls" exactly, but you can choose to do the same thing you'd get in a language with null: you can choose not to handle that case:

    let x: Option<i32> = Some(5); // adding the type for clarity
    
    let result = match x {
        Some(num) => num + 7,
        None => panic!("we don't have a number"),
    };

    dbg!(result); // try to debug print the result
This will successfully print, but if we change `x` to `None`, we'll get a panic, and our current thread dies.

Because this pattern is useful, there's a method on Option called `unwrap()` that does this:

  let result = x.unwrap();
And so, you can argue that Rust doesn't truly force you to do something different here. It forces you to make an active choice, to handle it or not to handle it, and in what way. Another option, for example, is to return a default value. Here it is written out, and then with the convenience method:

    let result = match x {
        Some(num) => num + 7,
        None => 0,
    };

  let result = x.unwrap_or(0);
And you have other choices, too. These are just two examples.

--------------

But to go back to the type thing for a bit, knowing statically you don't have any nulls allows you to do what some dynamic language fans call "confident coding," that is, you don't always need to be checking if something is null: you already know it isn't! This makes code more clear, and more robust.

If you take this strategy to its logical end, you arrive at "parse, don't validate," which uses Haskell examples but applies here too: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

62. int_19h ◴[] No.43613809{8}[source]
The difference is that C# has well-defined behavior in this case - a non-nullable annotation is really "not-nullable-ish", and there are cases even in the language itself where code without any casts in it will observe null values of such types. It's just a type system hole they allow for convenience and back-compat.

OTOH with Rust you'd have to violate its safety guarantees, which if I understand correctly triggers UB.

replies(1): >>43614953 #
63. steveklabnik ◴[] No.43614953{9}[source]
> which if I understand correctly triggers UB.

Yes, your parent's example would be UB, and require unsafe.

64. josephg ◴[] No.43616250{8}[source]
> Make those threads access external resources simultaneously, or memory mapped to external writers, and there is no support from Rust type system.

I don’t think that’s true.

External thread-unsafe resources like that are similar in a way to external C libraries: they’re sort of unsafe by default. It’s possible to misuse them to violate rust’s safe memory guarantees. But it’s usually also possible to create safe struct / API wrappers around them which prevent misuse from safe code. If you model an external, thread-unsafe resource as a struct that isn’t Send / Sync then you’re forced to use the appropriate threading primitives to interact with the resource from multiple threads. When you use it like that, the type system can be a great help. I think the same trick can often be done for memory mapped resources - but it might come down to the specifics.
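A minimal sketch of that wrapper pattern (DeviceHandle is a hypothetical stand-in for some thread-unsafe external resource; the raw-pointer marker makes the type neither Send nor Sync, so safe code cannot move it to another thread and must talk to it through a channel instead):

```rust
use std::marker::PhantomData;
use std::sync::mpsc;
use std::thread;

// Hypothetical wrapper around a thread-unsafe external resource.
struct DeviceHandle {
    state: u32,
    _not_thread_safe: PhantomData<*mut ()>, // makes the type !Send + !Sync
}

impl DeviceHandle {
    fn open() -> Self {
        DeviceHandle { state: 0, _not_thread_safe: PhantomData }
    }
    fn write(&mut self, v: u32) {
        self.state = v;
    }
    fn read(&self) -> u32 {
        self.state
    }
}

fn main() {
    // let dev = DeviceHandle::open();
    // thread::spawn(move || dev.read()); // does not compile:
    // `*mut ()` cannot be sent between threads safely.

    // The sanctioned pattern: one thread owns the handle, other
    // threads send it commands through a channel.
    let (tx, rx) = mpsc::channel::<u32>();

    let owner = thread::spawn(move || {
        let mut dev = DeviceHandle::open();
        for v in rx {
            dev.write(v);
        }
        dev.read()
    });

    for v in [1, 2, 3] {
        tx.send(v).unwrap();
    }
    drop(tx); // close the channel so the owner thread finishes

    println!("last value written: {}", owner.join().unwrap());
}
```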

If you disagree, I’d love to see an example.

replies(1): >>43618901 #
65. pjmlp ◴[] No.43618901{9}[source]
Shared memory, shared files, hardware DMA, shared database connections to the same database.

You can control safety as much as you like from the Rust side; there is no way to validate that the data coming into the process memory doesn't get corrupted by the other side while it is being read from the Rust side.

Unless access is built in a way that all parties accessing the resource have to play by the same validation rules before writing into it, using OS IPC resources like shared mutexes, semaphores, and critical sections.

The kind of typical readers-writers algorithms in distributed computing.