Perhaps it is faster than already-existing implementations, sure, but not "faster than C", and it is odd to make such claims.
Also I'm pretty sure that the C implementation had more man hours put into it than the Rust one.
I think there's lots of value in wrapping a raw/unsafe implementation with a rust API, but that's not quite what most people think of when writing code "in rust".
zlib-ng can be compiled to whatever target arch is necessary, and the original post doesn't mention how it was compiled and what architecture and so on.
It's another case of why not to trust microbenchmarks.
Unsafe Rust still has to conform to many of Rust’s rules. It is meaningfully different than C.
    unsafe {
        let x_tmp0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x10);
        xmm_crc0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x01);
        xmm_crc1 = _mm_xor_si128(xmm_crc1, x_tmp0);
        xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_crc0);
    }
Kidding aside, I thought the purpose of Rust was safety, but the keyword unsafe is sprinkled liberally throughout this library. At what point does it really stop mattering whether this is C or Rust?

Presumably with inline assembly both languages can emit what is effectively the same machine code. Is the Rust compiler a better optimizing compiler than C compilers?
Fortunately these “which language is best” SLOC measuring contests are just frivolous little things that only silly people take seriously.
The Rust compiler is indeed better than the C one, largely because of having more information and doing full-program optimisation. A `vec_foo = vec_foo.into_iter().map(...).collect::<Vec<Foo>>()`, for example, isn't going to do any bounds checks or allocate.
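Roughly like this (a sketch; the allocation reuse is a standard-library optimization for matching size/alignment, not a documented guarantee):

    // Consumes the Vec and maps in place; with i32 -> i32 the stdlib
    // can reuse the original allocation instead of allocating a new one.
    fn double_all(v: Vec<i32>) -> Vec<i32> {
        v.into_iter().map(|x| x * 2).collect()
    }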
Which is exactly the point, other languages have unsafe implicitly sprinkled in every single line.
Rust tries to bound and explicitly delimit where unsafe code is, to make review and verification efforts precise.
"In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you’ll ensure the code inside an unsafe block will access memory in a valid way."
Since you say you already know that much Rust, you can be that programmer!

In good practice it’s used judiciously in a codebase where it makes sense. Those sections receive extra attention and analysis by the developers.
Of course you can find sloppy codebases where people reach for unsafe as a way to get around Rust instead of writing code the Rust way, but that’s not the intent.
You can also find die-hard Rust users who think unsafe should never be used and make a point to avoid libraries that use it, but that’s excessive.
rustc uses LLVM just as clang does, so to a first approximation they're the same. For any given LLVM IR you can mostly write equivalent Rust and C++ that causes the respective compiler to emit it (the switch fallthrough thing mentioned in the article is interesting though!) So if you're talking about what's possible (as opposed to what's idiomatic), the question of "which language is faster" isn't very interesting.
> isn't going to do any bounds checks or allocate.
You need to add explicit bounds checks or explicit allocations in C, though. They are not there if you do not add them yourself.
That depends. If, for you, safety is something relative and imperfect rather than absolute, guaranteed, and reliable, then the answer is: once you have the first non-trivial unsafe block that has not gotten standard-library-level scrutiny. But if that's your view, you should not be all that starry-eyed about "Rust is a safe language!" to begin with.
On the other hand, if you really do want to rely on Rust's strong safety guarantees, then the answer is: From the moment you use any library with unsafe code.
My 2 cents, anyway.
If you smell it when you're not working on the gas lines, that's a signal.
zlib itself seems pretty antiquated/outdated these days, but it does remain popular, even as a basis for newer parallel-friendly formats such as https://www.htslib.org/doc/bgzip.html
In safe Rust (the default), memory access is validated by the borrow checker and type system. Rust’s goal of soundness means safe Rust should never cause out-of-bounds access, use-after-free, etc; if it does, then there's a bug in the Rust compiler.
But even then, your code is calling out to kernel functions which are probably written in C or assembly, and therefore "dangerous."
Rust code safety is overhyped frequently, but reducing an attack surface is still an improvement over not doing so.
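Concretely, a trivial sketch of the soundness guarantee mentioned above: the same out-of-bounds index that is undefined behavior in C becomes a deterministic panic in safe Rust.

    fn main() {
        let v = vec![1, 2, 3];
        let i = 10;
        // Safe Rust: this panics with "index out of bounds" at runtime
        // instead of silently reading arbitrary memory.
        println!("{}", v[i]);
    }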
It tends to be found in drivers, kernels, vector code, and low-level implementations of data structures and allocators and similar things. Not typical application code.
As a general rule it should be avoided unless there's a good reason to do it. But it's there for a reason. It's almost impossible to create a systems language that imposes any kind of rules (like ownership etc.) that covers all possible cases and all possible optimization patterns on all hardware.
The impression a naive reader might take is that idiomatic/safe/best-practices Rust has now closed the performance gap. But clearly that's not happening here.
It's like letting a wet dog (who'd just been swimming in a nearby swamp) run loose inside your hermetically sealed cleanroom.
I also wonder how much of an improvement you'd get by just asking for a "simple rewrite" in the existing language. I suspect there are often performance improvements to be had from simple changes in the existing language.
Some reading: https://jolynch.github.io/posts/use_fast_data_algorithms/
(As an aside, at my last job container pushes / pulls were in the development critical path for a lot of workflows. It turns out that sha256 and gzip are responsible for a lot of the time spent during container startup. Fortunately, Zstandard is allowed, and blake3 digests will be allowed soon.)
Poorly-written unsafe code can have effects extending out into safe code. But correctly-written unsafe code does not have any effects on safe code w.r.t. memory safety. So to ensure memory safety, you just have to verify the correctness of the unsafe code (and any helper functions, etc., it depends on), rather than the entire codebase.
Also, some forms of unsafe code are far less dangerous than others in practice. E.g., most of the SIMD functions are practically safe to call in every situation, but they all have 'unsafe' slapped on them due to being intrinsics.
> You need to add explicit bounds checks or explicit allocations in C, though. They are not there if you do not add them yourself.
Unfortunately, you do need to allocate a new buffer in C if you change the type of the elements. The annoying side of strict aliasing is that every buffer has a single type that's set in stone for all time. (Unless you preemptively use unions for everything.)
Ah yes, C++ is just one safety feature away from replacing Rust, surely, any moment now. The bizarre world C++ fanboys live in.
Every single person that had been writing C++ for a while and isn't a victim of Stockholm syndrome would be happy when C++ is put to bed once and for all. It's a horrible language only genuinely enjoyed by bad programmers.
"You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
Dereference a raw pointer
Call an unsafe function or method
Access or modify a mutable static variable
Implement an unsafe trait
Access fields of a union
It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You’ll still get some degree of safety inside of an unsafe block.

In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you’ll ensure the code inside an unsafe block will access memory in a valid way.
People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."
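To make the first of those superpowers concrete, a minimal sketch:

    fn main() {
        let x = 42u32;
        let p = &x as *const u32; // creating a raw pointer is safe
        // Dereferencing it is the unsafe part; here we can see p is valid.
        let y = unsafe { *p };
        assert_eq!(y, 42);
    }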
No need to get all moral about it.
5-15% is a big deal for low-level foundational code, especially if you get it along with some other guarantees, which may be of greater importance.
Unsafe code is not inherently faster than safe code, though sometimes, it is. Unsafe is for when you want to do something that is legal, but the compiler cannot understand that it is legal.
However, if you verify that the unsafe blocks are correct, and the safe API wrapping them rejects invalid inputs, then they won't be able to cause unsafety anywhere.
This does reduce how much code you need to review for memory safety issues. Once it's encapsulated in a safe API, the compiler ensures it can't be broken.
This encapsulation also prevents combinatorial explosion of complexity when multiple (unsafe) libraries interact.
I can take zlib-rs, and some multi-threaded job executor (also unsafe internally), but I don't need to specifically check how these two interact. zlib-rs needs to ensure they use slices and lifetimes correctly, the threading library needs to ensure it uses correct lifetimes and type bounds, and then the compiler will check all interactions between these two libraries for me. That's like (M+N) complexity to deal with instead of (M*N).
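A sketch of that composition, with the standard library's scoped threads standing in for the job executor (illustrative only, not zlib-rs code):

    use std::thread;

    fn process_all(chunks: &mut [Vec<u8>]) {
        thread::scope(|s| {
            for chunk in chunks.iter_mut() {
                // Each chunk is a distinct &mut borrow, so the compiler has
                // already proven no two threads alias the same buffer; no
                // manual cross-library audit required.
                s.spawn(move || {
                    chunk.push(0); // imagine a zlib-rs call here
                });
            }
        });
    }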
https://en.wikipedia.org/wiki/Odorizer#Natural_gas_odorizers
"People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."
I hope the SIMD intrinsics make it to stable soon so folks can ditch unnecessary unsafes if that's the only issue.
https://doc.rust-lang.org/std/intrinsics/simd/index.html
So I suspect it's a matter of two things:
1. You're calling out to what's basically assembly, so buyer beware. This is basically FFI into C/asm.
2. There's no guarantee that what comes out of those 128-bit vectors will follow any sanity or expectations, so... buyer beware. Same reason std::mem::transmute is marked unsafe.
It's really the weakest form of unsafe.
Still entirely within the bounds of a sane person to reason about.
The major reason that rust can be faster than C though, is because due to the way the compiler is constructed, you can lean on threading idiomatically. The same can be true for Go, coroutines vs no coroutines in some cases is going to be faster for the use case.
You can write these things to be the same speed or even faster in C, but you won’t, because it’s hard and you will introduce more bugs per KLOC in C with concurrency vs Go or Rust.
In other words, unsafe works if you use it carefully and keep it contained.
Richard Hipp denounces claims that SQLite is the widest-used piece of code in the world and offers zlib as a candidate for that title, which I believe he is entirely correct about. I’ve been consciously using it for almost thirty years, and for a few years before that without knowing I was.
As is the case with any language, of course; this is neither for nor against Rust.
So, in theory, unsafe rust opens the floodgates. In practice, though, you can use small fragments of unsafe code that programmers can fairly easily check to be safe.
Then, once you’ve convinced yourself that those fragments are safe, you can be assured that your whole program is safe (using ‘safe’ in the rust sense, of course)
So, there may be some small islands of unsafe code that require extra attention from the programmer, but that should be just a tiny fraction of all lines, and you should be able to verify those islands in isolation.
For more information: https://news.ycombinator.com/item?id=43382176
It's due to a couple of different things interacting with each other: unsafe relies on invariants that safe code must also uphold, and that the privacy boundary in Rust is the module.
Before we get into the unsafe stuff, I want you to consider an example. Is this Rust code okay?
    struct Foo {
        bar: usize,
    }

    impl Foo {
        fn set_bar(&mut self, bar: usize) {
            self.bar = bar;
        }
    }
No unsafe shenanigans here. This code is perfectly safe, if a bit useless.

Let's talk about unsafe. The canonical example of unsafe code being affected outside of unsafe itself is the implementation of Vec<T>. Vecs look something like this (the real code is different for reasons that don't really matter in this context):
    struct Vec<T> {
        ptr: *mut T,
        len: usize,
        cap: usize,
    }
The pointer is to a bunch of Ts in a row, the length is the current number of Ts that are valid, and the capacity is the total number of Ts. The length and the capacity are different so that memory allocation is amortized; the capacity is always greater than or equal to the length.

That property is very important! If the length is greater than the capacity, when we try and index into the Vec, we'd be accessing random memory.
So now, this function, which is the same as Foo::set_bar, is no longer okay:
    impl<T> Vec<T> {
        fn set_len(&mut self, len: usize) {
            self.len = len;
        }
    }
This is because the unsafe code inside of other methods of Vec<T> needs to be able to rely on the fact that len <= capacity. And so you'll find that Vec<T>::set_len in Rust is marked as unsafe, even though it doesn't contain unsafe code. It still requires judicious use to not introduce memory unsafety.

And this is why the module being the privacy boundary matters: the only way to set len directly in safe Rust code is code within the same privacy boundary as the Vec<T> itself. And so, that's the same module, or its children.
* defined in C, undefined in Rust
* undefined in C, undefined in Rust
* defined in Rust, undefined in C
* defined in Rust, defined in C
So many very useful features of Rust and its core library spend years in "nightly" because the maintainers of those features don't have the discipline to see them through.
This is where the rubber hits the road. Rust does not allow you to do this, in the sense that this is possibly undefined behavior. That "possibly" is why the compiler allows you to write this code, because by saying "unsafe", you are promising that this specific arbitrary address is legal for you to write to. But that doesn't mean that it's always legal to do so.
For example, if the language is able to say, for any two pointers, the two pointers will not overlap - that would enable the backend to optimise further. In C this requires an explicit restrict keyword. In Rust, it’s the default.
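A minimal sketch of what that default buys (hypothetical function, not from the post); in C you would need restrict on both parameters to promise the same thing:

    // &mut and & are guaranteed not to alias, so the optimizer may keep
    // values in registers across the writes, with no `restrict` needed.
    fn add_into(dst: &mut [f32], src: &[f32]) {
        for (d, s) in dst.iter_mut().zip(src) {
            *d += *s;
        }
    }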
By the way this isn’t theoretical. Image decoders written in Rust are faster than ones written in C, probably because the backend is able to autovectorise better. (https://www.reddit.com/r/rust/comments/1ha7uyi/memorysafe_pn...).
grep (C) is about 5-10x slower than ripgrep (Rust). That’s why ripgrep is used to execute all searches in VS Code and not grep.
Or a different tack. If you wrote a program that needed to sort data, the Rust version would probably be faster thanks to the standard library sort being the fastest, across languages (https://github.com/rust-lang/rust/pull/124032). Again, faster than C.
Happy to give more examples if you’re interested.
There’s nothing special about C that entitles it to the crown of “nothing faster”. This would have made sense in 2005, not 2025.
I don't have the personality or time to wade into committee type work, so I have no idea what it would take to get those two across the finish line, but the allocator one in particular makes me question Rust for lower level applications. I think it's just not going to happen.
If Zig had proper ADTs and something equivalent to borrow checker, I'd be inclined to poke at it more.
Of safe SIMD, but some stuff in core::arch is stabilized. Here's the first bit called in the example of the OP: https://doc.rust-lang.org/core/arch/x86/fn._mm_clmulepi64_si...
To continue the analogy of the dog: you let the dog get wet (=you use unsafe), but you put a cleaning room (=the sound and safe API) in front of your sealed room (=the safe code world).
In fact, it has already been merged two weeks ago: https://github.com/rust-lang/stdarch/pull/1714
The change is already visible on nightly: https://doc.rust-lang.org/nightly/core/arch/x86/fn._mm_xor_s...
Compared to stable: https://doc.rust-lang.org/core/arch/x86/fn._mm_xor_si128.htm...
So this should be stable in 1.87 on May 15 (Rust's 10 year anniversary since 1.0)
At the same time, unsafe doesn't just turn off all compiler checks, it just gives you tools to go around them, as well as tools that happen to go around them because of the way they work. Rust unsafe is this weird mix of being safer than pure C, but harder to grasp; with lots of nuanced invariants you have to uphold. If you want to ensure your code still has all the nice properties the compiler guarantees (which go way beyond memory safety) you would have to carefully examine every unsafe block. Which few people do, but you generally still end up with a better status quo than C/C++ where any code can in principle break properties other code was trying to uphold.
libdeflate is not zlib compatible. It doesn't support streaming decompression.
libdeflate is an impressive library, but it doesn't help if you need to stream data rather than having it all in memory at once.
First, I would say that "ripgrep is generally faster than GNU grep" is a true statement. But sometimes GNU grep is faster than ripgrep and in many cases, performance is comparable or only a "little" slower than ripgrep.
Secondly, VS Code using ripgrep because of its speed is only one piece of the picture. Licensing was also a major consideration. There is an issue about this where they originally considered ripgrep (and ag if I recall correctly), but I'm on mobile so I don't have the link handy.
In practice (in both languages) you check what the actual unsafe code does (or "all" code in C's case), note code that depends on external actors for safety (it's not all C code, nor is it all unsafe Rust blocks), and check their callers (and callers callers, etc).
But using ordinary module encapsulation and private fields, you can scope the code that needs to uphold those preconditions to a particular module.
So the "trusted computing base" for the unsafe code can still be scoped and limited, allowing you to reduce the amount of code you need to audit and be particularly careful about for upholding safety guarantees.
Basically, when writing unsafe code, the actual unsafe operations are scoped to only the unsafe blocks, and they have preconditions that you need to scope to a particular module boundary to ensure that there's a limited amount of code that needs to be audited to ensure it upholds all of the safety invariants.
Ralf Jung has written a number of good papers and blog posts on this topic.
All safe code in existence running on von Neumann architectures is built on a foundation of unsafe code. The goal of all memory-safe languages is to provide safe abstractions on top of an unsafe core.
There's even unsafe usage in the standard library and it's used a lot in embedded libraries.
Yep! For example, https://github.com/Speykious/cve-rs is an example of a bug in the Rust compiler, which allows something that it shouldn't. It's on its way to being fixed.
> or miss things no?
This is the trickier part! Yes, even proofs have axioms, that is, things that are accepted without proof, that the rest of the proof is built on top of. If an axiom is incorrect, so is the proof, even though we've proven it.
Normally in safe code you can’t violate the language rules because the compiler enforces various rules. In unsafe mode, you can do several things the compiler would normally prevent you from doing (e.g. dereferencing a naked pointer). If you uphold all the preconditions of the language, safety is preserved.
What’s unfortunate is that the rules you are required to uphold can be more complex than you might anticipate if you’re trying to use unsafe to write C-like code. What’s fortunate is that you rarely need to do this in normal code and in SIMD which is what the snippet is representing there’s not much danger of violating the rules.
You can contort C to trick it into being fast[1], but it quickly becomes an unmaintainable nightmare so almost nobody does.
1: eg, correct use of restrict, manually creating move semantics, manually creating small string optimizations, etc...
We will see more and more Rust libraries trounce their C counterparts in speed, because Rust is more fun to work in because of the above. Rust has democratized high-speed and concurrent systems programming. Projects in it will attract a larger, more diverse developer base -- developers who would be loath to touch a C code base for (very justified) fear of breaking something.
...at least outside of loads/stores. From a bit of looking at the code, though, it seems like a good amount of those should be doable in a safe way with some abstractions.
That's only true at the same level of scrutiny as "all C operations can cause undefined behaviour, regardless of what they are", which I find similarly shallow.
> The standard library will not deviate in naming or type signature of any intrinsic defined by an architecture.
I think this makes sense, just like any other intrinsic: unsafe to use directly, but with safe wrappers.
I believe that there are also some SIMD things that would have to inherently take raw pointers, as they work on pointers that aren't aligned, and/or otherwise not valid for references. In theory you could make only those take raw pointers, but I think the blanket policy of "follow upstream" is more important.
Unsafe code can be incorrect (or unsound), and needs to be careful about it. Part of being careful is that safe code can call the unsafe code in a way that triggers that unsoundness; in that way, safe code can cause undefined behaviour in unsafe code.
It's not always the case that this is possible; there are unsafe blocks that don't need to depend on safe code for its correctness.
Sure, you can technically just write your own vulnerability for your own program, inject it at an unsafe block, and see the whole world crumble... but the exact same is true for any form of FFI call in any language. Is Java memory safe? Yeah, just because I can grab a random pointer and technically break anything I want won't change that.
The fact that a memory vulnerability can either appear nowhere at all OR only within the couple hundred unsafe lines throughout the whole project is a night and day difference.
If I read TFA correctly, they came up with a library that is API compatible with the C one, but they've measured to be faster.
At that point I think in addition to safety benefits in other parts of the library (apart from unsafe micro optimizations as quoted), what they're leveraging is better compiler technology. Intuitively, I start to assume that the rust compiler can perhaps get away with more optimizations that might not be safe to assume in C.
And there are not many things we have statistics on in CS, but memory vulnerabilities being absolutely everywhere in unsafe languages, and Rust cleaning up the absolute majority of them even when only the new parts are written in Rust, are some of the few we do know, based on actual, real-life projects at Google and Microsoft, among others.
A memory safe low-level language is as novel as it gets. Rust is absolutely not just hype, it actually delivers and you might want to get on with the times.
By the way, the Rust compiler does generate such code, because under the hood LLVM runs an autovectorizer when you turn on optimizations. However, for the autovectorizer to do a good job you have to write code in a very special way, and you have no way of controlling whether it kicked in, nor whether it did a good job once it did.
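As one illustration of that "special way" (a sketch, not from the post): fixed-size chunks let LLVM see the trip count and drop bounds checks, which makes vectorization much more likely.

    fn sum_squares(xs: &[f32]) -> f32 {
        let mut acc = [0.0f32; 8];
        for chunk in xs.chunks_exact(8) {
            for i in 0..8 {
                // chunk has a known length of 8, so these indexing
                // bounds checks optimize away and the loop can vectorize
                acc[i] += chunk[i] * chunk[i];
            }
        }
        let mut total: f32 = acc.iter().sum();
        for &x in xs.chunks_exact(8).remainder() {
            total += x * x;
        }
        total
    }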
There’s work on creating safe abstractions (that also transparently scale to the appropriate vector instruction), but progress on that has felt slow to me personally and it’s not available outside nightly currently.
A simple example might be modifying a const value deep down in some class, where it only becomes apparent later in the program’s execution. Hence their analogy of the wet dog in a clean room - whatever beliefs you have about the structure of memory in your entire program, and guaranteed by the compiler, could have been undone by a rogue unsafe.
Rust encourages using unsafe to "teach" the language new design patterns and data structures; and uses this heavily in its standard library. For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.
Think of unsafe not as "this code is unsafe", but as "I've proven this code to be safe, and the borrow checker can rely on it to prove the safety of the rest of my program."
C code will go through a huge amounts of transformations by the compiler, and unless you are a compiler expert you will have no idea how the resulting code looks. It's not targeting the PDP-11 anymore.
Besides the famous "C is not a low-level language" blog post... I don't even get what you are thinking. C is not even the performance queen for large programs (the de facto standard today is C++ for good reasons), let alone for tiny ultra-hot loops like codecs and stuff, which are all hand-written assembly.
It's not even hard to beat C with something like Rust or C++, because you can properly do high level optimizations as the language is expressive enough for that.
When I started working in Rust, I'd want some feature or function, look it up, and find it was unstable, sometimes for years. This was frustrating at first, but then I'd go read the GitHub issue thread and find that there was some design or implementation concern that needed to be overcome, and that people were actively working on it and unwilling to stabilize the feature until they were sure it was the best possible design. And the result of that is that features that do get stabilized are well thought out, generalize, and compose well with everything else in the language.
Yes, I really want things like portable SIMD, allocators, generators, or Iterator::intersperse. But programming languages are the one place I really do want perfect to be the enemy of good. I'd rather it take 5+ years to stabilize features than for us to end up with another Swift or C++.
That wouldn't be valid at all.
C has a semantic model which was close to how early CPUs worked, but a lot has changed since. It's more like CPUs deliberately expose an API so that C programmers could feel at home, but stuff like SIMD and the like is non-existent in C besides as compiler extensions. But even just calling conventions, the stack, etc are all stuff you have no real control over in the C language, and a more optimal version of your code might want to do so. Sure, the compiler might be sufficiently smart, but then it might as well convert my Python script to that ultra-efficient machine code, right?
So no, you simply can't write everything in C, something like simd-json is just not possible. Can you put inline assembly into C? Yeah, but I can also call inline assembly from Scratch and JS, that's not C at all.
Also, Go is not even playing in the same ballpark as C/C++/Rust.
You can still use deflate for compression, but Brotli and Zstd have been available in all modern browsers for quite some time.
And there really aren't. The abbreviated/limited safety environment being exploited by this non-idiomatic Rust code seems to me to be basically isomorphic to the way you'd solve the problem in C.
Also, FWIW, that zippy Nim library has essentially zero CPU-specific optimizations that I could find. Maybe one tiny one in some checksumming bit. Optimization is specialization. So, I'd guess it's probably a little slower than zlib-ng now that this is pointed out, but as @hinkley observed, portability can also be a meaningful goal/axis.
Unsafe Rust is currently extremely underspecified and underdocumented, but it's designed to be far more specifiable than C. For example: aliasing rules. When and how you're allowed to alias references in unsafe code is not at all documented and under much active discussion; whereas in C pointer aliasing rules are well defined but also completely insane (casting pointers to a different type in order to reinterpret the bytes of an object is often UB even in completely innocuous cases).
Once Rust's memory model is fully specified and written down, unsafe Rust is trying to go for something much simpler, more teachable, and with less footguns than C.
Huge props to Ralf Jung and the opsem team who are working on answering these questions & creating a formal specification: https://github.com/rust-lang/unsafe-code-guidelines/issues
Rust does not have a specific "Rust runtime heap."
This isn't a wet dog in a cleanroom. This is cleanroom complex that has a very small outhouse that is labeled as dangerous.
Ah, so that was like, not in your comment, but in a parent.
> And there really aren't.
I mean, not all of the code is unsafe. From a cursory glance, there's surely way more here than I see in most Rust packages, but that doesn't mean that you get no advantages. I picked a random file, and chose some random code out of it, and see this:
    pub fn copy<'a>(
        dest: &mut MaybeUninit<DeflateStream<'a>>,
        source: &mut DeflateStream<'a>,
    ) -> ReturnCode {
        // SAFETY: source and dest are both mutable references, so guaranteed not to overlap.
        // dest being a reference to maybe uninitialized memory makes a copy of 1 DeflateStream valid.
        unsafe {
            core::ptr::copy_nonoverlapping(source, dest.as_mut_ptr(), 1);
        }
        // ...
    }
The semantics of safe code, `&mut T`, provide the justification for why the unsafe code is okay. Heck, this code wouldn't even be legal in C, thanks to strict aliasing. (Well, I guess you could argue that in C code they'd be of the same type, since you don't have "might be uninitialized" in C's type system, but again, this is an invariant encoded in the type system that C can't do, so it's not possible to express in C for that reason either.)

An optimized version that controls allocations, has good memory access patterns, uses SIMD, and uses multi-threading can easily be 100x faster or more. Better memory access alone can speed a program up 20x or more.
[1] FWIW, memcpy() arguments are declared restrict post-C99, the strict aliasing thing doesn't apply, for exactly the reason you're imagining.
Sorry but horrible comparison ;)
If you need to rely on unsafe in a memory-safe language for performance reasons, then there is an issue with the language compiler at that point that needs to be fixed. Simple as that.

The whole memory safety is the bread and butter of the language; the moment you start to bypass it for faster memory operations, you can start doing the same in any other language. I mean, you're literally bypassing the main selling point of the language. ¯\_(ツ)_/¯
From my past experiences with Rust, the team never had to think about data race once, or mutable volatile globals. And we all there suffered from those decades ago with C and sometimes C++ as well.
You like those and don't want to migrate? More power to ya! But badmouthing Rust with what seem fairly uninformed comments is just low. Inform yourself first.
No, not even close. You only lose Rust's safety guarantees when your unsafe code causes Undefined Behavior. Unsafe code that can be made to cause UB from Safe Rust is typically called unsound, and unsafe code that cannot be made to cause UB from Safe Rust is called sound. As long as your unsafe code is sound, then it does not break any of Rust's guarantees.
For example, unsafe code can still use slices or references provided by Safe Rust, because those are always guaranteed to be valid, even in an unsafe block. However, if from inside that unsafe block you then go on to manufacture an invalid slice or reference using unsafe functions, that is UB and you lose Rust's safety guarantees because of the UB.
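A minimal illustration of that sound/unsound distinction (hypothetical functions):

    // Sound: the invariant get_unchecked needs (index in bounds) is
    // established before the unsafe block, so no safe caller can cause UB.
    fn first_sound(v: &[u8]) -> Option<u8> {
        if v.is_empty() {
            return None;
        }
        // SAFETY: the slice is non-empty, so index 0 is in bounds.
        Some(unsafe { *v.get_unchecked(0) })
    }

    // Unsound: a safe caller can pass an empty slice and trigger UB.
    fn first_unsound(v: &[u8]) -> u8 {
        unsafe { *v.get_unchecked(0) }
    }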
It actually means "Rust needs to interface with many other systems that are not as stringent as it". Your interpretation has nothing to do with what's actually going on and I am surprised you misinterpreted the situation as hugely as you did.
...And even if everything was written in Rust, `unsafe` would still be needed because the lower you get [to the kernel] you get more and more non-determinism at places.
This "all or nothing" attitude is boring and tiring. We all wish things were super simple, black and white, and all-or-nothing. They are not.
Right, and in Rust, you don't have to do it yourself: the language does it for you. If the signature were in C, you'd have to analyze the callers to make sure that this property is upheld when invoked. In Rust, the compiler does that for you.
> the strict aliasing thing doesn't apply
Yes, this is the case in this specific instance due to it being literally memcpy, but if it were any other function with the same signature, the problem would exist. Again, I picked some code at random, I'm not saying this one specific instance is even the best one. The broader point of "Rust has a type system that lets you encode more invariants than C's" is still broadly true.
In C, sharing memory across an API boundary usually means choosing between:

- Exposing an unsafe API and relying on the caller to manually uphold invariants
- Doing things like defensive copying at a performance cost
In many cases Rust gives you the best of both worlds: sharing memory liberally while still having the compiler enforce correctness.
Which is: people complaining about Rust zealots are much more than actual Rust zealots. Thinking of it, I haven't seen a proper Rust zealot on HN for at least a year at this point.
So I don't know, maybe do less cheap digs. Tearing down straw men is pretty boring to watch.
Once you can internalize that you could unlock the power of encapsulation.
Rust uses this strategy of minimal/incremental stabilization quite often (see also: const generics, impl Trait); the difference between this and what drove me away from Swift is that MVPs aren't shipped unless it's clear that the design choices being made now will still be the right choices when the rest of the feature is ready.
As soon as you start playing with FFI and raw pointers in Python, NodeJS, Julia, R, C#, etc. you can easily lose the nice memory-safety properties of those languages: create undefined behavior, segfaults, etc. I'd say Rust is a lot nicer for checking unsafe correctness than other memory-safe languages, and also makes it easier to dip down to systems-level programming, yet it seems to get a lot of hate for these features.
Why define runtime independence as a goal, but then make it impossible to write runtime agnostic crates?
(Well, there's the "agnostic" crate at least now)
No it doesn't? That comment is expressing a human analysis. The compiler would allow you to stuff any pointer in that you want, even ones that overlap. You're right that some side effects of the runtime can be exploited to do that analysis. But that's true of C too! (Like, "these are two separate heap blocks", or "these are owned by two separate objects", etc...). Still human analysis.
Frankly you're overselling hard here. A human author can absolutely mess that analysis up, which is the whole reason Rust calls it "unsafe" to begin with.
I'm saying that even in a codebase with a lot of unsafe, the checks that are still performed have value.
The fact that the Rust maintainers allow people to put in half-baked features before they are fully designed is the biggest cultural failing of the language, IMO.
In nightly?
Hard disagree. Letting people try things out in the real world is how you avoid half-baked features. Easy availability of nightly compilers with unstable features allows way more people to get involved in the pre-stabilization polishing phase of things and raise practical concerns instead of theoretical ones.
C++ takes the approach of writing and nitpicking whitepapers for years before any implementations are ready and it's hard to see how that has led to better outcomes relatively speaking.
I am 100% sure that the smell they add to natural gas does not smell like rotten eggs.
Which library has fewer dependencies?

Is each library the same size? Which one is smaller?
1. Big library/compiler does a thing, and people really like it
2. Other compilers and libraries copy that thing, sometimes putting their own spin on it
3. All the kinks get worked out and they write a white paper
4. Eventually the thing becomes standard
That way, everything in the standard library is something that is fully-thought-out and feature-complete. It also gives much more room for competing implementations to be built and considered before someone stakes out a spot in the standard library for their thing.
I would argue that it's the opposite of a mistake. If you standardize everything before the ecosystem gets a chance to play with it, you risk making mistakes that you have to live with in perpetuity.
Until we design perfectly correct computer hardware, processors, and a sun which doesn't produce solar radiation, we can't rely on totally uniform correct execution of our code, so we should give up.
The reality is that while we can't prove the rust compiler is safe, we can keep using it and diligently fix any counter-examples, and that's good enough in practice. Over in the real world, where we can acknowledge "yes, it is impossible to prove the absence of all bugs" and simultaneously say "but things sure seem to be working great, so we can get on with life and fix em if/when they pop up".
Are C++ features really that much better thought out? Modules were "standardized" half a decade ago, but the list of problems with actually using them in practice is still pretty damn long to the point where adoption is basically non-existent.
I'm not going to pretend to be nearly as knowledgeable about C++ as Rust, but it seems like most new C++ features I hear about are a bit janky or don't actually fit that well with the rest of the language. Something that tends to happen when designing things in an ivory tower without testing them in practice.
I think the bigger point here is that doing SIMD in Rust is still painful.
There are efforts like portable-simd [1] to make this better, but in practice, many people are dropping down to low-level SIMD intrinsics and/or inline assembly, which are no better than their C equivalents.
And even if you try to provide some kind of safer abstraction, you're limited by the much more primitive type system, that can't distinguish between owned types, unique borrows, and shared borrows, nor can it distinguish thread safety properties.
So you're left to convention and documentation for that kind of information, but nothing checking that you're getting it right, making it easy to make mistakes. And even if you get it right at first, a refactor could change your invariants, and without a type system enforcing them, you never know until someone comes along with a fuzzer and figures out that they can pwn you
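For reference, the nightly portable-simd API mentioned above looks roughly like this (a sketch; requires the portable_simd feature on a nightly compiler):

    #![feature(portable_simd)] // nightly only
    use std::simd::Simd;

    // Four lanes added in one vector operation: no unsafe, and no
    // architecture-specific intrinsics.
    fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
        (Simd::from_array(a) + Simd::from_array(b)).to_array()
    }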
The fundamental problem with modules is that build systems for C++ have different abstractions and boundaries. C++ modules are like Rust async - something that just doesn't fit well with the language/system and got hammered in anyway.
The reason it seems like they come from nowhere is probably because you don't know where they come from. Most things go through boost, folly, absl, clang, or GCC (or are vendor-specific features) before going to std.
That being said, it's not just C++ that has this flow for adding features to the language. Almost every other major language that is not Rust has an authoritative specification.
> Is the Rust compiler a better optimizing compiler than C compilers?
First, I assume that the main Rust compiler uses LLVM. I also assume (big leap here!) that the LLVM optimization process is language agnostic (ChatGPT agrees, whatever that is worth). As long as the language frontend can compile to LLVM's language-independent intermediate representation (IR), then all languages can equally benefit from the optimizer.

Rust's type system (including ownership and borrowing, Sync/Send, etc.), along with its privacy features (allowing types to have private fields that can only be accessed by code in the module that defined them), allows you to create fully safe interfaces around code that uses unsafe; there is provably no combination of uses of the interface which leads to undefined behavior.
Now, yeah, it's possible to also use unsafe in Rust just for applying a local optimisation. And that has fewer benefits than a fully encapsulated safe interface, though is still easier to audit for potential UB than C.
So you're right that it's on a continuum, but the distinction between safe and unsafe code means you can more easily find the specific places where UB could occur, and the encapsulation and type system makes it possible to create safe abstractions over unsafe code.
Bad C programmers, though? Their stuff is more dangerous, they don't know when it is, they don't call it out, and they should probably stick to Rust.
All languages at some point interface with syscalls or low level assembly that can be done wrong, but one of Rust's selling points is a safe wrapping of low-level interactions. Like safe heap allocation/deallocation with `Box`, or swapping with `swap`, etc. Except... here.
Why does a library like zlib need to go beyond Rust's safe offerings? Why doesn't rust provide safe versions of the constructs zlib needs?
I take a hard line on this stuff because we can either keep repeating the fundamental mistake of believing things like "willpower" to write correct code are real, or we can move on and adopt better tooling.
I'm currently working with ~150 dependencies in my current project which I know would be a major hurdle in previous C or C++ projects.
Of course, in practice, even in Rust, it isn't strictly true that programs without unsafe can't crash with fatal runtime errors. There's always stack overflows, which will crash you with a SIGABRT or equivalent operating system error.
This take makes me sad. There are a lot of reasons why an open source contributor may not see something through. "Lack of discipline" is only one of them. Others that come to mind are: lack of time, lack of resources, lack of capability (i.e. good at writing code, but struggles to navigate the social complexities of shepherding a significant code change), clinically impaired ability to "stay the course" and "see things through" (e.g. ADHD), or maybe it was a collaborative effort and some of the parties dropped out for any of the aforementioned reasons.
I don't have a solution, but it does kinda suck that open source contribution processes are so dependent on instigators being the responsible party to seeing a change all the way through the pipeline.
I might be old, but more than 10 years ago, hardly anyone talked about UB in C and C++ programming. In the last 10 years, it is all the rage, but seems to add very little to the conversation. For example, if you program C or C++ with the Win32 API, there are loads of weird UB-ish things that seem to work fine.
If you take Rust at face value, than this to me seems like an obvious question to ask
Language designers admittedly should worry about constant breakage, but it's fine to have some churn, and we shouldn't be so concerned about it that it freezes everything.
This is not how compilers work. Optimization happens based on language semantics, not on what platforms do.
For example, cargo-vet and cargo-crev allow you to rely on others you trust to help audit dependencies.
> absolutely laughable c-strings that perform terribly
Not much being said here in 2025. Any good project will quickly switch to a tiny structure that holds a char* and a length. There are plenty of open source libs to help you.

> For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.
I'm sure you already know this, but you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.

> However, for the autovectorizer to do a good job you have to write code in a very special way
Can you give an example of this "very special way"?

The code has a C style to it, but that doesn't mean it wasn't actually written in Rust -- Rust deliberately has features to support writing this kind of code, in concert with safer, stricter code.
Imagine if we applied this standard to C code. "Zlib-NG is basically written in assembler, not C..." https://github.com/zlib-ng/zlib-ng/blob/50e9ca06e29867a9014e...
Every interaction I've had with a rust programmer has led me to believe they are a toxic community of cultists. It's unlike any programming community I've seen.
UB in C is often found where different real hardware architectures had incompatible behavior. Rather than biasing the language for or against different architectures they left it to the compiler to figure out how to optimize for the cases where instruction behavior diverge. This is still true on current architectures e.g. shift overflow behavior which is why shift overflow is UB.
> What language is the JVM written in?
I am pretty sure it is C++.

I like your second paragraph. It is well written.
For example, it is perfectly legal to dereference a vector pointer that references illegal memory if you mask the illegal addresses. This is a useful trick and common in e.g. idiomatic AVX-512 code. The mask registers are almost always computed at runtime so it would be effectively impossible to determine if a potentially illegal dereference is actually illegal at compile-time.
I suspect we’ll be hand-rolling unsafe SIMD for a long time. The different ISAs are too different, inconsistent, and weird. A compiler that could make this clean and safe is like fusion power, it has always been 10 years away my entire career.
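For illustration, that masked-load pattern maps onto Rust's AVX-512 intrinsics roughly like this (a sketch assuming x86_64 with avx512f; inherently unsafe, since the compiler can't reason about the mask):

    #[cfg(target_arch = "x86_64")]
    use core::arch::x86_64::*;

    // Loads only the lanes whose mask bit is set; masked-off lanes are
    // zeroed and their addresses are never dereferenced, so a pointer that
    // is only partially valid is fine at the hardware level.
    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx512f")]
    unsafe fn masked_load(mask: __mmask16, ptr: *const i32) -> __m512i {
        _mm512_maskz_loadu_epi32(mask, ptr)
    }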
This is such a widespread misunderstanding… one of the points of rust (there are many other advantages that have nothing to do with safety, but let’s ignore those for now) is that you can build safe interfaces, possibly on top of unsafe code. It’s not that all code is magically safe all the time.
Instead, all that functionality is written as Rust code in the standard library, such as Vec. This is what I mean by using unsafe code to "teach" the borrow checker: the language itself doesn't have any notion of growable arrays, so you use unsafe to define its semantics and interface, and now the borrow checker understands growable arrays. The alternative would be to make growable arrays some kind of compiler magic, but that's both harder to implement correctly and not generalizable.
> you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.
That's true and that's a great design pattern in C as well. But there are some crucial differences:
- Rust has no undefined behavior outside of unsafe blocks. This means you only need to audit unsafe blocks (and any invariants they assume) to be sure your program is UB-free. C does not have this property even if you code defensively at interface boundaries.
- In Rust, most of the invariants can be checked at compile time; the need for runtime asserts is less than in C.
- C provides no way to defend against dangling pointers without additional tooling & runtime overhead. For instance, if I write a dynamic vector and get a pointer to the element, there's no way to prevent me from using that pointer after I've freed the vector, or appended an element causing the container to get reallocated elsewhere.
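For instance, the use-after-realloc from that last point is rejected at compile time (a sketch; the compile error is the defense):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];
        v.push(4); // error[E0502]: cannot borrow `v` as mutable
                   // because it is also borrowed as immutable
        println!("{first}");
    }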
Rust isn't some kind of silver bullet where you feed it C-like code and out comes memory safety. It's also not some kind of high-overhead garbage collected language where you have to write unsafe whenever you care about performance. Rather, Rust's philosophy is to allow you to define fundamental operations out of small encapsulated unsafe building blocks, and its magic is in being able to prove that the composition of these operations is safe, given the soundness of the individual components.
The stdlib provides enough of these building blocks for almost everything you need to do. Unsafe code in library/systems code is rare and used to teach the language of new patterns or data structures that can't be expressed solely in terms of the types exposed by the stdlib. Unsafe in application-level code is virtually never necessary.
Inside that block, both yes and no. You have to enforce those nice guarantees yourself. Code that violates it will still crash.
But it can occur naturally in natural gas.
There was a bug open about it and the rationale was that no one with the expertise (some of these are quite arcane) was stepping up to do it. (edit: other comments in this thread suggest that this effort is now underway and first changes were committed a few weeks ago)
You can do safe SIMD using std::simd but it is nightly only at this point.
Like, when I say "use signal, it's secure", someone could respond "Ahh, but technically you can't prove the absence of bugs, signal could have serious bugs, so it's not secure, you fool", but like everyone reading this already knew "it's secure" means "based on current evidence and my opinion it seems likely to be more secure than alternatives", and it got shortened. Interpreting things as absolutes that are true or false is pointless debate-bro junk which lets you create strawmen out of normal human speech.
When someone says "1+1 = 2", and a debate-bro responds "ahh but in base-2 it's 10 you fool", it's just useless internet noise. Sure, it's correct, but it's irrelevant, everyone already knows it, the original comment didn't mean otherwise.
Responding to "safe Rust should never cause out-of-bounds access, use-after-free" with "ahh but we can't prove the compiler is safe, so Rust isn't safe, is it??" is a similar sort of response. Everyone already knows it. It's self-evident. It adds nothing. It sounds like debate-bro "I want to argue with you, so I'm saying something that's true, but we both already know it and it doesn't actually matter".
I think that allergic response came out, apologies if it was misguided in this case and you're not being a debate-bro.
Fully thought out and feature-complete is something that has hardly been happening since C++17.
If you want a clean crash instead of nondeterministic behavior, you need to use assert like in C, but it won't save you from compiler optimizations removing checks that are deemed useless (again, exactly like in C).
It's a bad comparison since CO doesn't smell, which is what makes it dangerous, while H2S is detected by our sense of smell at concentrations much lower than the toxic dose (in fact, its biggest danger comes from the fact that at dangerous concentrations it doesn't smell like anything, due to our receptors being saturated).
It's not what's being put in natural gas, but it wouldn't be that dangerous if we did.
That being said, most Rust programs don't ever need to use unsafe directly. If you go very low level or tune for performance, it might become useful, however.
Or if you're lazy and just want to stop the borrow checker from saving your ass.
I took 15 minutes to write one in Rust (a language I had just learned at that point) using a "that should work" approach and took second place, with some high-effort C implementations being slower and a highly optimized assembler variant taking first place.
Since then I have programmed a lot more in C and C++ as well (for other reasons) and gotten more experience. Rust is not automatically faster, but the defaults and standard library of Rust are so well put together that a common-sense approach will outperform most C code without even trying, and it does so while having type safety and memory safety. This is not nothing in my book and still extremely impressive.
The best thing about learning Rust, however, was how much I learned for all the other languages. Because what you learn there is not just how to use Rust, but how to program well. Understanding the way the Rust borrow checker works 1000% helped me avoid nasty bugs in C/C++ by realizing that I was violating ownership rules (e.g. by having multiple writers).
There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.
https://www.gnu.org/software/libc/manual/html_node/Range-of-...
> return -1; // or any value that indicates an error/overflow
-1 is a perfectly valid average for various inputs. You could return the larger type to encode an error value that is not a valid output or just output the error and average in two distinct variables.
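Sketched in Rust for consistency with the thread (a hypothetical function): widening before the sum sidesteps both the overflow and the need for a sentinel.

    // The sum of two i32 values always fits in i64, and the halved
    // result always fits back into i32, so no error value is needed.
    fn average(a: i32, b: i32) -> i32 {
        ((a as i64 + b as i64) / 2) as i32
    }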
AI and C seem like a match made in hell.
For example, he says they didn't set out to improve the code, but they were porting decades-old C code to Rust. Given the subject (TrueType font parsing and rendering), my guess would be that the original code did more memory copies when copying data out of the font data, because Rust makes it easier to safely avoid that (in which case the conclusion would be "C could be as fast, but with a lot more effort"), but it could also be that they spent a day figuring out what some code did, only to realize it wasn't necessary on anything after Windows 95, and stripped it out rather than porting it.
Personally, I would still rather use unsafe Rust than raw C, which has more edge cases. Also, when I'm not on the critical path, I can always use safe Rust.
People seem to disagree.
Unsafe Rust Is Harder Than C
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
I'm happy to share then. Here's my most recent encounter with a rustacean: https://x.com/_chjj/status/1829989494298460636
I asked if he/she/they had ever used the unsafe keyword. That was the response I got. It's usually some vile insult involving furry or transgender genitalia.
- Java 74.1%
- C++ 14.0%
- C 7.9%
- Assembly 2.7%
And those values have been increasing for Java with each OpenJDK release.
> Ethanethiol (EM), commonly known as ethyl mercaptan is used in liquefied petroleum gas (LPG) and resembles odor of leeks, onions, durian, or cooked cabbage
Methanethiol, commonly known as methyl mercaptan, is added to natural gas as an odorant, usually in mixtures containing methane. Its smell is reminiscent of rotten eggs or cabbage.
...but you can still call it "mercaptan" and be ~ correct in most cases.
Rust zlib is faster than zlib-ng, but the latter isn't a particularly fast C contender. Chrome ships a faster C zlib library which Rust could not beat.
Rust beat C by using pre-optimized code paths and then C function pointers inside unsafe. Plus C SIMD inside unsafe.
I'd summarize the article as: generous chunks of C embedded into unsafe blocks help Rust to be almost as fast as Chrome's C Zlib.
Yay! Rust sure showed its superiority here!!!!1!1111
Yes, if your code in Lang-X is faster than C, it's almost certainly a skill issue somewhere in the C implementation.
However, in the day-to-day, if I can make my code run faster in Lang-X than C, especially if I'm using Lang-X for only a couple of months and C potentially for decades, that is absolutely meaningful. Sure, we can make the C code just as fast, but it's not viable to spend that much time and expertise on every small issue.
Outside of "which lang is better" discussions on online forums, it doesn't matter how fast you can theoretically make your program, it matters how fast you actually make it with the constraints your business have (time usually).
Some codebases, you can grep for "unsafe", find no results, and conclude the codebase is safe... if you trust its dependencies.
This is not one of those codebases. This one uses unsafe liberally, which tells you it's about as safe as C.
"unsafe behaviour is clearly marked" seems to be a thought-stopping cliche in the Rust world. What's the point of marking them, if you still have them? If every pointer dereference in C code had to be marked unsafe (or "please" like in Intercal), that wouldn't make C any better.
So I called the poster out to show proof. So far there's none, except one Twitter post (because we all know that's the best technical discussion forum on the planet, clearly) which does not surprise me at all.
So they are the ones who get triggered by something that does not exist.
That is what is toxic.
If you go around claiming fantasies and people call you out then that falls more under curiosity and discussion. Not toxicity.
Toxic people are everywhere on the net. That's not an interesting insight. If you point us at some lunatic on Twitter who loses their marbles over everything, that's not interesting either.
Do you get trolled on actual technical forums though?
As for dependencies: zlib, zlib-ng and zlib-rs all obviously need some access to OS APIs for filesystem access if compiled with that functionality. At least for zlib-rs: if you provide an allocator and don't need any of the file IO, you can compile it without any dependencies (not even the standard library or libc, just a couple of core types). zlib-rs does have some testing dependencies, but I think that is fair. All in all: they use almost exactly the same external dependencies (i.e. nothing aside from libc-like functionality).
zlib-rs is a bit bigger by default (around 400KB), with some of the Rust machinery. But if you change some of that (e.g. panic=abort), use a nightly compiler (unfortunately still needed for the right flags) and add the right flags, both libraries are virtually the same size, with zlib at about 119KB and zlib-rs at about 118KB.
Update: it's how the std lib does it: https://doc.rust-lang.org/src/alloc/collections/linked_list....
Coming up with these niche examples of things you need unsafe for in order to discredit rust’s safety guarantees is just not interesting. What fraction of programmer time is spent writing custom linked lists? Surely way less than 1%. In most of the other 99%, Rust is very helpful.
> The result is a better performing and easier to maintain zlib-ng.
So they’re comparing a first pass rewrite against a variation of zlib designed for performance
EDIT: but I do agree that starting greenfield from an old code base is often a path towards performance.
I'm always reminded of this video, where the author writes the same program in Rust and Go.
https://www.youtube.com/watch?v=Z0GX2mTUtfo
> Now, the Rust version took me about five times as long as the Go version
> The Go one performed almost identically well
Now this was for netcode rather than number crunching. But I actually had a similar surprise with number crunching, with C# and C++. I wrote the same program (rational approximation of Pi), line for line, in both languages, and the C# version ran faster. Apparently C# aggressively optimizes hot code paths while running, whereas to get that behavior in C++, you need to collect profiler data and use a special compiler flag.
There is ISPC, a separate C-like programming language just for SIMD, but I don't understand why regular compilers can't generate high-quality vectorized code.
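As a sketch of one reason it's hard: the compiler is not allowed to reorder floating-point additions, so a float reduction has to stay serial unless you opt in to fast-math-style semantics, while the equivalent integer loop typically vectorizes fine:

    // Integer addition is associative, so LLVM can usually turn this
    // reduction into SIMD lanes plus a final horizontal add.
    fn sum_u32(xs: &[u32]) -> u32 {
        xs.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
    }

    // Reordering f32 additions changes the rounding, so without explicit
    // permission the compiler must add these one at a time, in order.
    fn sum_f32(xs: &[f32]) -> f32 {
        xs.iter().fold(0.0f32, |acc, &x| acc + x)
    }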
It is interesting how you can't see that you are inflating one nut case and extrapolating to an entire community.
We absolutely should, if someone claimed/implied-via-headline that naive C was natively as fast as hand-tuned assembly! This kind of context matters.
FWIW: I'm not talking about the assembly in zlib-rs, I was specifically limiting my analysis to the rust layers doing memory organization, etc... Discussing Rust is just exhausting. It's one digression after another, like the community can't just take a reasonable point ("zlib-rs isn't a good example of idiomatic rust performance") on its face.
Not necessarily—sometimes languages are especially poorly suited for tasks or difficult to hire for.
I have no idea what your definition of encapsulation is, but mine is not this.
It's really only encapsulated in the sense that if you have a finite and small set of unsafe blocks, you can audit them more easily and be pretty sure that your memory safety bugs are in there. This reality doesn't really exist much anymore because of how much unsafe is often used, and since you have to audit all of them, whether they come from a library or not, it's not as useful to claim encapsulation as one thinks.
I do agree in theory that unsafe encapsulation was supposed to be a thing, but i think it's crazy at this point to not admit that unsafe blocks turned out to easily have much more global effects than people expected, in many more cases, and are used more readily than expected.
Saying "scaling reasoning" also implies someone reasoned about it, or can reason about it.
But the practical problem is the same in both cases - someone got the reasoning wrong and nothing flagged it.
Wanna go search github for how many super popular libraries using unsafe had global correctness issues due to local unsafe blocks that a human reasoned incorrectly about, but something like miri found? Most of that unsafety that turned out to be buggy also was done for (unnecessary) performance reasons.
What you are saying is just something people tell themselves to make them feel okay about using unsafe all over the place.
If you want global correctness, something has to verify it, ideally not-human.
In the end, the thing C lacks is tools like miri that can be used practically with low false-positives, not "encapsulation" of unsafe code, which is trivially easy to perform in C.
Let's not kid ourselves here and end up building an ecosystem that is just as bad as the C one, but our egos refuse to allow us to admit it. We should instead admit our problems and try to improve.
Unsafe also has legitimate use cases in rust, for sure - but most unsafe code i look at does not need to exist, and is not better than unsafe C.
I'll give you an example: There are entire popular embedded bluetooth stacks in rust using unsafe global mutable variables and raw pointers and ..., across threads, for everything.
This is not better than the C equivalent - in fact it's worse, because users think it is safe and it's very not.
At least nobody thinks the C version is safe. It will often therefore be shoved in a binary that is highly sandboxed/restricted/etc.
It would be one thing if this was in the process of being ported/translated from C. But it's not.
Using intrinsics that require alignment and the API was still being worked on - probably a reasonable use of unsafe (though still easy to cause global problems like buffer overflows if you screwed up the alignment)
The bluetooth example - unreasonable.
The usual retort to these questions is 'well, the standard library uses unsafe code, so everything would need a disclaimer that it uses unsafe code, so that's a useless remark to make', but the basic issue still remains that the only clear boundary is whether a function 'contains' unsafe code, not whether a function 'calls' unsafe code.
If Rust did not have a mechanism to use external code then it would be fine because the only sources of unsafe code would be either the application itself or the standard library so you could just grep for 'unsafe' to find the boundaries.
It’s trivial to find examples of people in any community who are a bit off the rails, but you shouldn’t let that define your perception of the community, especially given the fact that you’re currently in a context where your thesis doesn’t have much to support it.
What do you mean by that?
There is plenty of hand-rolled assembly in low-level libraries, whether you look at OpenBLAS (17%), GMP (36%), BoringSSL (25%), WolfSSL (14%) -- all of these numbers are based on looking at Github's language breakdown (which is measured on a per-file basis, so doesn't count inline asm or heavy use of intrinsics).
There are contexts where you want better performance guarantees than the compiler will give you. If you're dealing with cryptography, you probably want to guard against timing attacks via constant-time code. If you're dealing with math, maybe you really do want to eke out as much performance as possible, autovectorization just isn't doing what you want it to do, and your intrinsic-based code just isn't using all your registers as efficiently as you'd like.
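For illustration, a sketch of the constant-time idea (real code should use a vetted crate such as `subtle`, since a clever enough optimizer can still undo hand-rolled versions):

    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false; // lengths are usually public, so this branch is fine
        }
        let mut diff = 0u8;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y; // accumulate differences, no data-dependent branch
        }
        diff == 0
    }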
But yes you are technically correct, congratulations.
If you're referring to my above post, I'm pointing out that you're having a very emotional reaction to what I'm saying. That's typically what I see from rust developers.
There are uses for this, especially since some code will run in environments where you can not simply handle it, but it's also just cleaner this way; you don't have to worry about the different behaviors between operating systems and possibly CPU architectures with regards to error recovery if you simply don't generate any.
Since there are these edge cases where it wouldn't be possible to handle faults easily (e.g. some kernel code) it needs to be considered unsafe in general.
a) are surprisingly nontrivial to get right,
b) have almost no practical uses, and
c) are only taught because they're conceptually nice and demonstrate pointers and O(1) vs O(n) tradeoffs.
Note that safe Rust has no problems with singly-linked lists or in general any directed tree structure.
It's also possible to go a step further and practice "panic-free" Rust, where you write code in such a way that it never links to the panic handler. It seems pretty hard to do, but it might be worth it sometimes, especially if you're in an environment where you don't have anything sensible to do on a panic.
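A minimal sketch of that style: lean on fallible APIs so no panic path is reachable in the first place (actually proving the panic handler isn't linked takes extra work, e.g. inspecting the binary or tools like the `no-panic` crate):

    // No indexing and no unwrap(): the "no input" case becomes a None.
    fn first_word(s: &str) -> Option<&str> {
        s.split_whitespace().next()
    }

    // checked_add returns None on overflow instead of ever panicking.
    fn checked_sum(xs: &[u32]) -> Option<u32> {
        xs.iter().try_fold(0u32, |acc, &x| acc.checked_add(x))
    }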
C has to make a syscall to the kernel which ultimately results in a BIOS interrupt to implement printf, which you need for the hello world program on page 1 of K&R.
Does that mean that C has no abstraction advantage over directly coding interrupts with asm? Of course not.
It is somewhat similar, actually, when someone states a negative opinion of the Rust community and the marketing around it. It is usually followed by those who say "you met the wrong people".
Of course, you won't find any examples in this thread xd.
For the record, I only picked Rust 5-ish years ago out of a 23 years of career. I know plenty of other languages. I was a skeptic at the start as well. Never generalized a pretty big group like you do though.
You should be ashamed.
This is not exactly true. Even in production code, unsafe precondition checks catch violations of these rules.
Here: https://doc.rust-lang.org/core/macro.assert_unsafe_precondit... And here: https://google.github.io/comprehensive-rust/unsafe-rust/unsa...
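A sketch of the pattern (hypothetical code, not the actual standard-library source): the documented preconditions are asserted at runtime, so a violation aborts loudly in a debug build instead of being silent UB:

    /// # Safety
    /// `p` must be non-null, aligned, and point to a valid `T`.
    unsafe fn read_checked<T>(p: *const T) -> T {
        debug_assert!(!p.is_null(), "precondition violated: null pointer");
        debug_assert!(p.is_aligned(), "precondition violated: misaligned pointer");
        unsafe { p.read() }
    }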
Maybe the reason I think that is because I've written Rust for a variety of purposes (web application, database bindings, high performance parser) so I account for the "register" of Rust that is appropriate without thinking about it.
https://en.wikipedia.org/wiki/Register_(sociolinguistics)
It might be that a simple description like the headline leads some people to believe they could write Rust the easy way and get code that's as fast as writing "Rust the hard way".
However, that is different than what you earlier said -- "It's... basically written in C.". I have actually written Rust programs where some parts were literally written in C and linked in -- in order to build functioning plugins -- and there is a world of difference with that.
Regarding:

> Discussing Rust is just exhausting. It's one digression after another, like the community can't just take a reasonable point ("zlib-rs isn't a good example of idiomatic rust performance") on its face.
I'm just not sure what to say to this. What do you expect from me, here?
What I dislike, if we can even call it that, is that you misrepresent intentionally and are falling victim to extremely easy to avoid ego trips like claiming that your anecdotal evidence is universal.
That is not OK and is not intellectually fair.
Be intellectually fair. If you are not then I posit that you don't belong in tech as you have no scientific and analytic approach to things. That's my takeaway here.
You have left an extensive record of your bias in multiple comments. Including purposeful deflection and projection, as you try to make it out that I react emotionally. Which is false.
You seem like a lost cause though. So bye.
I don't write software targetting nightly, for good reason.
What is actually funny in our exchanges is that I don't even actively work with Rust anymore. I work with multiple languages, it included. I've met very smart, humble and fairly hardcore [Rust] devs from whom I learned a lot and got severely humbled as a result (as I was under the illusion that there's not much more I could learn in programming back then).
My other comments are fairly trivial English. Surely you can very easily make something out of them.
You design an abstraction which is unsafe inside, and exposes a safe API to users. That is really how unsafe is meant to be used.
Of course the standard library uses unsafe. This is where you want unsafe to be, not in random user code. That's what it was made for.
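A hypothetical sketch of that shape - unsafe inside, safe outside, with the invariant enforced by the only entry point:

    pub struct Buffer {
        data: Vec<u8>,
    }

    impl Buffer {
        pub fn get(&self, i: usize) -> Option<u8> {
            if i < self.data.len() {
                // SAFETY: `i` was bounds-checked on the line above.
                Some(unsafe { *self.data.get_unchecked(i) })
            } else {
                None
            }
        }
    }

No caller can reach the `get_unchecked` call with an out-of-range index, so the `unsafe` never leaks out of the module.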
The `memchr` crate, for example, has an entirely safe API. Nobody needs to use `unsafe` to use any part of it. But its internals have `unsafe` littered everywhere. Could the crate have bugs that result in UB due to a particular use of the `memchr` API? Yes! Doesn't that violate encapsulation? No! A bug inside an encapsulated boundary does not violate the very idea of encapsulation itself.
Encapsulation is about blame. It means that if `memchr` exposes a safe API, and if you use `memchr` and you get UB as a result of some `unsafe` code inside of `memchr`, then that means the problem is inside of `memchr`. The problem is definitively not with the caller using the library. That is, they aren't "holding it wrong."
I'm surprised that someone with as much experience as you is missing this nuance. How many times have you run into a C library API that has UB, you report the bug and the maintainer says, "sorry bro, but you're holding that shit wrong, your fault." In Rust, the only way that ought (very specifically using ought and not is) to be true is if the API is tagged with `unsafe`.
Now, there are all sorts of caveats that don't change the overall point. "totally safe transmute" being an obvious demonstration of one of them[1] by fiddling with `/proc/self/mem`. And of course, Rust does have soundness bugs. But neither of these things change the fundamental idea of encapsulation.
And yes, one obvious shortcoming of this approach is that... well... people don't have to follow it! People can lie! I can expose a safe API, you can get UB and I can reject blame and say, "well you're holding it wrong." And thus, we're mostly back into how languages like C deal with these sorts of things. And that is indeed a bummer. And there are for sure examples of that in the ecosystem. But the glaring thing you've left out of your analysis is all of the crates that don't lie and specifically set out to provide a sound API.
The great thing about progress is that we don't have to be perfect. I'm really disappointed that you seem to be missing the forest for the trees here.
[1]: https://github.com/ben0x539/totally-safe-transmute/blob/main...
Bugs happen; they're bound to. It's more: what is enforcing the Rust language guarantees, and how do we know it's enforcing them with reasonably high accuracy?
I feel that can only happen as Rust itself becomes (or perhaps it meaningfully already is) written in pure, 100% safe Rust. At which point, I believe the matter will be largely settled.
Until then, I don't think it's unreasonable for someone to ask how it verifies its assertions, is all.
Put another way, there's no issues with a library using its own heap if it wants to.
That's not the case since the late 1990s. Other than during early boot, nobody calls into the BIOS to output text, and even then "BIOS interrupt" is not something normally used anymore (EFI uses direct function calls through a function table instead of going through software interrupts).
What really happens in the kernel nowadays is direct memory access and direct manipulation of I/O ports and memory mapped registers. That is, all modern operating systems directly manipulate the hardware for text and graphics output, instead of going through the BIOS.
Well, no, actually. At least, not in an (IMHO) useful way.
I can break your safe API by getting the constraints wrong on unsafe code inside that API.
Also, unsafe usage elsewhere is not local. I can break your impossible-to-misuse API through an unsafe API that someone else used elsewhere, completely outside my control, and then wrapped in a safe API. Some of these are, of course, bugs in Rust or the compiler. I'm just saying I've yet to hear anyone take the view that the ability to do this is always a bug in the language/compiler and will be destroyed on sight.
Beyond that:
To the degree this is useful encapsulation for tracking things down, it is only useful when the amount is small and you can reason about it.
This is simply no longer true in any reasonably sized rust app.
As a result, as you say, it is then only useful for saying who is at fault in the sense of whether i'm holding it wrong. To me, that is basically worthless at scale.
"I'm surprised that someone with as much experience as you is missing this nuance."
I don't miss it - I just don't think it's as useful as claimed.
This level of "encapsulation", which provides no real guarantee except "the set of bugs is caused somewhere by the set of unsafe blocks" is fairly unhelpful at large scale.
I have audited hundreds of thousands of lines of rust code to find bugs caused by unsafe usage. The thing that made it at all tractable was not this form of encapsulation - it was, in fact, 100% worthless in doing that at scale, because it was still tons and tons of code to try to reason about, across lots of libraries and dependencies. As you say, it only helps assign blame once a bug is found, and blame is not that useful at scale. It does not make the code safer. It does not make the bug easier to track down. It only declares, after i've spent all the time, that it is not my fault. But also nobody has to do anything anyway.
For small programs, this buys you something, as i said, as long as the set of unsafe blocks is small enough to be tractable to audit, cool. You can find bugs easier. In that sense, the tons of hobby programs, small libraries, etc, are a lot less likely to have bugs when written in rust (modulo their dependencies on unsafe code).
But like, your position seems to be that it is fairly useful that i can go to a library and tell them "your crap is broken", and be right about it. To me, this does not buy a lot in the kinds of large complex systems rust hopes to replace in C/C++. (it also might be false)
In actually tracking down the bug, which is what i care about, the thing that was useful is that i could run miri and lots of other things on it and get useful results that pointed me towards the most likely causes of issues.
So don't get me wrong - this is overall better than C, but writing lots of rust (i haven't written C/C++ at all in a while, actually) I still tire of the constant claims of the amount of rust safety. You are the rare rust person who understand the nuance and is willing to admit there is any flaw or non-perfection whatsoever.
As you say, there are lots of things that ought to be true in rust that are not. You have a good understanding of this nuance, and where it fails.
But it is you, i believe, who is missing the forest for the trees, because most do not have this.
I'll be concrete and i guess controversial in a way you are 100% free to disagree with, but might as well throw a stake in the ground - it's hacker news, might as well have fun making a comment someone can beat me over the head with later: If nothing changes, and the rust ecosystem grows by a factor of 100x while changing nothing about how it behaves WRT unsafe usage, and no tooling gets significantly better, Rust will not end up better than C in practice. I don't mean it will not have fewer bugs/vulnerabilities - i think it would, by far!
But whether you have 100 billion of them, or 1 billion of them, and thus made a 100x improvement, i don't think matters too much when it's still a billion :)
Meanwhile, if the rust ecosystem got worse about unsafe, but made tools like Miri 50x faster (and made more tools like it that help verification in practice), it would still end up better than C.
To me - it is the tooling, and not this sort of encapsulation, that will make a practical difference or not at scale.
The idea that you will convince people not to write broken unsafe code, in ways that breaks safe APIs, or that the ability to assign blame matters, is very strange to me, and is no better than C. As systems grow, the likelihood of totally safe transmutes growing in them is basically 100% :)
FWIW - I also agree you don't have to be perfect, nor do I fault rust for not being perfect. Instead, i simply disagree that at scale, this sort of ability to place blame is useful. To me, it's the ability to find the bugs quickly and as automated as possible that is useful.
I need to find the totally safe transmutes causing issues in my system, not hand it to someone else after determining it couldn't be my fault.
The point is that you don't need to. The guarantees compose.
> The usual retort to these questions is 'well, the standard library uses unsafe code
It's not about the standard library, it's much more fundamental than that: hardware is not memory safe to access.
> If Rust did not have a mechanism to use external code then it would be fine
This is what GC'd languages with runtimes do. And even they almost always include FFI, which lets you call into arbitrary code via the C ABI, allowing for unsafe things. Rust is a language intended to be used at the bottom of the stack, and so has more first-class support, calling it "unsafe" instead of FFI.
That used to be the case for 32-bit platforms, but most 64-bit platforms in which GNU libc runs use the LP64 model, which has 32-bit int and 64-bit long. That documentation seems to be a bit outdated.
(One notable 64-bit platform which uses 32-bit for both int and long is Microsoft Windows, but that's not one of the target platforms for GNU libc.)
I wonder whether you believe these people would ever be endorsed by the faces of the Rust language or whether the majority of people in the community would behave so. In my experience (not to minimise yours), the Rust community and FOSS in general, are some of the most open and welcoming communities online, albeit with clear exceptions
Yes, there is a boundary, and usually it's either the function itself, or all methods of an object. For instance, a function I wrote recently goes somewhat like this:
fn read_unaligned_u64_from_byte_slice(src: &[u8]) -> u64 {
    assert_eq!(src.len(), size_of::<u64>());
    unsafe { std::ptr::read_unaligned(src.as_ptr().cast::<u64>()) }
}
The read_unaligned function (https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html) has two preconditions which have to be checked manually. When doing so, you'll notice that the "src" argument must have at least 8 bytes for these preconditions to be met; the "assert_eq!()" call before that unsafe block ensures that (it will safely panic unless the "src" slice has exactly 8 bytes). That is, my "read_unaligned_u64_from_byte_slice" function is safe, even though it calls unsafe code; the function is the boundary between safe and unsafe code. No callers of that function have to worry that it calls unsafe code in its implementation.

On modern architectures you shouldn't use either unless you have an extremely niche use case. They are not general-purpose data structures anymore in a world where cache locality is a thing.
This doesn't make any sense at all as a broader point. Of course you can break the safe API by introducing a bug inside the implementation! I honestly just cannot figure out how you have a misunderstanding of this magnitude, and I'm forced to conclude that we are mis-communicating at some level.
I did read the rest of your comment, and the most significant point I can take away from it is that you're making a claim about scale. I think the dissonance introduced with comments like the one above makes it very hard for me to trust your experience here and the conclusions you've drawn from it. But I will note that whether Rust's safety story scales is from my perspective a different thing entirely from the factual claim that Rust enables safe encapsulation of `unsafe` usage.
You may say that just because Rust enables safe encapsulation doesn't mean programmers using Rust actually follow through with that in practice. And yes, absolutely, it doesn't. You can't derive an is from an ought. But in my experience, it totally does. I do work on lots of "hobby" stuff in Rust (although I try to treat it professionally, I just mean that I am not directly paid for it beyond donations), but I am also paid to write Rust too. I do not have your experience with Rust at scale, so I cannot refute it. But you've said enough questionable things here that I can't trust it either.
Literally that's what you're saying: they have a different opinion therefore they're zealots!
Unless people posting an opinion is itself zealotry... but in that case why are you complaining about the replies and not the comments they reply to?
> Safe Rust: memory safe, no undefined behavior possible. Unsafe Rust: can trigger undefined behavior if preconditions are violated.
So unsafe Rust, from a UB perspective, is no different than C/C++. If preconditions are violated, UB can occur, affecting anywhere in the program. It's unclear how the compiler could check anything about preconditions in a block explicitly used to say that the developer is the one upholding them.
By claiming they are hard-working, you are generalizing. It is usually only a couple or a few who are actually hard-working people - but then again, now I am generalizing, because I do not know, and I do not wish to claim to know.
Judging by your comments, e.g. "you should be ashamed" (for simply expressing his dislike of YOUR community), you sound exactly like a zealot.
Why do you feel the need to claim moral superiority and tell someone to be ashamed just for simply expressing their dislike of your community? And while we are at it, he probably dislikes the community because of people like you. We have gone full circle.
I am not even going to bother commenting on a lot of things you have said, but:
> My other comments are fairly trivial English. Surely you can very easily make something out of them.
Sounds condescending as well, but this is a minor nitpick. :)
In C++ I could do something like:
    object* x_ptr = new object;
    object* y_ptr = x_ptr;

    copy(x_ptr, y_ptr);
In safe Rust there is no way to call the function in question if that sort of aliasing has happened. This means that if you get a bug from your copy, it's in the copy method - the possibility that it's been used inappropriately has been eliminated.
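For instance, a sketch mirroring the C++ above (hypothetical `Object`/`copy_into` names):

    struct Object { data: u64 }

    fn copy_into(dst: &mut Object, src: &Object) {
        dst.data = src.data;
    }

    fn main() {
        let mut x = Object { data: 1 };
        let dst = &mut x;
        // copy_into(dst, &x); // error[E0502]: cannot borrow `x` as immutable
        //                     // because it is also borrowed as mutable
        let src = Object { data: 2 };
        copy_into(dst, &src); // distinct objects: compiles fine
    }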
It reduces the search space for problems from: everywhere that created a pointer that is ultimately used in the copy, to: the copy function itself.
It reduces the number of programmers who have to keep the memory semantics of that copy in their head from "potentially everyone" to just "those who directly implement and check copy".
Pretending that has no value is absurd.
Obviously the code isn't going anywhere, and obviously we DO have reliable code we've built with C. But acting like C and Rust deliver equivalent value is simply farcical: you choose C for rapid development and cheap devs (or some other niche concern, like using an obscure embedded arch), and you choose rust to solve the problems that C introduced.
Using raw pointers in unsafe Rust is easier than using raw pointers in C.
The solution is to not manipulate references in unsafe code. The problem is that in old versions of Rust this was tricky. Modern versions of Rust have addressed this by adding first-class facilities for producing pointers without needing temporary references: https://blog.rust-lang.org/2024/10/17/Rust-1.82.0.html#nativ...
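A minimal sketch of that syntax, using a packed struct where a plain reference would be rejected outright because it can't be aligned:

    #[repr(packed)]
    struct Header {
        tag: u8,
        len: u32, // misaligned because of the packed layout
    }

    fn main() {
        let h = Header { tag: 1, len: 42 };
        // `&h.len` would not compile: references must be aligned.
        let p: *const u32 = &raw const h.len; // no intermediate reference
        assert_eq!(unsafe { p.read_unaligned() }, 42);
    }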
You can implement that linked list just once, audit the unsafe parts extensively, provide a fully safe API to clients, and then just use that safe API in many different places. You don't need thousands of project-specific linked list reimplementations.
Is it? I've written hundreds of thousands of lines of production Rust, and I've only sparingly used unsafe. It's more common in some domains than others, but the observed trend I've seen is for people to aggressively encapsulate unsafe code.
Unsafe Rust is quite difficult to write correctly. (The &mut provenance rules are a bit scary!) But once a safe abstraction has been built around it and the unsafe code has passed Miri, in practice I've seen people be able to not worry about it any more.
By the way I maintain cargo-nextest, and we've added support for Miri to make its runs many times faster [1]. So I'm doing my part here!
If you're writing your program in C, you're afraid of shooting yourself in the foot and introducing security vulnerabilities, so you'll naturally tend to avoid significant refactorings or complicated multithreading unless necessary. If you have Rust's memory safety guarantees, Go's channels and lightweight goroutines, or the access to a test runner from either of those languages, that's suddenly a lot less of a problem.
The compiler guarantees you get won't hurt either. Just to give a simple example, if your Rust function receives an immutable reference to a struct, it can rely on the fact that a member of that struct won't magically be mutated by a call to some random function through spooky action at a distance. It can just keep it on the stack / in a callee-saved register instead of fetching it from memory at every loop iteration, if that's more optimal.
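A sketch of what that enables (hypothetical names):

    struct Config {
        scale: f64,
    }

    // `cfg` is a shared reference and `data` a distinct mutable one, so the
    // optimizer may keep `cfg.scale` in a register for the whole loop rather
    // than reloading it from memory on every iteration.
    fn apply(cfg: &Config, data: &mut [f64]) {
        for x in data.iter_mut() {
            *x *= cfg.scale; // cannot change behind our back mid-loop
        }
    }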
Then there's the easy access to package ecosystems and extensive standard libraries. If there's a super popular do_foo package, you can almost guarantee that it was a bottleneck for somebody at some point, so it's probably optimized to hell and back. It's certainly more optimized than your simple 10-line do_foo function that you would have written in C, because that's easier than dealing with yet another third-party library and whatever build system it uses.
The tooling and the encapsulation go hand in hand.
> The idea that you will convince people not to write broken unsafe code, in ways that breaks safe APIs, or that the ability to assign blame matters, is very strange to me, and is no better than C. As systems grow, the likelihood of totally safe transmutes growing in them is basically 100% :)
To be honest this doesn't track with my experience at all. Unsafe just isn't that commonly used in projects I contribute to. When it is, it is aggressively encapsulated.
Using this I can statically compile a cross-compiler. Total size uncompressed 169.4MB.
I use GCC to compile zlib and a wide variety of other software. I can build an operating system from the ground up.
Perhaps someday during my lifetime it will be possible to compile programs written in Rust using inexpensive computers with modest amounts of memory, storage and relatively slow CPUs. Meanwhile, there is C.
I think exegesis is a skill you need to hone further.
Rust very much can emulate this with `break` + nested blocks - but not if you also add in `goto` to previous branches.
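A minimal sketch of the label-break-value form (stable since Rust 1.65), which covers the forward-jump uses of `goto`:

    fn classify(n: i32) -> &'static str {
        let label = 'done: {
            if n < 0 {
                break 'done "negative"; // jumps forward, past the rest
            }
            if n == 0 {
                break 'done "zero";
            }
            "positive"
        };
        label
    }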
However, keep in mind that zstd also needs much more memory. IIRC, it uses by default 8 megabytes as its buffer size (and can be configured to use many times more than that), while zlib uses at most 32 kilobytes, allowing it to run even on small 16-bit processors.
Kidding aside the 150-comment Unsafe Rust subthread was inevitable.
Whoa. This might be the kick in the ass I needed to give cargo-nextest a whirl in my projects. Miri being slow is the single biggest annoyance I have with it!
No different than how I asked the Go community how it could produce binaries for all major platforms it supports (i.e. you don't have to compile your Go code on Linux for it to work on Linux, you only have to set a flag - with the exception, if I recall correctly, of CGO dependencies, but that's a wild horse anyway).
It's not just "a sufficiently smart compiler", without completely unrealistic (as in "halting problem" unrealistic, in the general case) "smartness".
So no, C is inherently slower than some other languages.
Yes — in C you can skip the bounds-checks and allocation, because you can convince yourself they aren't needed; the problem is you may be wrong, either immediately or after later refactoring.
In other memory-safe languages you don't risk the buffer overrun, but it's likely you'll get the bounds checks and allocation, and you have the overhead of GC.
Rust is close to alone in doing both.
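A sketch of the usual shape: the iterator carries the bounds proof, so there's no per-element check and no allocation:

    fn saxpy(a: f32, xs: &[f32], ys: &mut [f32]) {
        // zip stops at the shorter slice, so there's no `xs[i]`/`ys[i]`
        // indexing and no panic path for the optimizer to keep around.
        for (y, &x) in ys.iter_mut().zip(xs.iter()) {
            *y += a * x;
        }
    }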
In Rust, the CPU exception resulting from a stack overflow is considered safe. The compiler uses stack probing to ensure that as long as there is at least one page of unmapped memory below the stack (guard page), the program will reliably fault on it rather than continuing to access memory further below. In most environments it is possible to set up a guard page, including Linux kernel code if CONFIG_VMAP_STACK is enabled. But there are other environments where it’s not, such as WebAssembly and some microcontrollers. In those environments, the backend would have to add explicit checks to function prologs to ensure enough stack is available. I say “would have to”, not “does”: I’ve heard that on at least the microcontrollers, there are no such checks and Rust is just unsound at the moment. Not sure about WebAssembly.
Meanwhile, Go uses CPU exceptions to handle nil dereferences.
That said, I actually entirely forgot Go catches nil derefs in a segfault handler. I guess it's not a big deal since Go isn't really suitable for free-standing environments where avoiding CPU exceptions is sometimes more useful, so there's no particular reason why the runtime can't rely on it.
[1] https://bsky.app/profile/lukaswirth.bsky.social/post/3lkg2sl...
average(INT_MAX, INT_MAX) should return INT_MAX, but it will get that wrong and return -1.
average(0,-2) should not return a special error-code value, but this code will do just that, making -1 an ambiguous output value.
Even its comment is wrong. We can see from the signature of the function that there can be no value that indicates an error, as every possible value of int may be a legitimate output value.
It's possible to implement this function in a portable and standard way though, along the lines of [0].
[0] https://stackoverflow.com/a/61711253/ (Disclosure: this is my code.)
Opening stdout with file handle 1 is not guaranteed safe by the compiler. There's an "unsafe" somewhere in there.
Fair, I appreciate the call-out and it's a valid one.
> Judging by your comments, e.g. "you should be ashamed" (for simply expressing his dislike of YOUR community), you sound exactly like a zealot.
It's not that. I said he should be ashamed because he doubled down on generalizing. He even said he does that a lot. To me, if you work in tech you should be more analytical and more unforgiving towards your own assessments. We all thought the bug was in X but it turned out it was in Y, right? That's what I called out.
As you yourself pointed out, we don't truly know how much of the community is genuinely nice and hard-working, which I agree is an accurate call for a balanced take.
My problem is the outright negative generalization. I was in the mood and didn't leave him alone about it. He eventually seems to have admitted that he only demonstrated his own anecdotal evidence. I disengaged at that point because that's a valid way to exit a discussion... though I still would worry about what kind of people he communicated with, if he had such an overwhelmingly negative experience, and only with the community's most lunatic members to boot.
You are free to think of me as a zealot but I'd think that's an emotional and unfair reaction and would ask you to revise it. My comments were not a stubborn push-back, but a call to being objective.
> Why do you feel the need to claim moral superiority and tell someone to be ashamed just for simply expressing their dislike of your community? And while we are at it, he probably dislikes the community because of people like you. We have gone full circle.
I claimed analytical superiority, not a moral one. I've met Rust zealots. I've met Golang and (oh boy are they MANY) C/C++ zealots. Even my favorite Elixir has some weird people that think everything should be written with it.
The difference between me and the poster you seem to defend a bit emotionally is that I don't claim my outlier negative experiences are the norm. He did that. I did not.
As for the full circle thing: I ain't turning the other cheek. I don't owe grace to people who are rudely generalizing. I am aware many people would assess me much better if I just turned the other cheek. I know. But I choose not to abide by those expectations. Sadly this leads to people like yourself branding me a zealot. Regrettable. But it's ultimately your loss for missing out on interesting, informed, and unbiased discussions with me.
Feel free to check my comment history. I am not always super level-headed but I always look for the truth.
> Sounds condescending as well, but this is a minor nitpick. :)
Couldn't resist, admittedly. See above. ;)
And it's not "my" community. I don't belong to a single one so I don't emotionally defend any of them.
I mean, by that logic, people of my nationality would have to be fenced off and never allowed in other countries... because we do indeed have thousands of nasty scammers out there in the world.
99.99999% of us are chill, work, pay taxes, have fun etc.
So generalizations like that are what moved me to start pursuing the guy and not leave him alone until he ultimately said "it's just my anecdotal experience" (and stopped claiming it's universally true) - because, you know, we can pick ANY group, find several lunatics, and claim the group is bad in this or that way.
As said to another guy a few minutes ago -- I can get such "opinions" in every bar. I come to HN for better discussions than this.
There is still plenty in my non-embedded stuff, but a fair amount of it is hardware-adjacent (i.e. i have to drive things like relay cards, just from a desktop machine), to be fair.
But i've found plenty of broken unsafe in things like, uh, constraint solvers.
I would agree that useful and successful rust projects aggressively encapsulate (and attempt to avoid) unsafe usage.
I will still maintain my belief that this will not be enough over time and scale.
My suggestion would be - if we are ever in the same place, let's just grab coffee or something.
In the end - i suspect we are just going to find we have different enough experiences that our views of safe encapsulation and its usefulness are very different.
Let's put that aside for a second - I'll also take one more pass at the original place we started, and then give up:
To go back all the way to where we started, the comment i was originally replying to said "No, C lacks encapsulation of unsafe code. This is very important. Encapsulation is the only way to scale local reasoning into global correctness."
So we were in fact talking about scale, and more particularly how to scale to global correctness - not really whether rust enables safe encapsulation, but whether encapsulation itself enables local reasoning to scale to global correctness (in theory or in practice).
My view here, restated more succinctly, is "their claim that encapsulation is the only way to scale local reasoning to global correctness is emphatically wrong" (both in theory and practice).
My argument there remains simple: Tooling is what enables you to scale local reasoning to global correctness, not encapsulation.
Putting aside how useful or not it is otherwise for a second, encapsulation, by itself, does not enable you to reason your way from local results to global results soundly at all - for exactly the reason you mention in the first sentence here - bugs in local correctness reasoning can have global correctness effect. Garbage in, garbage out. Encapsulation does not wave a wand at this and make it go away[1]. There are lot of other reasons, this is just the one we went down a bit of a rabbit hole on :)
Instead, it is tooling that lets you scale. If you have tooling that catches 95+% of local reasoning errors (feel free to choose your own bar), you can almost certainly parlay that into high-percent global correctness, regardless of whether anything is encapsulated at all or not.
Now: If encapsulation enables an easier job of that tooling, and i believe it helps a lot, fwiw, then that's useful. But it's the tooling you want, not the encapsulation. Again, concretely: If I could not safely encapsulate anything, but had tooling that caught 100% of local reasoning issues, i would be much better off than having 100% safely encapsulated code, but no tooling to verify local or global reasoning. This is true (to me) even if you lower the "catches 100% of local reasoning issues" down significantly.
[1] FWIW, i also don't argue that this problem is particular to rust. It's not, of course. It exists everywhere. But i'm not the one claiming that rust will enable you to scale local reasoning to global correctness through encapsulation :P
The same is true of every programming language. There might be bugs in clang or gcc so how can we prove that they actually follow the C++ spec? We can’t. rustc is no different, but nobody ever claimed it was, so why hold it to a higher standard than clang?
That won't occur on an 'LP64' platform, [0] but we should aim for proper portability and conformance to the C language standard.
[0] https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_m...
Only if you actively disable the panics that are triggered when unsafe preconditions are violated. In most code, the program will crash instead. Enabling panic-on-precondition-violation by default in production code was done last year, IIRC.
> Its unclear how the compiler could check anything about preconditions
It can't. This is done at runtime, by default and without manually needed programmer interaction.
You can see an example of this in the `ptr` module, here: https://doc.rust-lang.org/beta/src/core/ptr/mod.rs.html#1071
Some are only enabled for `debug_assert` (which is enabled by default), see `ptr::read`, here: https://doc.rust-lang.org/beta/src/core/ptr/mod.rs.html#1370
This is not insignificant.
Remember xz? That could have been a disaster.
That the language includes a package manager that fetches an assortment of libraries from who knows whom on demand doesn't exactly inspire confidence in the process to me. Alice's secure AES implementation might bring Eve's string padding function along for the ride.
Rust(TM) the language might be (memory) safe in theory but I have serious issues (t)rusting (t)rust and anything built with it.
That's fair. I was focusing more on the factual aspect of "Rust enables encapsulating `unsafe`." But you're right, this statement is making a bigger claim than that, and it crosses over into something that is a (in theory) testable opinion.
I do agree with it though. But I recognize that it is a different claim than the one I was putting forward as factual.
I think for this, I would say that my experience with Rust has demonstrated that encapsulation is working at some non-trivial scale. The extent to which it will continue to scale depends, in part, on whether people writing Rust prioritize soundness. In my bubble, this prioritization is extremely high. But will what is arguably a cultural norm extend out to all Rust programmers everywhere?
I legitimately don't know. This is why I was one of the first (but not the first) people to make a stink about improper `unsafe` usage inside the Actix project some years ago. It was because I perceived the project as specifically flouting the cultural norm and rejecting soundness as a goal to strive for. I do indeed see this as an essential piece of what Rust brings to the table, and for it to succeed in its goals, we have to somehow figure out how to maintain the cultural norm that safe APIs cannot be used in a way that leads to UB.
I think where you and I differ is both in what we've seen (it sounds like you've seen evidence of this cultural norm eroding) and what we consider encapsulation busting. I'm not at all worried about bugs in `unsafe` code. Those are going to happen, and yes, they will lead to safe Rust having UB. But those are "just" bugs. The vastly more important thing to me is intent and where blame is assigned when UB happens. If blame starts shifting to the safe code, then that will indicate the erosion of that cultural norm.
As for tooling, I think it's vital to making sure safe encapsulations are correct, but I don't see it as having a significant impact on the norm.
Then again, these are the days in which even some of the strongest cultural norms we've had (in the United States anyway) have been eroding. So maybe building a system on top of one is folly.
https://zig.news/kristoff/building-sqlite-with-cgo-for-every...
How do you know this exactly?
However, even at runtime it can't do anything to say whether (excuse the C pseudocode) `*(uint32_t*)0x1C00 = 0xFE` is a valid memory operation. On some systems, in some cases, it might be.
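The same point in Rust terms, with a made-up register address:

    // On some bare-metal target this is a perfectly valid register write;
    // no general-purpose checker can know that. 0x1C00 is hypothetical.
    fn set_device_flag() {
        let reg = 0x1C00 as *mut u32;
        unsafe {
            reg.write_volatile(0xFE); // valid only if the hardware maps this address
        }
    }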
https://docs.adacore.com/spark2014-docs/html/ug/en/usage_sce...
Look at the table after this paragraph:
> SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically. As summarized in the following table, Ada provides strict syntax and strong typing at compile time plus dynamic checking of run-time errors and program contracts. SPARK allows such checking to be performed statically. In addition, it enforces the use of a safer language subset and detects data flow errors statically.
This is the documentation (namely SPARK User's Guide).
As for what SPARK is: https://learn.adacore.com/courses/intro-to-spark/chapters/01..., so you will be able to see (if you read further), that Ada alone may suffice for the majority of the cases, as for many things you do not even need SPARK to begin with.
Many courses for both Ada and SPARK are available here: https://learn.adacore.com/index.html
There are very good reasons for why Ada is used in critical systems, especially, but not limited to avionics and railway systems, see more at https://www.adacore.com/industries.
What? Where did you get that impression?
> But in any case it seems like its just doing some number of asserts to validate some preconditions
Yeah, like C code normally would, just in the STD in this case.
Maybe the core is that i don't understand why you agree with it :)
Maybe your definition of global correctness is different?
Maybe you are thinking of properties that are different than i am thinking of?
To me, for most (IMHO useful) definitions of global correctness, for most properties, the claim is provably false.
For me, local and global correctness that is useful at scale is not really "user-asserted correctness modulo implementation bugs".
Let's take a property like memory safety and talk about it locally and globally.
Let's just remove some nuance and say lots of these forms of encapsulation can be thought of as assertions of correctness wrt memory safety (for this example, obviously, there are more things it asserts, and it's not always memory safe in various types of encapsulation) - i assert that you don't have to worry about this - i did, and i'm sure it's right :)
This assertion, once wrong in a local routine, makes a global claim that "this program is memory safe" now incorrect. Your local correctness did not scale to global correctness here, because your wrong local assertion led to a wrong global answer.
Tooling would not have done this.
Does it matter? maybe, maybe not! That's the province of creative security researchers and other folks.
My office mate at IBM was once tasked (eons ago) with computing the probability that a random memory bit flip would actually cause a program to misbehave.
Obviously, you can go too far, and end arguing about whether the cosmic rays affecting your program really violate your proof of correctness :)
But for a property like this, i don't want to rely on norms at scale. Because those norms generate mostly assertions of correctness. Once i've got tons and tons of assertions, and nobody has actually proved anything about them, that's a house of cards. Even if they are diligent and right 99% of the time, if you have 100000 of them, that's, uh, 1000 that are wrong. And as discussed, it only takes one to break global correctness.
If you want all 100k to be correct with 90% probablity, you'd need people to be 99.9999% correct. That seems unlikely :)
I don't mean that i'm not willing to accept the norm is better - i am. I certainly would agree the average rust program is more bug free and more safe than C ones. But i've seen too much at scale to not want some mechanism of verifying that norm, or at least a large part of it.
As an aside, there are also, to me, properties that are useful modulo implementation bugs. But for me, these mostly fall into proving algorithmic correctness.
IE it's useful to prove that a lock-free algorithm always makes progress, assuming someone did not screw up the implementation. It's separately useful to be able to prove a given implementation is not screwed up, but often much harder.
As for norms - I have zero disagreement that rust has better norms overall, but yes, i've seen erosion. I would recommend, for example, trying to do some embedded rust programming if you want to see an area where no rust norms seem to exist under the covers.
Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not like "oh if you think about this very hard it's broken".
It often feels like they just wanted to make the errors go away, so they put it in an unsafe block, and then didn't want to have to mark everything as unsafe, so they "encapsulated" it. I wish I was joking.
These libraries are often the de-facto way to achieve something (like bluetooth support). They are not getting better; they are getting copied, and these pieces reused in chunks, causing the same problems elsewhere. And FWIW, none of these needed much if any unsafe at all (interacting with a bluetooth controller is not as unsafe as it seems - it is mostly just speaking to an embedded uart and issuing it some well-specified commands, so you probably need unsafe to deal with the send/receive, but not much else).
I can give you links and details privately, i don't really want to sort of publicly shame things for the sake of this argument :)
There are very well thought out and done embedded libraries mind you, but uh, they are the minority.
This is not the only area, mind you, but it's an easy one to poke.
All norms fail over time, and you have to plan for it. You don't want to rely on them for things like "memory safety" :)
Good leadership, mentoring, etc. makes them fail slower, but the thing that always causes failure is growth. Fast growth is even worse, and there are very few norms that scale and survive factors of 100x. This is especially true when they are cultural norms.
I don't believe Rust will be the first to succeed at maintaining the level of norm it had 5-10 years ago, around this sort of thing, in the face of massive growth and scale.
(Though i have no doubt it can if it neither grows nor scales).
[1] How much global correctness is affected by local correctness depends on the property - there are some where some wrong local answers often change nothing because they are basically minimum(all local answers). There are some where a single wrong local answer makes it totally wrong because they are basically maximum(all local answers). The closer they are to simple union/intersection or min/max of local answers, the easier it is to compute global correctness, but the righter your local answers have to be :)
Because of encapsulation. I don't need to look far to see the effects of encapsulation (and abstraction) on computing.
I read your whole comment, but I really want to tighten this discussion up. I think the biggest thing I'm personally missing from coming over to your view of things is examples. In particular:
> Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not like "oh if you think about this very hard it's broken".
Can you show me? If it's really "almost all," then you should even be able to point to a crate I've authored with a broken safe encapsulation. `regex-automata`, `jiff`, `bstr`, `byteorder`, `memchr` and `aho-corasick` all use `unsafe`. Can you find a point of unsoundness?
I don't want a library here or there. I am certain there are some libraries that are intentionally flouting Rust's norms here. So a couple of examples wouldn't be enough to convince me because I don't think a minority of people flouting Rust's norms is a big problem unless it can be shown that this minority is growing in size. What I want to see is evidence that this is both widespread and intentional. It's hard for me to believe that it is without me noticing.
If you want to do this privately, you can email: jamslam@gmail.com
https://doc.rust-lang.org/beta/
> Yeah, like C code normally would, just in the STD in this case.
Yes, in that manual checks are still needed. My point is unsafe code in rust is nowhere near safe and cannot be considered as safe without extensive analysis, no matter the language features used.
The answer is that it's more ergonomic and easier to reason about. So while you can TECHNICALLY have "algebraic data types" in C - i.e. "it's just a tagged union, so what's the big deal?" - I prefer to use them in Rust rather than C, for whatever unknown reason...
I also don't want to spend my brain cells thinking about pointer provenance and which void* aliases with each other. I would rather spend it on something else, thank you very much.
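To make the ADT point concrete, a small sketch of what the ergonomics buy over a hand-rolled C tagged union:

    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        match s {
            // Each payload is tied to its tag, and a non-exhaustive match
            // is a compile error; a C tagged union checks neither.
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }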