Zlib-rs is faster than C

(trifectatech.org)
341 points by dochtman | 295 comments
1. YZF ◴[] No.43381858[source]
I found out I already know Rust:

        unsafe {
            let x_tmp0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x10);
            xmm_crc0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x01);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, x_tmp0);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_crc0);
            // ...
        }
Kidding aside, I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library. At what point does it really stop mattering if this is C or Rust?

Presumably with inline assembly both languages can emit what is effectively the same machine code. Is the Rust compiler a better optimizing compiler than C compilers?

replies(30): >>43381895 #>>43381907 #>>43381922 #>>43381925 #>>43381928 #>>43381931 #>>43381934 #>>43381952 #>>43381971 #>>43381985 #>>43382004 #>>43382028 #>>43382110 #>>43382166 #>>43382503 #>>43382805 #>>43382836 #>>43383033 #>>43383096 #>>43383480 #>>43384867 #>>43385039 #>>43385521 #>>43385577 #>>43386151 #>>43386256 #>>43386389 #>>43387043 #>>43388529 #>>43392530 #
2. oneshtein ◴[] No.43381895[source]
> I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library.

What's wrong with that?

3. Filligree ◴[] No.43381907[source]
The usual answer is: You only need to verify the unsafe blocks, not every block. Though 'unsafe' in Rust is actually even less safe than regular C, if a bit more predictable, so there's a crossover point where you really shouldn't have bothered.

The Rust compiler is indeed better than the C one, largely because it has more information and can do full-program optimisation. A `vec_foo = vec_foo.into_iter().map(...).collect::<Vec<Foo>>()`, for example, isn't going to do any bounds checks or allocate.
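For illustration, a minimal sketch of that pattern (hypothetical function; the allocation reuse applies when the element type and size stay the same):

    fn double_all(v: Vec<u32>) -> Vec<u32> {
        // Same element type in and out: the iterator pipeline can
        // reuse the existing allocation, and iteration needs no
        // per-element bounds checks.
        v.into_iter().map(|x| x * 2).collect::<Vec<u32>>()
    }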

replies(2): >>43381960 #>>43384229 #
4. dietr1ch ◴[] No.43381922[source]
> I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library.

Which is exactly the point, other languages have unsafe implicitly sprinkled in every single line.

Rust tries to bound and explicitly delimit where unsafe code is, to make review and verification efforts precise.

5. datadeft ◴[] No.43381925[source]
I thought that the point of Rust is to have safe {} blocks (implicit) as a default and unsafe {} when you need the absolute maximum performance available. You can audit those few lines of unsafe code very easily. With C everything is unsafe and you can just forget to call free() or call it twice and you are done.
replies(2): >>43382225 #>>43382236 #
6. akx ◴[] No.43381928[source]
To quote the Rust book (https://doc.rust-lang.org/book/ch20-01-unsafe-rust.html):

  In addition, unsafe does not mean the code inside the
  block is necessarily dangerous or that it will definitely
  have memory safety problems: the intent is that as the
  programmer, you’ll ensure the code inside an unsafe block
  will access memory in a valid way.
Since you say you already know that much Rust, you can be that programmer!
replies(1): >>43382103 #
7. Aurornis ◴[] No.43381931[source]
Using unsafe blocks in Rust is confusing when you first see it. The idea is that you have to opt-out of compiler safety guarantees for specific sections of code, but they’re clearly marked by the unsafe block.

In good practice it’s used judiciously in a codebase where it makes sense. Those sections receive extra attention and analysis by the developers.

Of course you can find sloppy codebases where people reach for unsafe as a way to get around Rust instead of writing code the Rust way, but that’s not the intent.

You can also find die-hard Rust users who think unsafe should never be used and make a point to avoid libraries that use it, but that’s excessive.

replies(10): >>43381986 #>>43382095 #>>43382102 #>>43382323 #>>43385098 #>>43385651 #>>43386071 #>>43386189 #>>43386569 #>>43392018 #
8. pcwalton ◴[] No.43381934[source]
> Presumably with inline assembly both languages can emit what is effectively the same machine code. Is the Rust compiler a better optimizing compiler than C compilers?

rustc uses LLVM just as clang does, so to a first approximation they're the same. For any given LLVM IR you can mostly write equivalent Rust and C++ that causes the respective compiler to emit it (the switch fallthrough thing mentioned in the article is interesting though!) So if you're talking about what's possible (as opposed to what's idiomatic), the question of "which language is faster" isn't very interesting.

9. AlotOfReading ◴[] No.43381952[source]
The key difference is that there are invariants you can rely on as a user of the library, and they'll be enforced by the compiler outside the unsafe blocks. The corresponding C invariants mostly aren't enforced by the compiler. Worse, many C programmers will actively argue that some amount of undefined behavior is "fine".
10. johnisgood ◴[] No.43381960[source]
I have been told that "unsafe" affects code outside of that block, but hopefully steveklabnik may explain it better (again).

> isn't going to do any bounds checks or allocate.

You need to add explicit bounds checks or explicitly allocate in C though. It is not there if you do not add it yourself.

replies(4): >>43382151 #>>43382226 #>>43382369 #>>43392828 #
11. jdefr89 ◴[] No.43381971[source]
Not to mention they link to libc.. All rust code does last I checked…
replies(1): >>43382088 #
12. einpoklum ◴[] No.43381985[source]
> At what point does it really stop mattering if this is C or Rust?

That depends. If, for you, safety is something relative and imperfect rather than absolute, guaranteed and reliable, then - the answer is that once you have the first non-trivial unsafe block that has not gotten standard-library-level of scrutiny. But if that's your view, you should not be all that starry-eyed about how "Rust is a safe language!" to begin with.

On the other hand, if you really do want to rely on Rust's strong safety guarantees, then the answer is: From the moment you use any library with unsafe code.

My 2 cents, anyway.

13. timschmidt ◴[] No.43381986[source]
Unsafe is a very distinct code smell. Like the hydrogen sulfide added to natural gas to allow folks to smell a gas leak.

If you smell it when you're not working on the gas lines, that's a signal.

replies(6): >>43382188 #>>43382239 #>>43384810 #>>43385163 #>>43385670 #>>43386705 #
14. ◴[] No.43382004[source]
15. koito17 ◴[] No.43382028[source]
The purpose of `unsafe` is for the compiler to assume a block of code is correct. SIMD intrinsics are marked as unsafe because they take raw pointers as arguments.

In safe Rust (the default), memory access is validated by the borrow checker and type system. Rust’s goal of soundness means safe Rust should never cause out-of-bounds access, use-after-free, etc; if it does, then there's a bug in the Rust compiler.
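As a concrete illustration (a minimal x86-64-only sketch, not from the library; `load_first_lane` is a made-up name): the intrinsic itself takes a raw pointer, and a safe wrapper just has to check the length before vouching for it.

    use core::arch::x86_64::{__m128i, _mm_loadu_si128};

    fn load_first_lane(bytes: &[u8]) -> __m128i {
        assert!(bytes.len() >= 16, "need at least 16 bytes");
        // SAFETY: the assert guarantees 16 readable bytes, which is
        // all the unaligned load requires.
        unsafe { _mm_loadu_si128(bytes.as_ptr() as *const __m128i) }
    }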

replies(2): >>43382647 #>>43382680 #
16. techjamie ◴[] No.43382088[source]
There is an option to not link to it for instances like OS writing and embedded. Writing everything in pure Rust without libc is entirely possible, even if it's an exercise in losing sanity when you're reimplementing every syscall you need from scratch.

But even then, your code is calling out to kernel functions which are probably written in C or assembly, and therefore "dangerous."

Rust code safety is overhyped frequently, but reducing an attack surface is still an improvement over not doing so.

replies(2): >>43382740 #>>43385885 #
17. api ◴[] No.43382095[source]
The idea is that you can trivially search the code base for "unsafe" and closely examine all unsafe code, and unless you are doing really low-level stuff there should not be much of it. Higher level code bases should ideally have none.

It tends to be found in drivers, kernels, vector code, and low-level implementations of data structures and allocators and similar things. Not typical application code.

As a general rule it should be avoided unless there's a good reason to do it. But it's there for a reason. It's almost impossible to create a systems language that imposes any kind of rules (like ownership etc.) that covers all possible cases and all possible optimization patterns on all hardware.

replies(2): >>43382120 #>>43382568 #
18. chongli ◴[] No.43382102[source]
Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees? As far as I'm aware, inside the unsafe block you can do whatever you want which means all of the nice memory-safety properties of the language go away.

It's like letting a wet dog (who'd just been swimming in a nearby swamp) run loose inside your hermetically sealed cleanroom.

replies(16): >>43382176 #>>43382305 #>>43382448 #>>43382481 #>>43382485 #>>43382606 #>>43382685 #>>43382739 #>>43383207 #>>43383637 #>>43383811 #>>43384238 #>>43384281 #>>43385190 #>>43385656 #>>43387402 #
19. silisili ◴[] No.43382103[source]
I feel like C programmers had the same idea, and well, we see how that works out in practice.
replies(3): >>43382249 #>>43382631 #>>43386771 #
20. sesm ◴[] No.43382110[source]
Rust code emitter is Clang, the same one that Apple uses for C on their platforms. I wouldn't expect any miracles there, as Rust authors have zero influence over it. If any compiler is using any secret Clang magic, that would be Swift or Objective-C, since they are developed by Apple.
replies(1): >>43382210 #
21. timschmidt ◴[] No.43382120{3}[source]
It's even possible to write bare-metal microcontroller firmware in Rust without unsafe, as the embedded-hal ecosystem wraps unsafe hardware interfaces in a modular, fairly universal safe API.
22. LegionMammal978 ◴[] No.43382151{3}[source]
> I have been told that "unsafe" affects code outside of that block, but hopefully steveklabnik may explain it better (again).

Poorly-written unsafe code can have effects extending out into safe code. But correctly-written unsafe code does not have any effects on safe code w.r.t. memory safety. So to ensure memory safety, you just have to verify the correctness of the unsafe code (and any helper functions, etc., it depends on), rather than the entire codebase.

Also, some forms of unsafe code are far less dangerous than others in practice. E.g., most of the SIMD functions are practically safe to call in every situation, but they all have 'unsafe' slapped on them due to being intrinsics.

> You need to add explicit bounds check or explicitly allocate in C though. It is not there if you do not add it yourself.

Unfortunately, you do need to allocate a new buffer in C if you change the type of the elements. The annoying side of strict aliasing is that every buffer has a single type that's set in stone for all time. (Unless you preemptively use unions for everything.)

replies(1): >>43382462 #
23. xxs ◴[] No.43382166[source]
oddly enough that's not the most optimal version of crc32, e.g. it's not an avx512 variant.
24. timschmidt ◴[] No.43382176{3}[source]
It seems like you've got it backwards. Even unsafe rust is still more strict than C. Here's what the book has to say (https://doc.rust-lang.org/book/ch20-01-unsafe-rust.html)

"You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:

    Dereference a raw pointer
    Call an unsafe function or method
    Access or modify a mutable static variable
    Implement an unsafe trait
    Access fields of a union
It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You’ll still get some degree of safety inside of an unsafe block.

In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you’ll ensure the code inside an unsafe block will access memory in a valid way.

People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."

replies(6): >>43382290 #>>43382353 #>>43382376 #>>43383159 #>>43383265 #>>43386165 #
25. cmrdporcupine ◴[] No.43382188{3}[source]
Look, no. Just go read the unsafe block in question. It's just SIMD intrinsics. No memory access. No pointers. It's unsafe in name only.

No need to get all moral about it.

replies(3): >>43382234 #>>43382266 #>>43382480 #
26. nindalf ◴[] No.43382210[source]
You’re conflating clang and LLVM.
replies(1): >>43382246 #
27. steveklabnik ◴[] No.43382225[source]
> unsafe {} when you need the absolute maximum performance available.

Unsafe code is not inherently faster than safe code, though sometimes, it is. Unsafe is for when you want to do something that is legal, but the compiler cannot understand that it is legal.
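A minimal sketch of that "legal but unprovable" case (hypothetical function, not from zlib-rs):

    fn pick(data: &[u8], i: usize) -> u8 {
        let idx = i % data.len(); // panics (safely) on an empty slice
        // We know idx < data.len() by construction; get_unchecked
        // skips the bounds check the compiler can't always prove away.
        // SAFETY: idx < data.len() as established above.
        unsafe { *data.get_unchecked(idx) }
    }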

replies(1): >>43388181 #
28. pornel ◴[] No.43382226{3}[source]
Buggy unsafe blocks can affect code anywhere (through Undefined Behavior, or breaking the API contract).

However, if you verify that the unsafe blocks are correct, and the safe API wrapping them rejects invalid inputs, then they won't be able to cause unsafety anywhere.

This does reduce how much code you need to review for memory safety issues. Once it's encapsulated in a safe API, the compiler ensures it can't be broken.

This encapsulation also prevents combinatorial explosion of complexity when multiple (unsafe) libraries interact.

I can take zlib-rs, and some multi-threaded job executor (also unsafe internally), but I don't need to specifically check how these two interact. zlib-rs needs to ensure they use slices and lifetimes correctly, the threading library needs to ensure it uses correct lifetimes and type bounds, and then the compiler will check all interactions between these two libraries for me. That's like (M+N) complexity to deal with instead of (M*N).
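A minimal sketch of that encapsulation idea (hypothetical type, not from zlib-rs): the unsafe block leans only on invariants that this module's own code maintains, so no safe caller can break it.

    pub struct Buf {
        ptr: *mut u8, // private: only this module can touch these
        len: usize,   // invariant: `ptr` points to `len` valid bytes
    }

    impl Buf {
        pub fn get(&self, i: usize) -> Option<u8> {
            if i < self.len {
                // SAFETY: i < len, and the module invariant says
                // `ptr` is valid for `len` bytes.
                Some(unsafe { *self.ptr.add(i) })
            } else {
                None
            }
        }
    }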

29. kccqzy ◴[] No.43382234{4}[source]
By your line of reasoning, SIMD intrinsics should not be marked as unsafe in the first place. Then why are they marked as unsafe?
replies(4): >>43382276 #>>43382451 #>>43384972 #>>43385883 #
30. WD-42 ◴[] No.43382236[source]
It’s not about performance, it’s about undefined behavior.
31. mrob ◴[] No.43382239{3}[source]
There's no standard recipe for natural gas odorant, but it's typically a mixture of various organosulfur compounds, not hydrogen sulfide. See:

https://en.wikipedia.org/wiki/Odorizer#Natural_gas_odorizers

replies(2): >>43382271 #>>43386386 #
32. sesm ◴[] No.43382246{3}[source]
Yes, you are right, should be 'code emitter is LLVM, the same that Clang uses for C'
33. dijit ◴[] No.43382249{3}[source]
the problem in those cases is that C can’t help but be unsafe always.

People can write memory safe code, just not 100% of the time.

34. timschmidt ◴[] No.43382266{4}[source]
I don't read any moralizing in my previous comment. And it seems to mirror the relevant section in the book:

"People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."

I hope the SIMD intrinsics make it to stable soon so folks can ditch unnecessary unsafes if that's the only issue.

35. timschmidt ◴[] No.43382271{4}[source]
TIL!
36. cmrdporcupine ◴[] No.43382276{5}[source]
There's no standardization of simd in Rust yet, they've been sitting in nightly unstable for years:

https://doc.rust-lang.org/std/intrinsics/simd/index.html

So I suspect it's a matter of two things:

1. You're calling out to what's basically assembly, so buyer beware. This is basically FFI into C/asm.

2. There's no guarantee that what comes out of those 128-bit vectors afterwards follows any sanity or expectations, so... buyer beware. Same reason std::mem::transmute is marked unsafe.

It's really the weakest form of unsafe.

Still entirely within the bounds of a sane person to reason about.

replies(3): >>43382389 #>>43382440 #>>43385419 #
37. pclmulqdq ◴[] No.43382290{4}[source]
The way I have heard it described that I think is a bit more succinct is "unsafe admits undefined behavior as though it was safe."
38. CooCooCaCha ◴[] No.43382305{3}[source]
I wouldn’t go that far. Bevy, for example, uses unsafe internally but is VERY strict about it, and every use of unsafe requires a comment explaining why the code is safe.

In other words, unsafe works if you use it carefully and keep it contained.

replies(1): >>43382540 #
39. colonwqbang ◴[] No.43382323[source]
Can’t rust do safe simd? This is just vectorised multiplication and xor, but it gets labelled as unsafe. I imagine most code that wants to be fast would use simd to some extent.
replies(1): >>43382443 #
40. Someone ◴[] No.43382353{4}[source]
But “Dereference a raw pointer”, in combination with the ability to create raw pointers pointing to arbitrary memory addresses (that, you can do even in safe rust) allows you to write arbitrary memory from unsafe rust.

So, in theory, unsafe rust opens the floodgates. In practice, though, you can use small fragments of unsafe code that programmers can fairly easily check to be safe.

Then, once you’ve convinced yourself that those fragments are safe, you can be assured that your whole program is safe (using ‘safe’ in the rust sense, of course)

So, there may be some small islands of unsafe code that require extra attention from the programmer, but that should be just a tiny fraction of all lines, and you should be able to verify those islands in isolation.
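For illustration, a minimal sketch of that boundary (assumes nothing beyond std):

    fn main() {
        let x = 42u32;
        let p = &x as *const u32;          // creating raw pointers is safe
        let q = 0xdead_beef as *const u32; // even to arbitrary addresses
        let v = unsafe { *p };             // fine: p is valid for reads
        assert_eq!(v, 42);
        // unsafe { *q }; // would compile, but is undefined behavior
        let _ = q;
    }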

replies(1): >>43382404 #
41. steveklabnik ◴[] No.43382369{3}[source]
> I have been told that "unsafe" affects code outside of that block, but hopefully stevelabnik may explain it better (again).

It's due to a couple of different things interacting with each other: unsafe relies on invariants that safe code must also uphold, and that the privacy boundary in Rust is the module.

Before we get into the unsafe stuff, I want you to consider an example. Is this Rust code okay?

    struct Foo {
       bar: usize,
    }
    
    impl Foo {
        fn set_bar(&mut self, bar: usize) {
            self.bar = bar;
        }
    }
No unsafe shenanigans here. This code is perfectly safe, if a bit useless.

Let's talk about unsafe. The canonical example of unsafe code being affected outside of unsafe itself is the implementation of Vec<T>. Vecs look something like this (the real code is different for reasons that don't really matter in this context):

    struct Vec<T> {
       ptr: *mut T,
       len: usize,
       cap: usize,
    }
The pointer is to a bunch of Ts in a row, the length is the current number of Ts that are valid, and the capacity is the total number of Ts. The length and the capacity are different so that memory allocation is amortized; the capacity is always greater than or equal to the length.

That property is very important! If the length is greater than the capacity, when we try and index into the Vec, we'd be accessing random memory.

So now, this function, which is the same as Foo::set_bar, is no longer okay:

    impl<T> Vec<T> {
        fn set_len(&mut self, len: usize) {
            self.len = len;
        }
    }
This is because the unsafe code inside of other methods of Vec<T> needs to be able to rely on the fact that len <= capacity. And so you'll find that Vec<T>::set_len in Rust is marked as unsafe, even though it doesn't contain unsafe code. It still requires judicious use to avoid introducing memory unsafety.

And this is why the module being the privacy boundary matters: the only way to set len directly in safe Rust code is code within the same privacy boundary as the Vec<T> itself. And so, that's the same module, or its children.

42. uecker ◴[] No.43382376{4}[source]
This description is still misleading. The preconditions for the correctness of an unsafe block can very much depend on the correctness of the code outside it, and it is easy to find Rust bugs where exactly this was the cause. This is very similar to how C out-of-bounds accesses are often caused by some logic error elsewhere. Also, an unsafe block has to maintain all the invariants the safe Rust part needs to maintain correctness.
replies(4): >>43382514 #>>43382566 #>>43382585 #>>43383088 #
43. pclmulqdq ◴[] No.43382389{6}[source]
> they've been sitting in nightly unstable for years

So many very useful features of Rust and its core library spend years in "nightly" because the maintainers of those features don't have the discipline to see them through.

replies(3): >>43382419 #>>43383440 #>>43385204 #
44. steveklabnik ◴[] No.43382404{5}[source]
> allows you

This is where the rubber hits the road. Rust does not allow you to do this, in the sense that this is possibly undefined behavior. That "possibly" is why the compiler allows you to write this code, because by saying "unsafe", you are promising that this specific arbitrary address is legal for you to write to. But that doesn't mean that it's always legal to do so.

replies(1): >>43382457 #
45. cmrdporcupine ◴[] No.43382419{7}[source]
simd and allocator_api are the two that irritate me enough to consider a different language for future systems dev projects.

I don't have the personality or time to wade into committee type work, so I have no idea what it would take to get those two across the finish line, but the allocator one in particular makes me question Rust for lower level applications. I think it's just not going to happen.

If Zig had proper ADTs and something equivalent to borrow checker, I'd be inclined to poke at it more.

replies(1): >>43385115 #
46. steveklabnik ◴[] No.43382440{6}[source]
> There's no standardization of simd in Rust yet

Of safe SIMD, but some stuff in core::arch is stabilized. Here's the first bit called in the example of the OP: https://doc.rust-lang.org/core/arch/x86/fn._mm_clmulepi64_si...

47. steveklabnik ◴[] No.43382443{3}[source]
It's still nightly-only.
48. SkiFire13 ◴[] No.43382448{3}[source]
You lose the nice guarantees inside the `unsafe` block, but the point is to write a sound and safe interface over it, that is an API that cannot lead to UB no matter how other safe code calls it. This is basically the encapsulation concept, but for safety.

To continue the analogy of the dog, you let the dog get wet (=you use unsafe), but you put a cleaning room (=the sound and safe API) before your sealed room (=the safe code world)

49. CryZe ◴[] No.43382451{5}[source]
They are in the process of marking them safe, which is enabled through the target_feature 1.1 RFC.

In fact, it has already been merged two weeks ago: https://github.com/rust-lang/stdarch/pull/1714

The change is already visible on nightly: https://doc.rust-lang.org/nightly/core/arch/x86/fn._mm_xor_s...

Compared to stable: https://doc.rust-lang.org/core/arch/x86/fn._mm_xor_si128.htm...

So this should be stable in 1.87 on May 15 (Rust's 10 year anniversary since 1.0)

50. timschmidt ◴[] No.43382457{6}[source]
The compiler won't allow you to compile such code without the unsafe. The unsafe is *you* promising the compiler that *you* have checked to ensure that the address will always be legal. So that the compiler will allow you to compile the code.
replies(1): >>43382475 #
51. uecker ◴[] No.43382462{4}[source]
C has type-changing stores. If you store to a buffer with a new type, it has the new type. Clang does not implement this correctly though, but GCC does.
52. steveklabnik ◴[] No.43382475{7}[source]
Right, I'm saying "allow" has two different connotations, and only one of them, the one that you're talking about, applies.
replies(1): >>43382596 #
53. SkiFire13 ◴[] No.43382480{4}[source]
SIMD intrinsics are unsafe because they are available only under some CPU features.
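A minimal sketch of that hazard and the usual remedy (x86-64 only; `sum_avx2` is a made-up name):

    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(v: &[i32]) -> i32 {
        v.iter().sum() // the compiler may use AVX2 instructions here
    }

    fn sum(v: &[i32]) -> i32 {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: we just checked AVX2 is present at runtime;
            // calling this on a CPU without it would be UB.
            unsafe { sum_avx2(v) }
        } else {
            v.iter().sum()
        }
    }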
54. timeon ◴[] No.43382481{3}[source]
> unsafe even a single time, you lose all of Rust's nice guarantees

Not sure why one use would result in losing all. One of Rust's advantages is the clear boundary between safe/unsafe.

replies(1): >>43387667 #
55. wongarsu ◴[] No.43382485{3}[source]
If your unsafe code violates invariants it was supposed to uphold, that can wreck safety properties the compiler was trying to uphold elsewhere. If you can achieve something without unsafe you definitely should (safe, portable simd is available in rust nightly, but it isn't stable yet).

At the same time, unsafe doesn't just turn off all compiler checks, it just gives you tools to go around them, as well as tools that happen to go around them because of the way they work. Rust unsafe is this weird mix of being safer than pure C, but harder to grasp; with lots of nuanced invariants you have to uphold. If you want to ensure your code still has all the nice properties the compiler guarantees (which go way beyond memory safety) you would have to carefully examine every unsafe block. Which few people do, but you generally still end up with a better status quo than C/C++ where any code can in principle break properties other code was trying to uphold.

56. Shorel ◴[] No.43382503[source]
Awesome find. This really means:

Assembly language faster than C. And faster than Rust. Assembly can be very fast.

57. iknowstuff ◴[] No.43382514{5}[source]
No. Correctness of code outside unsafe depends on correctness inside those blocks, not the other way around
replies(1): >>43382600 #
58. tonyhart7 ◴[] No.43382540{4}[source]
right, the point is raising awareness and the assumption that it's not a 100-or-0 problem
59. dwattttt ◴[] No.43382566{5}[source]
It's true, but I think it's only fair that if you hold Rust to this analysis, you hold other languages to it too; the scrutiny you're implying you need for an unsafe Rust block needs to be applied to all C code, because all C code could depend on code anywhere else for its safety characteristics.

In practice (in both languages) you check what the actual unsafe code does (or "all" code in C's case), note code that depends on external actors for safety (it's not all C code, nor is it all unsafe Rust blocks), and check their callers (and callers callers, etc).

replies(1): >>43382684 #
60. formerly_proven ◴[] No.43382568{3}[source]
My understanding from Aria Beingessner's and some other writings is that unsafe{} rust is significantly harder to get right in "non-trivial cases" than C, because the semantics are more complex and less specified.
replies(2): >>43382970 #>>43383545 #
61. lambda ◴[] No.43382585{5}[source]
So, it's true that unsafe code can depend on preconditions that need to be upheld by safe code.

But using ordinary module encapsulation and private fields, you can scope the code that needs to uphold those preconditions to a particular module.

So the "trusted computing base" for the unsafe code can still be scoped and limited, allowing you to reduce the amount of code you need to audit and be particularly careful about for upholding safety guarantees.

Basically, when writing unsafe code, the actual unsafe operations are scoped to only the unsafe blocks, and they have preconditions that you need to scope to a particular module boundary to ensure that there's a limited amount of code that needs to be audited to ensure it upholds all of the safety invariants.

Ralf Jung has written a number of good papers and blog posts on this topic.

replies(1): >>43382721 #
62. timschmidt ◴[] No.43382596{8}[source]
I gotcha. I misread and misunderstood. Yes, we agree.
63. sunshowers ◴[] No.43382606{3}[source]
What language is the JVM written in?

All safe code in existence running on von Neumann architectures is built on a foundation of unsafe code. The goal of all memory-safe languages is to provide safe abstractions on top of an unsafe core.

replies(3): >>43385347 #>>43385422 #>>43386156 #
64. sunshowers ◴[] No.43382631{3}[source]
No, C lacks encapsulation of unsafe code. This is very important. Encapsulation is the only way to scale local reasoning into global correctness.
replies(2): >>43385092 #>>43387548 #
65. no_wizard ◴[] No.43382647[source]
How do we know if Rust is safe unless Rust is written purely in safe Rust?

Is that not true? Even validators have bugs or miss things no?

replies(2): >>43382727 #>>43384836 #
66. int_19h ◴[] No.43382680[source]
Out of curiosity, why do they take raw pointers as arguments, rather than references?
replies(1): >>43382894 #
67. uecker ◴[] No.43382684{6}[source]
What is true is that there are more operations in C which can cause undefined behavior and those are more densely distributed over the C code, making it harder to screen for undefined behavior. This is true and Rust certainly has an advantage, but it is not nearly as big of an advantage as the "Rust is safe" (please do not look at all the unsafe blocks we need to make it also fast!) and "all C is unsafe" story wants you to believe.
replies(4): >>43382883 #>>43383190 #>>43383793 #>>43385047 #
68. janice1999 ◴[] No.43382685{3}[source]
Claiming unsafe invalidates "all of the nice memory-safety properties" is like saying having windows in your house does away with all the structural integrity of your walls.

There's even unsafe usage in the standard library and it's used a lot in embedded libraries.

replies(1): >>43383773 #
69. uecker ◴[] No.43382721{6}[source]
And you think one can not modularize C code and encapsulate critical buffer operations in much safer APIs? One can; the problem is that a lot of legacy C code was not written this way. Also, a lot of newly written C code is not written this way, but the reason is often that people cut corners when they need to get things done with limited time and resources. You will see the same with Rust.
replies(4): >>43383131 #>>43383951 #>>43384869 #>>43386840 #
70. steveklabnik ◴[] No.43382727{3}[source]
> Even validators have bugs

Yep! For example, https://github.com/Speykious/cve-rs is an example of a bug in the Rust compiler, which allows something that it shouldn't. It's on its way to being fixed.

> or miss things no?

This is the trickier part! Yes, even proofs have axioms, that is, things that are accepted without proof, that the rest of the proof is built on top of. If an axiom is incorrect, so is the proof, even though we've proven it.

71. vlovich123 ◴[] No.43382739{3}[source]
You only lose those guarantees if and only if the code within the unsafe block violates the rules of the Rust language.

Normally in safe code you can’t violate the language rules because the compiler enforces various rules. In unsafe mode, you can do several things the compiler would normally prevent you from doing (e.g. dereferencing a naked pointer). If you uphold all the preconditions of the language, safety is preserved.

What’s unfortunate is that the rules you are required to uphold can be more complex than you might anticipate if you’re trying to use unsafe to write C-like code. What’s fortunate is that you rarely need to do this in normal code and in SIMD which is what the snippet is representing there’s not much danger of violating the rules.

72. jdefr89 ◴[] No.43382740{3}[source]
I agree and binary exploitation/Vulnerability Research is my area of expertise.. The whole "Lets port everything to Rust" is so misguided. Binary exploitation has already gotten 20x harder than say ten years ago.. Even so.. Most big breaches happen because people reuse their password or just give it out... Nation States are pretty much the only parties capable of delivering full kill chains that exploit, say chrome... That is why I moved to the embedded space.. Still so insecure...
replies(2): >>43383936 #>>43386119 #
73. bitwize ◴[] No.43382805[source]
You can use 'unsafe' blocks to delineate places on the hot path where you need to take the limiters off, then trust that the rest of the code will be safe. In C, all your code is unsafe.

We will see more and more Rust libraries trounce their C counterparts in speed, because Rust is more fun to work in because of the above. Rust has democratized high-speed and concurrent systems programming. Projects in it will attract a larger, more diverse developer base -- developers who would be loath to touch a C code base for (very justified) fear of breaking something.

74. dzaima ◴[] No.43382836[source]
Looks like as of 2 weeks ago the unsafe block should no longer be required: https://github.com/rust-lang/stdarch/pull/1714

..at least outside of loads/stores. From a bit of looking at the code though it seems like a good amount of those should be doable in a safe way with some abstractions.

75. iknowstuff ◴[] No.43382849{7}[source]
tf are you talking about
replies(2): >>43382906 #>>43382911 #
76. dwattttt ◴[] No.43382883{7}[source]
The places where undefined behaviour can occur are also limited in scope; you insist that that part isn't true, because operations outside those unsafe blocks can impact their safety.

That's only true at the same level of scrutiny as "all C operations can cause undefined behaviour, regardless of what they are", which I find similarly shallow.

77. steveklabnik ◴[] No.43382894{3}[source]
From the RFC: https://rust-lang.github.io/rfcs/2325-stable-simd.html

> The standard library will not deviate in naming or type signature of any intrinsic defined by an architecture.

I think this makes sense, just like any other intrinsic: unsafe to use directly, but with safe wrappers.

I believe that there are also some SIMD things that would have to inherently take raw pointers, as they work on pointers that aren't aligned, and/or otherwise not valid for references. In theory you could make only those take raw pointers, but I think the blanket policy of "follow upstream" is more important.

replies(1): >>43390622 #
78. steveklabnik ◴[] No.43382906{8}[source]
They are (rudely) talking about https://news.ycombinator.com/item?id=43382369
79. dwattttt ◴[] No.43382911{8}[source]
In a more helpful framing: safe Rust code doesn't need to worry about its own correctness, it just is.

Unsafe code can be incorrect (or unsound), and needs to be careful about it. Part of being careful is that safe code can call the unsafe code in a way that triggers that unsoundness; in that way, safe code can cause undefined behaviour in unsafe code.

It's not always the case that this is possible; there are unsafe blocks that don't need to depend on safe code for their correctness.

80. dwattttt ◴[] No.43382970{4}[source]
It's hard to compare. Rust has stricter requirements than C, but looser requirements don't mean easier: ever bit shifted by a variable amount? Hope you never relied on shifting "entirely" out of a variable zeroing it.
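A small illustration of the shift case (assuming a 32-bit integer; checked_shl is the std method):

    fn main() {
        let x: u32 = 1;
        // In C, shifting a 32-bit value by 32 is undefined behavior.
        // In Rust it panics in debug builds, and checked_shl makes
        // the out-of-range case explicit instead of "shifting to zero":
        assert_eq!(x.checked_shl(31), Some(1 << 31));
        assert_eq!(x.checked_shl(32), None);
    }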
81. gf000 ◴[] No.43383033[source]
Rust's borrow checker still checks within unsafe blocks, so unless you are only operating with raw pointers (and not accessing certain references as raw pointers in some small, well-defined blocks) across the whole program it will be significantly more safe than C. Especially given all the other language benefits, like a proper type system that can encode a bunch of invariants, no footguns at every line/initialization/cast, etc.
replies(1): >>43383145 #
82. gf000 ◴[] No.43383088{5}[source]
This is technically correct, but a bit pedantic.

Sure, you can technically just write your own vulnerability for your own program and inject it at an unsafe block and see the whole world crumble... but the exact same is true for any form of FFI calls in any language. Is Java memory safe? Yeah, just because I can grab a random pointer and technically break anything I want won't change that.

The fact that a memory vulnerability error may either appear at no place at all OR at the couple hundred lines of code thorough the whole project is a night and day difference.

83. asveikau ◴[] No.43383096[source]
> At what point does it really stop mattering if this is C or Rust?

If I read TFA correctly, they came up with a library that is API compatible with the C one, which they've measured to be faster.

At that point I think in addition to safety benefits in other parts of the library (apart from unsafe micro optimizations as quoted), what they're leveraging is better compiler technology. Intuitively, I start to assume that the rust compiler can perhaps get away with more optimizations that might not be safe to assume in C.

84. gf000 ◴[] No.43383131{7}[source]
Even innocent looking C code can be chock-full of UBs that can invalidate your "local reasoning" capabilities. So, not even close.
replies(1): >>43383379 #
85. acdha ◴[] No.43383145[source]
Yes. I think it’s easy to underestimate how much the richer language and library ecosystem chip away at the attack surface area. So many past vulnerabilities have been in code which isn’t dealing with low-level interfaces or weird performance optimizations and wouldn’t need to use unsafe. There’ve been so many vulnerabilities in crypto code which weren’t the encryption or hashing algorithms but things like x509/ASN parsing, logging, or the kind of option/error handling logic a Rust programmer would use the type system to validate.
86. onnimonni ◴[] No.43383159{4}[source]
Would someone with more experience be able to explain to me why can't these operations be "safe"? What is blocking rust from producing the same machine code in a "safe" way?
replies(4): >>43383264 #>>43383268 #>>43383285 #>>43383292 #
87. gf000 ◴[] No.43383190{7}[source]
Rust is plenty fast, in fact there are countless examples of safe rust that will trivially beat out C in performance due to no aliasing, enabling better vectorization among others. Let alone being simply a more expressive language and allowing writing better optimizations (e.g. small strings, vs the absolutely laughable c-strings that perform terribly, but also you can actually get away with sharing more stuff in memory vs doing defensive copies everywhere because it is safe to do so, etc)

There are not many things we have statistics on in CS, but two of the few we do know, based on actual real-life projects at Google and Microsoft among others, are that memory vulnerabilities are absolutely everywhere in unsafe languages, and that Rust cleans up the absolute majority of them even when only the new parts are written in Rust.

A memory safe low-level language is as novel as it gets. Rust is absolutely not just hype, it actually delivers and you might want to get on with the times.

replies(1): >>43385295 #
88. pdimitar ◴[] No.43383207{3}[source]
Where did you even get that weird extreme take from?

O_o

89. vlovich123 ◴[] No.43383264{5}[source]
Those specific functions are compiler builtin vector intrinsics. The main reason is that they can easily read past ends of arrays and have type safety and aliasing issues.

By the way, the Rust compiler does generate such code, because under the hood LLVM runs an autovectorizer when you turn on optimizations. However, for the autovectorizer to do a good job you have to write code in a very special way, and you have no way of controlling whether or not it kicked in, or whether it did a good job once it did.

There’s work on creating safe abstractions (that also transparently scale to the appropriate vector instruction), but progress on that has felt slow to me personally and it’s not available outside nightly currently.
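For a flavor of the "very special way" (a hypothetical loop; iterating over slices instead of indexing lets LLVM prove bounds and vectorize):

    fn add_slices(a: &[f32], b: &[f32], out: &mut [f32]) {
        // zip bounds every iterator to the shortest length, so the
        // loop body has no bounds checks left to block vectorization.
        for ((o, &x), &y) in out.iter_mut().zip(a).zip(b) {
            *o = x + y;
        }
    }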

replies(1): >>43385330 #
90. rybosome ◴[] No.43383265{4}[source]
I believe the post you are replying to was referring to the fact that you could take actions in that unsafe block that would compromise the guarantees of rust; eg you could do something silly, leave the unsafe block, then hit an “impossible” condition later in the program.

A simple example might be modifying a const value deep down in some class, where it only becomes apparent later in the program’s execution. Hence their analogy of the wet dog in a clean room - whatever beliefs you have about the structure of memory in your entire program, and guaranteed by the compiler, could have been undone by a rogue unsafe.

replies(1): >>43396097 #
91. ◴[] No.43383268{5}[source]
92. NobodyNada ◴[] No.43383285{5}[source]
Rust's raw pointers are more-or-less equivalent to C pointers, with many of the same types of potential problems like dangling pointers or out-of-bounds access. Rust's references are the "safe" version of doing pointer operations; raw pointers exist so that you can express patterns that the borrow checker can't prove are sound.

Rust encourages using unsafe to "teach" the language new design patterns and data structures; and uses this heavily in its standard library. For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.

Think of unsafe not as "this code is unsafe", but as "I've proven this code to be safe, and the borrow checker can rely on it to prove the safety of the rest of my program."

replies(1): >>43385326 #
93. adgjlsfhk1 ◴[] No.43383292{5}[source]
often the unsafe code is at the edges of the type system. e.g. sometimes the proof of safety is that someone read the source code of the c library that you are calling out to. it's not useful to think of machine code as safe or unsafe. safety often refers to whether the types of your data match the lifetime dataflow.
94. wavemode ◴[] No.43383379{8}[source]
Care to share an example?
replies(3): >>43383437 #>>43383963 #>>43385097 #
95. capitainenemo ◴[] No.43383437{9}[source]
sorting floats with NaN ? almost anything involving threading and mutation where people either don't realise how important locks are, or don't realise their code has suddenly been threaded?
96. NobodyNada ◴[] No.43383440{7}[source]
Before I started working with Rust, I spent a lot of time using Swift for systems-y/server-side code, outside of the Apple ecosystem. There is a lot I like about that language, but one of the biggest factors that drove me away was just how fast the Apple team was to add more and more compiler-magic features without considering whether they were really the best possible design. (One example: adding compiler-magic derived implementations of specific protocols instead of an extensible macro system like Rust has.) When these concerns were raised on the mailing lists, the response from leadership was "yes, something like that would be better in the long run, but we want to ship this now." Or even in one case, "yes, that tweak to the design would be better, but we already showed off the old design at the WWDC keynote and we don't want to break code we put in a keynote slide."

When I started working in Rust, I'd want some feature or function, look it up, and find it was unstable, sometimes for years. This was frustrating at first, but then I'd go read the GitHub issue thread and find that there was some design or implementation concern that needed to be overcome, and that people were actively working on it and unwilling to stabilize the feature until they were sure it was the best possible design. And the result of that is that features that do get stabilized are well thought out, generalize, and compose well with everything else in the language.

Yes, I really want things like portable SIMD, allocators, generators, or Iterator::intersperse. But programming languages are the one place I really do want perfect to be the enemy of good. I'd rather it take 5+ years to stabilize features than for us to end up with another Swift or C++.

replies(2): >>43383716 #>>43384703 #
97. ◴[] No.43383480[source]
98. NobodyNada ◴[] No.43383545{4}[source]
This is definitely true right now, but I don't think it will always be the case.

Unsafe Rust is currently extremely underspecified and underdocumented, but it's designed to be far more specifiable than C. For example: aliasing rules. When and how you're allowed to alias references in unsafe code is not at all documented and under much active discussion; whereas in C pointer aliasing rules are well defined but also completely insane (casting pointers to a different type in order to reinterpret the bytes of an object is often UB even in completely innocuous cases).

Once Rust's memory model is fully specified and written down, unsafe Rust is trying to go for something much simpler, more teachable, and with less footguns than C.

Huge props to Ralf Jung and the opsem team who are working on answering these questions & creating a formal specification: https://github.com/rust-lang/unsafe-code-guidelines/issues

99. xboxnolifes ◴[] No.43383637{3}[source]
If you have 1 unsafe block, and you have a memory related crash/issue, where in your Rust code do you think the root cause is located?

This isn't a wet dog in a cleanroom. This is a cleanroom complex that has a very small outhouse that is labeled as dangerous.

100. grandiego ◴[] No.43383716{8}[source]
> the response from leadership was "yes, something like that would be better in the long run, but we want to ship this now."

Sounds like the Rust's async story.

replies(2): >>43383751 #>>43384178 #
101. steveklabnik ◴[] No.43383751{9}[source]
Async went through years of work before being stabilized. This isn't true.
102. benjiro ◴[] No.43383773{4}[source]
Where are you more likely to have a burglar enter your home? Windows... Where are you more likely to develop cracks in your walls? Windows... Where are you more likely to develop leaks? Windows (especially roof windows!)...

Sorry but horrible comparison ;)

If you need to rely on unsafe in a memory-safe language for performance reasons, then there is an issue with the language compiler at that point that needs to be fixed. Simple as that.

The whole memory-safety is the bread and butter of the language; the moment you start to bypass it for faster memory operations, you can start doing the same in any other language. I mean, you're literally bypassing the main selling point of the language. ¯\_(ツ)_/¯

replies(2): >>43383838 #>>43384027 #
103. pdimitar ◴[] No.43383793{7}[source]
You sound pretty biased, gotta tell you. That snark is not helping any argument you think you might be doing -- and you are not doing any; you are kind of just making fun of Rust, which is pretty boring and uninformative for any reader.

From my past experiences with Rust, the team never had to think about data race once, or mutable volatile globals. And we all there suffered from those decades ago with C and sometimes C++ as well.

You like those and don't want to migrate? More power to ya! But badmouthing Rust with what seem fairly uninformed comments is just low. Inform yourself first.

104. LoganDark ◴[] No.43383811{3}[source]
> Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees?

No, not even close. You only lose Rust's safety guarantees when your unsafe code causes Undefined Behavior. Unsafe code that can be made to cause UB from Safe Rust is typically called unsound, and unsafe code that cannot be made to cause UB from Safe Rust is called sound. As long as your unsafe code is sound, then it does not break any of Rust's guarantees.

For example, unsafe code can still use slices or references provided by Safe Rust, because those are always guaranteed to be valid, even in an unsafe block. However, if from inside that unsafe block you then go on to manufacture an invalid slice or reference using unsafe functions, that is UB and you lose Rust's safety guarantees because of the UB.
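A minimal sketch of an unsound API (hypothetical function): it contains a perfectly well-formed unsafe block, yet safe callers can trigger UB through it, which is exactly what "unsound" means.

    // Unsound: the safe signature promises any caller may use this,
    // but nothing guarantees `p` is valid to read.
    pub fn first_byte(p: *const u8) -> u8 {
        unsafe { *p } // UB whenever a safe caller passes a bad pointer
    }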

105. pdimitar ◴[] No.43383838{5}[source]
> If you need to rely on unsafe in a memory-safe language for performance reasons, then there is a issue with the language compiler at that point, that needs to be fixed. Simple as that.

It actually means "Rust needs to interface with many other systems that are not as stringent as it". Your interpretation has nothing to do with what's actually going on and I am surprised you misinterpreted the situation as hugely as you did.

...And even if everything was written in Rust, `unsafe` would still be needed, because the lower you get [to the kernel], the more non-determinism you run into.

This "all or nothing" attitude is boring and tiring. We all wish things were super simple, black and white, and all-or-nothing. They are not.

106. pdimitar ◴[] No.43383936{4}[source]
> The whole "Lets port everything to Rust" is so misguided.

Well, good thing that nobody sane is saying that then.

107. nicoburns ◴[] No.43383951{7}[source]
You're a lot more limited in the kinds of APIs you can safely encapsulate in C. For example, you can't safely encapsulate an interface that shares memory between the library and the caller in C. So you're forced into either:

- Exposing an unsafe API and relying on the caller to manually uphold invariants

- Doing things like defensive copying at a performance cost

In many cases Rust gives you the best of both worlds: sharing memory liberally while still having the compiler enforce correctness.

replies(1): >>43392262 #
108. masfuerte ◴[] No.43383963{9}[source]

    int average(int x, int y) {
        /* looks innocent, but x+y can overflow, and signed integer
           overflow is undefined behavior in C */
        return (x+y)/2;
    }
replies(3): >>43385221 #>>43392246 #>>43445900 #
109. unrealhoang ◴[] No.43384027{5}[source]
So static typing is stupid because at the end of the line your program must interface with a stream of untyped bits (i/o)?

Once you internalize that, you can unlock the power of encapsulation.

110. NobodyNada ◴[] No.43384178{9}[source]
Rust's async model was shipped as an MVP, not in the sense of "this is a bad design and we just want to ship it"; but rather, "we know this is the first step of the eventual design we want, so we can commit to stabilizing these parts of it now while we work on the rest." There's ongoing work to bring together the rest of the pieces and ergonomics on top of that foundational model; async closures & trait methods were recently stabilized, and work towards things like pin ergonomics & simplifying cheap clones like Rc are underway.

Rust uses this strategy of minimal/incremental stabilization quite often (see also: const generics, impl Trait); the difference between this and what drove me away from Swift is that MVPs aren't shipped unless it's clear that the design choices being made now will still be the right choices when the rest of the feature is ready.

replies(1): >>43384296 #
111. mwkaufma ◴[] No.43384229[source]
Won't the final result allocate?
replies(1): >>43384604 #
112. EnnEmmEss ◴[] No.43384238{3}[source]
Jason Orendorff's talk [1] was probably the first time I truly grokked the concept of unsafe in Rust. The core idea behind unsafe in Rust is not to provide an escape from the guarantees provided by Rust. It's to isolate the places where you have no choice but to break the guarantees, and rigorously code/test the boundaries there so that anything wrapping the unsafe code can still provide the guarantees.

[1]: https://www.youtube.com/watch?v=rTo2u13lVcQ

113. andyferris ◴[] No.43384281{3}[source]
Rust isn't the only memory-safe language.

As soon as you start playing with FFI and raw pointers in Python, NodeJS, Julia, R, C#, etc. you can easily lose the nice memory-safety properties of those languages - create undefined behavior, segfaults, etc. I'd say Rust is a lot nicer for checking unsafe correctness than other memory-safe languages, and also makes it easier to dip down to systems-level programming, yet it seems to get a lot of hate for these features.

replies(1): >>43386111 #
114. cmrdporcupine ◴[] No.43384296{10}[source]
IMO shipping async without a standardized API for basic common async facilities (like thread spawning, file/network I/O) was a mistake and basically means that tokio has eaten the whole async side of the language.

Why define runtime independence as a goal, but then make it impossible to write runtime agnostic crates?

(Well, there's the "agnostic" crate at least now)

replies(1): >>43384821 #
115. steveklabnik ◴[] No.43384604{3}[source]
It won't allocate in this case because it's still a vec of foo at the end, so we know it has enough space. If it were a different type, it may or may not allocate, depending on if it had enough capacity.
116. pclmulqdq ◴[] No.43384703{8}[source]
My personal opinion is that if you want to contribute a language feature, shit or get off the pot. Leaving around a half-baked solution actually raises the required effort for someone who isn't you to add that feature (or an equivalent) because they now have to either (1) ramp up on the spaghetti you wrote or (2) overcome the barrier of explaining why your thing isn't good enough. Neither of those two things are fun (which is important since writing language features is volunteer work) and those things come in the place of doing what is actually fun, which is writing the relevant code.

The fact that the Rust maintainers allow people to put in half-baked features before they are fully designed is the biggest cultural failing of the language, IMO.

replies(1): >>43384769 #
117. dralley ◴[] No.43384769{9}[source]
>The fact that the Rust maintainers allow people to put in half-baked features before they are fully designed is the biggest cultural failing of the language, IMO.

In nightly?

Hard disagree. Letting people try things out in the real world is how you avoid half-baked features. Easy availability of nightly compilers with unstable features allows way more people to get involved in the pre-stabilization polishing phase of things and raise practical concerns instead of theoretical ones.

C++ takes the approach of writing and nitpicking whitepapers for years before any implementations are ready and it's hard to see how that has led to better outcomes relatively speaking.

replies(1): >>43384818 #
118. throwaway150 ◴[] No.43384810{3}[source]
> Like the hydrogen sulfide added to natural gas to allow folks to smell a gas leak.

I am 100% sure that the smell they add to natural gas does not smell like rotten eggs.

replies(2): >>43385005 #>>43385686 #
119. pclmulqdq ◴[] No.43384818{10}[source]
Yeah, we're going to have to agree to disagree on the C++ flow (really the flow for any language that has a written standard) being better. That flow is usually:

1. Big library/compiler does a thing, and people really like it

2. Other compilers and libraries copy that thing, sometimes putting their own spin on it

3. All the kinks get worked out and they write a white paper

4. Eventually the thing becomes standard

That way, everything in the standard library is something that is fully-thought-out and feature-complete. It also gives much more room for competing implementations to be built and considered before someone stakes out a spot in the standard library for their thing.

replies(2): >>43384839 #>>43386079 #
120. dralley ◴[] No.43384821{11}[source]
>IMO shipping async without a standardized API for basic common async facilities (like thread spawning, file/network I/O) was a mistake and basically means that tokio has eaten the whole async side of the language.

I would argue that it's the opposite of a mistake. If you standardize everything before the ecosystem gets a chance to play with it, you risk making mistakes that you have to live with in perpetuity.

replies(1): >>43385278 #
121. TheDong ◴[] No.43384836{3}[source]
And while we're in the hypothetical extreme world somewhat separated from reality, a series of solar flares could flip a memory bit and all the error-correction bits in my ECC ram at once to change a pointer in memory, causing my safe rust to do an out of bounds write.

Until we design perfectly correct computer hardware, processors, and a sun which doesn't produce solar radiation, we can't rely on totally uniform correct execution of our code, so we should give up.

The reality is that while we can't prove the rust compiler is safe, we can keep using it and diligently fix any counter-examples, and that's good enough in practice. Over in the real world, where we can acknowledge "yes, it is impossible to prove the absence of all bugs" and simultaneously say "but things sure seem to be working great, so we can get on with life and fix em if/when they pop up".

replies(1): >>43385259 #
122. dralley ◴[] No.43384839{11}[source]
>That way, everything in the standard library is something that is fully-thought-out and feature-complete

Are C++ features really that much better thought out? Modules were "standardized" half a decade ago, but the list of problems with actually using them in practice is still pretty damn long to the point where adoption is basically non-existent.

I'm not going to pretend to be nearly as knowledgeable about C++ as Rust, but it seems like most new C++ features I hear about are a bit janky or don't actually fit that well with the rest of the language. Something that tends to happen when designing things in an ivory tower without testing them in practice.

replies(1): >>43384882 #
123. cbarrick ◴[] No.43384867[source]
Others have already addressed the "unsafe" smell.

I think the bigger point here is that doing SIMD in Rust is still painful.

There are efforts like portable-simd [1] to make this better, but in practice, many people are dropping down to low-level SIMD intrinsics and/or inline assembly, which are no better than their C equivalents.

[1]: https://github.com/rust-lang/portable-simd
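
For what it's worth, a minimal sketch of what portable-simd looks like today (nightly-only, so the exact API may still change before stabilization):

  #![feature(portable_simd)]
  use std::simd::u32x4;

  // Safe, portable SIMD: no intrinsics and no unsafe; the lanes map to
  // whatever the target supports (SSE, NEON, or a scalar fallback).
  fn add4(a: [u32; 4], b: [u32; 4]) -> [u32; 4] {
    (u32x4::from_array(a) + u32x4::from_array(b)).to_array()
  }

  fn main() {
    assert_eq!(add4([1, 2, 3, 4], [10, 20, 30, 40]), [11, 22, 33, 44]);
  }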

124. lambda ◴[] No.43384869{7}[source]
There is no distinction between safe and unsafe code in C, so it's not possible to make that same distinction that you can in Rust.

And even if you try to provide some kind of safer abstraction, you're limited by the much more primitive type system, that can't distinguish between owned types, unique borrows, and shared borrows, nor can it distinguish thread safety properties.

So you're left to convention and documentation for that kind of information, but with nothing checking that you're getting it right, it's easy to make mistakes. And even if you get it right at first, a refactor could change your invariants, and without a type system enforcing them, you never know until someone comes along with a fuzzer and figures out that they can pwn you.

replies(1): >>43392234 #
125. pclmulqdq ◴[] No.43384882{12}[source]
They absolutely are. The reason many features are stupid and janky is because the language and its ecosystem have had almost 40 more years to collect cruft.

The fundamental problem with modules is that build systems for C++ have different abstractions and boundaries. C++ modules are like Rust async - something that just doesn't fit well with the language/system and got hammered in anyway.

The reason it seems like they come from nowhere is probably because you don't know where they come from. Most things go through boost, folly, absl, clang, or GCC (or are vendor-specific features) before going to std.

That being said, it's not just C++ that has this flow for adding features to the language. Almost every other major language that is not Rust has an authoritative specification.

replies(2): >>43384950 #>>43386095 #
126. dralley ◴[] No.43384950{13}[source]
What's a Rust feature that you think suffered from their process in a way that C++ would not have?
127. thrance ◴[] No.43384972{5}[source]
For now the caller has to ensure proper alignment for SIMD loads. But in the future a safe API will be made available, once the kinks are ironed out. You can already use it, in fact, by enabling a specific compiler feature [1].

[1] https://doc.rust-lang.org/std/simd/index.html

replies(1): >>43385024 #
128. beacon294 ◴[] No.43385005{4}[source]
They add mercaptan, which is like 1000x the rotten-egg smell of H2S.
replies(1): >>43387099 #
129. anonymoushn ◴[] No.43385024{6}[source]
There are no loads in the above unsafe block; in practice loadu is just as fast as load, and even if you manually use the aligned load or store, you get a crash. It's silly to say that crashes are unsafe.
replies(1): >>43385188 #
130. throwaway2037 ◴[] No.43385039[source]

    > Is the Rust compiler a better optimizing compiler than C compilers?
First, I assume that the main Rust compiler uses LLVM. I also assume (big leap here!) that the LLVM optimization process is language agnostic (ChatGPT agrees, whatever that is worth). As long as the language frontend can compile to LLVM's language-independent intermediate representation (IR), then all languages can equally benefit from the optimizer.
131. lambda ◴[] No.43385047{7}[source]
What Rust provides is a way to build safe abstractions over unsafe code.

Rust's type system (including ownership and borrowing, Sync/Send, etc.), along with its privacy features (allowing types to have private fields that can only be accessed by code in the module that defined them), allows you to create fully safe interfaces around code that uses unsafe; there is provably no combination of uses of the interface which leads to undefined behavior.

Now, yeah, it's possible to also use unsafe in Rust just for applying a local optimisation. And that has fewer benefits than a fully encapsulated safe interface, though is still easier to audit for potential UB than C.

So you're right that it's on a continuum, but the distinction between safe and unsafe code means you can more easily find the specific places where UB could occur, and the encapsulation and type system makes it possible to create safe abstractions over unsafe code.
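
To make that concrete, here is a minimal sketch of the pattern (OwnedBuf is a hypothetical type invented for illustration, not any real crate's API): the raw pointer is private, and every public method upholds the invariants, so safe callers cannot trigger UB:

  use std::alloc::{alloc_zeroed, dealloc, Layout};

  // The fields are private: safe code outside this module can't forge
  // an invalid pointer or length.
  pub struct OwnedBuf {
    ptr: *mut u8,
    len: usize,
  }

  impl OwnedBuf {
    pub fn new(len: usize) -> OwnedBuf {
      assert!(len > 0);
      let layout = Layout::array::<u8>(len).unwrap();
      // SAFETY: layout has non-zero size (len > 0 asserted above).
      let ptr = unsafe { alloc_zeroed(layout) };
      assert!(!ptr.is_null(), "allocation failed");
      OwnedBuf { ptr, len }
    }

    pub fn get(&self, i: usize) -> u8 {
      assert!(i < self.len); // the bounds check keeps this API safe
      // SAFETY: i < len, and ptr points to len initialized bytes.
      unsafe { *self.ptr.add(i) }
    }
  }

  impl Drop for OwnedBuf {
    fn drop(&mut self) {
      let layout = Layout::array::<u8>(self.len).unwrap();
      // SAFETY: ptr was allocated in new() with this exact layout.
      unsafe { dealloc(self.ptr, layout) }
    }
  }

Ownership guarantees the buffer is freed exactly once, and there is no way for safe code to read out of bounds or touch the buffer after it is dropped.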

132. chillingeffect ◴[] No.43385092{4}[source]
Eh. Good C programmers know what's safe and what's not. Often comments call out sketchy stuff. Just because it's not a language keyword doesn't mean it's not called out.

Bad C programmers though? Their stuff is more dangerous and they don't know when and don't call it out and should probably stick to Rust.

replies(4): >>43385138 #>>43385346 #>>43385693 #>>43390514 #
133. pests ◴[] No.43385097{9}[source]
https://www.ioccc.org/years.html
134. rendaw ◴[] No.43385098[source]
While everything you say is true, your reply (and most of its siblings!) entirely misses GP's point.

All languages at some point interface with syscalls or low level assembly that can be done wrong, but one of Rust's selling points is a safe wrapping of low-level interactions. Like safe heap allocation/deallocation with `Box`, or swapping with `swap`, etc. Except... here.

Why does a library like zlib need to go beyond Rust's safe offerings? Why doesn't Rust provide safe versions of the constructs zlib needs?

135. anonymoushn ◴[] No.43385115{8}[source]
Generic SIMD abstractions are of quite limited use. I'm not sure what's objectionable about the thing Rust has shipped (in nightly) for this, which is more or less the same as the stuff Zig has shipped for this (in a pre-1.0 compiler version).
replies(1): >>43389051 #
136. sunshowers ◴[] No.43385138{5}[source]
No, it's been proven over and over that simply knowing invariants is not enough, in long-term projects built by large teams where team members change over time. Even the most experienced C developers are going to fail every so often. You need tooling that automates those invariants, and you need that tooling to fail closed.

I take a hard line on this stuff because we can either keep repeating the fundamental mistake of believing things like "willpower" to write correct code are real, or we can move on and adopt better tooling.

137. RossBencina ◴[] No.43385163{3}[source]
Hydrogen sulfide is highly corrosive (a big problem in sewers and associated infrastructure); I highly doubt you would choose to introduce it to gas pipelines on purpose.
138. jchw ◴[] No.43385188{7}[source]
Well, there's a category difference between a crash as in a panic and a crash as in a CPU exception. Usually, "safe" programming limits crashes to language-level error handling, which allows you to easily reason about the nature of crashes: if the type system is sound and your program doesn't use unsafe, the only way it should crash is by panic, and panics are recoverable and leave your program in a well-defined state. By the time you get to a signal handler, you're too late. Admittedly, there are some cases where this is less important than others... misaligned load/store wouldn't lead to a potential RCE, but if it can bring down a program it still is a potential DoS vector.

Of course, in practice, even in Rust, it isn't strictly true that programs without unsafe can't crash with fatal runtime errors. There's always stack overflows, which will crash you with a SIGABRT or equivalent operating system error.
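
To illustrate the well-defined-state point, a small sketch (assuming the default unwinding panic runtime, not panic=abort):

  use std::panic;

  fn main() {
    // Indexing out of bounds panics instead of reading wild memory,
    // and the unwind can be caught at a well-defined boundary.
    let result = panic::catch_unwind(|| {
      let v = vec![1, 2, 3];
      v[99]
    });
    assert!(result.is_err());
    println!("recovered; program state is still valid");
  }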

replies(2): >>43387323 #>>43387638 #
139. rat87 ◴[] No.43385190{3}[source]
My understanding is that the user who writes an unsafe block in a safe function is responsible for making sure that it doesn't do anything to undermine safety, and that the function isn't lying about exposing a safe interface. I think at one point before Rust 1.0 there was even a suggestion to rename it trustme. Of course users can easily mess up, but the point is to minimize the use of unsafe so it's easier to check, and to create interfaces that can be used safely.
140. RossBencina ◴[] No.43385204{7}[source]
> maintainers of those features don't have the discipline to see them through.

This take makes me sad. There are a lot of reasons why an open source contributor may not see something through. "Lack of discipline" is only one of them. Others that come to mind are: lack of time, lack of resources, lack of capability (i.e. good at writing code, but struggles to navigate the social complexities of shepherding a significant code change), clinically impaired ability to "stay the course" and "see things through" (e.g. ADHD), or maybe it was a collaborative effort and some of the parties dropped out for any of the aforementioned reasons.

I don't have a solution, but it does kinda suck that open source contribution processes are so dependent on instigators being the responsible party to seeing a change all the way through the pipeline.

141. throwaway2037 ◴[] No.43385221{10}[source]
I assume you are hinting that 'int' is signed here? And that signed overflow is UB in C? Real question: ignoring what the ISO C language spec says, are there any modern hardware platforms (say, ARM64 and x86-64) that do not use two's complement to implement signed integers? I don't know of any. As I understand it, two's complement correctly supports overflow for signed arithmetic.

I might be old, but more than 10 years ago, hardly anyone talked about UB in C and C++ programming. In the last 10 years, it is all the rage, but seems to add very little to the conversation. For example, if you program C or C++ with the Win32 API, there are loads of weird UB-ish things that seem to work fine.

replies(3): >>43385280 #>>43385345 #>>43385566 #
142. no_wizard ◴[] No.43385259{4}[source]
I'm simply asking how we know the safety guarantees hold; it's not a hypothetical extreme. Not really sure where the extreme comes in.

If you take Rust at face value, then this seems to me like an obvious question to ask.

replies(1): >>43386052 #
143. no_wizard ◴[] No.43385278{12}[source]
Unless you clearly define how and when you’re going to handle removing a standard or updating it to reflect better use cases.

Language designers admittedly should worry about constant breakage, but it's fine to have some churn, and we shouldn't be so concerned about it that it freezes everything.

144. steveklabnik ◴[] No.43385280{11}[source]
> Ignoring what the ISO C language spec says, are there any modern hardware platforms (say: ARM64 and X86-64) that do not use two's complement to implement signed integers?

This is not how compilers work. Optimization happens based on language semantics, not on what platforms do.
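
As a concrete contrast, Rust pins down overflow at the language level and gives you explicit methods when you want a particular behavior; a quick sketch:

  fn main() {
    let x = i32::MAX;
    // Plain `x + 1` is defined to panic in debug builds and wrap in
    // release builds -- never UB. To ask for a specific behavior:
    assert_eq!(x.wrapping_add(1), i32::MIN); // two's-complement wrap
    assert_eq!(x.checked_add(1), None); // overflow reported as None
    assert_eq!(x.saturating_add(1), i32::MAX); // clamp at the bound
  }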

145. throwaway2037 ◴[] No.43385295{8}[source]

    > absolutely laughable c-strings that perform terribly
Not much being said here in 2025. Any good project will quickly switch to a tiny structure that holds a char* and a length. There are plenty of open source libs to help you.
replies(1): >>43386634 #
146. throwaway2037 ◴[] No.43385326{6}[source]
Why does Vec need to have any unsafe code? If you respond "speed"... then I will scratch my chin.

    > For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.
I'm sure you already know this, but you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.
replies(1): >>43385620 #
147. throwaway2037 ◴[] No.43385330{6}[source]

    > However, for the autovectorizer to do a good job you have to write code in a very special way
Can you give an example of this "very special way"?
replies(1): >>43386642 #
148. jandrewrogers ◴[] No.43385345{11}[source]
At least in recent C++ standards, integers are defined as two's complement. As a practical matter, whatever hardware like that may still exist doesn't have a modern C++ compiler, rendering it a moot point.

UB in C is often found where different real hardware architectures had incompatible behavior. Rather than biasing the language for or against different architectures, they left it to the compiler to figure out how to optimize for the cases where instruction behavior diverges. This is still true on current architectures, e.g. shift overflow behavior, which is why shift overflow is UB.

149. fasterthanlime ◴[] No.43385346{5}[source]
True! Only, Good C programmers don’t exist.
150. throwaway2037 ◴[] No.43385347{4}[source]

    > What language is the JVM written in?
I am pretty sure it is C++.

I like your second paragraph. It is well written.

replies(1): >>43386157 #
151. jandrewrogers ◴[] No.43385419{6}[source]
The example here is trivially safe but more general SIMD safety is going to be extremely difficult to analyze for safety, possibly intractable.

For example, it is perfectly legal to dereference a vector pointer that references illegal memory if you mask the illegal addresses. This is a useful trick and common in e.g. idiomatic AVX-512 code. The mask registers are almost always computed at runtime so it would be effectively impossible to determine if a potentially illegal dereference is actually illegal at compile-time.

I suspect we’ll be hand-rolling unsafe SIMD for a long time. The different ISAs are too different, inconsistent, and weird. A compiler that could make this clean and safe is like fusion power, it has always been 10 years away my entire career.
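
For illustration, roughly what that trick looks like with Rust's AVX-512 intrinsics (nightly-gated at the time of writing; masked_tail_load is a made-up helper name for this sketch):

  #[cfg(target_arch = "x86_64")]
  #[target_feature(enable = "avx512f")]
  unsafe fn masked_tail_load(data: &[i32]) -> core::arch::x86_64::__m512i {
    use core::arch::x86_64::_mm512_maskz_loadu_epi32;
    // One mask bit per valid lane; lanes past the end of the slice are
    // masked off, so the hardware never touches (or faults on) them,
    // even though the load nominally spans a full 64-byte vector.
    let n = data.len().min(16);
    let k = ((1u32 << n) - 1) as u16;
    _mm512_maskz_loadu_epi32(k, data.as_ptr())
  }

No compile-time analysis can see that the mask was derived from data.len(), which is the point being made above.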

replies(1): >>43385562 #
152. rat87 ◴[] No.43385422{4}[source]
I don't think what something was written in should count. Baring bugs it should still be memory safe. But I believe JVM has ffi and as soon as you use ffi you risk messing up that memory safety.
replies(1): >>43386030 #
153. pjmlp ◴[] No.43385521[source]
It goes both ways, many C folks call files full of inline Assembly and compiler specific extensions, C.
154. vlovich123 ◴[] No.43385562{7}[source]
Presumably a bounds check on the mask could be done, or a safe variant exposed that does that trick under the hood. But yeah, I don't disagree that "safe SIMD" is unlikely to scratch the itch for various applications, but hopefully at least it'll scratch a lot of them, so that the remaining unsafe is reduced.
replies(1): >>43385608 #
155. oneshtein ◴[] No.43385566{11}[source]
AI rewrote to avoid undefined behavior:

  int average(int x, int y) {
    long sum = (long)x + y;
    if(sum > INT_MAX || sum < INT_MIN)
        return -1; // or any value that indicates an error/overflow
  
    return (int)(sum / 2);
  }
replies(5): >>43386128 #>>43386231 #>>43386269 #>>43386613 #>>43396071 #
156. umanwizard ◴[] No.43385577[source]
> I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library.

This is such a widespread misunderstanding… one of the points of rust (there are many other advantages that have nothing to do with safety, but let’s ignore those for now) is that you can build safe interfaces, possibly on top of unsafe code. It’s not that all code is magically safe all the time.

157. fooker ◴[] No.43385608{8}[source]
No, a bounds check defeats the purpose of SIMD in these cases.
replies(1): >>43390317 #
158. NobodyNada ◴[] No.43385620{7}[source]
Rust doesn't have compiler-magic support for anything like a vector. The language has syntax for fixed-sized arrays on the stack, and it supports references to variable-length slices; but it has no magic for constructing variable-length slices (e.g. C++'s `new[]` operator). In fact, the compiler doesn't really "know" about the heap at all.

Instead, all that functionality is written as Rust code in the standard library, such as Vec. This is what I mean by using unsafe code to "teach" the borrow checker: the language itself doesn't have any notion of growable arrays, so you use unsafe to define its semantics and interface, and now the borrow checker understands growable arrays. The alternative would be to make growable arrays some kind of compiler magic, but that's both harder to implement correctly and not generalizable.

> you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.

That's true and that's a great design pattern in C as well. But there are some crucial differences:

- Rust has no undefined behavior outside of unsafe blocks. This means you only need to audit unsafe blocks (and any invariants they assume) to be sure your program is UB-free. C does not have this property even if you code defensively at interface boundaries.

- In Rust, most of the invariants can be checked at compile time; the need for runtime asserts is less than in C.

- C provides no way to defend against dangling pointers without additional tooling & runtime overhead. For instance, if I write a dynamic vector and get a pointer to the element, there's no way to prevent me from using that pointer after I've freed the vector, or appended an element causing the container to get reallocated elsewhere.

Rust isn't some kind of silver bullet where you feed it C-like code and out comes memory safety. It's also not some kind of high-overhead garbage collected language where you have to write unsafe whenever you care about performance. Rather, Rust's philosophy is to allow you to define fundamental operations out of small encapsulated unsafe building blocks, and its magic is in being able to prove that the composition of these operations is safe, given the soundness of the individual components.

The stdlib provides enough of these building blocks for almost everything you need to do. Unsafe code in library/systems code is rare and used to teach the language of new patterns or data structures that can't be expressed solely in terms of the types exposed by the stdlib. Unsafe in application-level code is virtually never necessary.
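
A tiny sketch of that third point, since it's the one C fundamentally can't express:

  fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // a borrow pointing into the vector's heap buffer
    v.push(4); // may reallocate and move the buffer
    // println!("{first}"); // uncommenting this line is a compile error:
    // cannot borrow `v` as mutable because `first` is still in use
  }

As written this compiles, because the borrow ends before the push; the moment you try to use `first` after the push, the compiler rejects the program instead of letting a dangling read happen at runtime.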

159. j-krieger ◴[] No.43385651[source]
This is not really true. You have to uphold those guarantees yourself. With unsafe preconditions, if you don't, the code will still crash loudly (which is better than undefined behaviour).
replies(1): >>43386098 #
160. j-krieger ◴[] No.43385656{3}[source]
> Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees

Inside that block, both yes and no. You have to enforce those nice guarantees yourself. Code that violates them will still crash.

161. branko_d ◴[] No.43385670{3}[source]
Hydrogen sulfide is highly toxic (it's comparable to carbon monoxide). I doubt anyone in their right mind would put it intentionally in a place where it could leak around humans.

But it can occur naturally in natural gas.

replies(2): >>43385731 #>>43386126 #
162. hyperbrainer ◴[] No.43385686{4}[source]
You are lucky not to have smelled mercaptan (which is what is actually put in). Much, much worse than H2S.
replies(1): >>43387992 #
163. hyperbrainer ◴[] No.43385693{5}[source]
And where can I find this mythical "Good C programmer"?
164. k1t ◴[] No.43385731{4}[source]
I assume GP was referring to mercaptan, or similar. i.e. Something with a distinctive bad smell.

https://en.m.wikipedia.org/wiki/Methanethiol

165. exDM69 ◴[] No.43385883{5}[source]
They are marked as unsafe because there are hundreds and hundreds of intrinsics, some of which do memory access, some have side effects and others are arithmetic only. Someone would have to individually review them and explicitly mark the safe ones.

There was a bug open about it and the rationale was that no one with the expertise (some of these are quite arcane) was stepping up to do it. (edit: other comments in this thread suggest that this effort is now underway and first changes were committed a few weeks ago)

You can do safe SIMD using std::simd but it is nightly only at this point.

166. PhilipRoman ◴[] No.43385885{3}[source]
Ironically using C without libc turns out to be easier (except for portability of course). The kernel ABI is much more sane than <stdio.h>. The only useful parts of libc are DNS resolution and text formatting, both of which it does rather poorly.
replies(1): >>43390124 #
167. sunshowers ◴[] No.43386030{5}[source]
Does it help to think of "safe Rust" as a language that's written in "unsafe Rust"? That's basically what it is.
168. TheDong ◴[] No.43386052{5}[source]
Sorry, it's just that I have an allergic reaction to what sounds like people trying to make debate-bro arguments.

Like, when I say "use signal, it's secure", someone could respond "Ahh, but technically you can't prove the absence of bugs, signal could have serious bugs, so it's not secure, you fool", but like everyone reading this already knew "it's secure" means "based on current evidence and my opinion it seems likely to be more secure than alternatives", and it got shortened. Interpreting things as absolutes that are true or false is pointless debate-bro junk which lets you create strawmen out of normal human speech.

When someone says "1+1 = 2", and a debate-bro responds "ahh but in base-2 it's 10 you fool", it's just useless internet noise. Sure, it's correct, but it's irrelevant, everyone already knows it, the original comment didn't mean otherwise.

Responding to "safe Rust should never cause out-of-bounds access, use-after-free" with "ahh but we can't prove the compiler is safe, so rust isn't safe is it??" is a similarly sorta response. Everyone already knows it. It's self-evident. It adds nothing. It sounds like debate-bro "I want to argue with you so I'm saying something that's true, but we both already know and doesn't actually matter".

I think that allergic response came out, apologies if it was misguided in this case and you're not being a debate-bro.

replies(2): >>43386124 #>>43389238 #
169. pjmlp ◴[] No.43386071[source]
It is also an idea that traces back to the 1960's system languages, that apparently was unknown at Bell Labs.
170. pjmlp ◴[] No.43386079{11}[source]
Unfortunately, C++ in the last set of revisions has gotten that sequence wrong; many ideas are now PDF-implemented, only showing up in any compiler years later.

Fully-thought-out and feature-complete is something that has hardly been happening since C++17.

171. pjmlp ◴[] No.43386095{13}[source]
Since C++17, hardly anything goes "through boost, folly, absl, clang, or GCC (or are vendor-specific features) before going to std".
172. littlestymaar ◴[] No.43386098{3}[source]
With unsafe you get exactly the same kind of semantics as C, if you don't uphold the invariant the unsafe functions expect, you end up with UB exactly like in C.

If you want a clean crash instead of nondeterministic behavior, you need to use assert like in C, but it won't save you from compiler optimizations removing checks that are deemed useless (again, exactly like in C).

replies(2): >>43386272 #>>43388759 #
173. johnisgood ◴[] No.43386111{4}[source]
Ada is even better at checking for correctness. It needs to be talked about more. "Safer than C" has been Ada all along; people did not know this before they jumped on the Rust bandwagon.
174. iknowstuff ◴[] No.43386119{4}[source]
https://github.com/embassy-rs/embassy
175. johnisgood ◴[] No.43386124{6}[source]
But you have to admit that Rust zealots are misguided too, when they do not know or realize the obviousness of what you just said with regard to Rust.
replies(1): >>43386207 #
176. littlestymaar ◴[] No.43386126{4}[source]
> Hydrogen sulfide is highly toxic (it's comparable to carbon monoxide)

It's a bad comparison, since CO doesn't smell, which is what makes it dangerous, while H2S is detected by our sense of smell at concentrations much lower than the toxic dose (in fact, its biggest danger comes from the fact that at dangerous concentrations it doesn't smell like anything at all, due to our receptors being saturated).

It's not what's being put in natural gas, but it wouldn't be that dangerous if it were.

177. Jaxan ◴[] No.43386128{12}[source]
I’m not convinced that solution is much better. It can be improved to x/2 + y/2 (which still gives the wrong answer if both inputs are odd).
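
For what it's worth, there is a classic branch-free formulation (the Hacker's Delight trick) that avoids both the overflow and the need for a wider type; a sketch in Rust:

  // Floor of (x + y) / 2 for all i32 pairs, with no overflow: the AND
  // keeps the bits the operands share, and the shifted XOR adds half
  // of the bits where they differ.
  fn average(x: i32, y: i32) -> i32 {
    (x & y) + ((x ^ y) >> 1)
  }

  fn main() {
    assert_eq!(average(i32::MAX, i32::MAX), i32::MAX); // no overflow
    assert_eq!(average(3, 5), 4);
    assert_eq!(average(-3, -4), -4); // rounds toward negative infinity
  }

Note that it rounds toward negative infinity, unlike C's truncating division, so it is not a drop-in replacement where that difference matters.
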
178. atoav ◴[] No.43386151[source]
There are certain optimizations you can only make with unsafe, because the borrow checker is smart, but not all-knowing. There have been countless discussions about how unsafe isn't the ideal name; the intended meaning is more like "trust the programmer, they checked this manually".

That being said, most Rust programs don't ever need to use unsafe directly. If you go very low level or tune for performance it might become useful, however.

Or if you're lazy and just want to stop the borrow checker from saving your ass.

179. pjmlp ◴[] No.43386156{4}[source]
Depends on which JVM you are talking about, some are 100% Java, some are a mix of Java and C, others are a mix of Java and C++, in all cases a bit of Assembly as well.
180. pjmlp ◴[] No.43386157{5}[source]
Depends on which JVM you are talking about, some are 100% Java, some are a mix of Java and C, others are a mix of Java and C++, in all cases a bit of Assembly as well.
replies(1): >>43386246 #
181. ◴[] No.43386165{4}[source]
182. ricardobeat ◴[] No.43386189[source]
Is this a sloppy codebase? I browsed through a few random files, and easily 90% of functions are marked unsafe.
183. TheDong ◴[] No.43386207{7}[source]
Such a Rust zealot is a strawman, though please don't let me stop you from enjoying burning such a strawman.
replies(1): >>43386655 #
184. josefx ◴[] No.43386231{12}[source]
> long sum = (long)x + y;

There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

https://www.gnu.org/software/libc/manual/html_node/Range-of-...

> return -1; // or any value that indicates an error/overflow

-1 is a perfectly valid average for various inputs. You could return the larger type to encode an error value that is not a valid output, or just output the error and the average in two distinct variables.

AI and C seem like a match made in hell.

replies(1): >>43389904 #
185. throwaway2037 ◴[] No.43386246{6}[source]
You are right. I should have been more clear. I am talking about the bog standard one that most people use from Oracle/OpenJDK. A long time back it was called "HotSpot JVM". That one has source code available on GitHub. It is mostly C++ with a little bit of C and assembly.
replies(1): >>43386336 #
186. sidkshatriya ◴[] No.43386256[source]
You can choose unsafe Rust, which has many more optimizations and is much faster than safe Rust. Both are legitimate dialects of the language. Should you not feel confident with a library that is too "unsafe", you can use another crate. The Rust ecosystem is quite big by now.

Personally I would still rather use unsafe Rust than raw C, which has more edge cases. Also, when I'm not on the critical path I can always use safe Rust.

187. throwaway2037 ◴[] No.43386269{12}[source]
I don't know why this answer was downvoted. It adds valuable information to this discussion. Yes, I know that someone already pointed out that sizeof(int) is not guaranteed on all platforms to be smaller than sizeof(long). Meh. Just change the type to long long, and it works well.
replies(4): >>43386284 #>>43386391 #>>43389387 #>>43396082 #
188. lenkite ◴[] No.43386272{4}[source]
> With unsafe you get exactly the same kind of semantics as C

People seem to disagree.

Unsafe Rust Is Harder Than C

https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/

https://news.ycombinator.com/item?id=41944121

replies(1): >>43390805 #
189. gf000 ◴[] No.43386284{13}[source]
It literally returns a valid output value as an error.
replies(1): >>43389527 #
190. pjmlp ◴[] No.43386336{7}[source]
Define mostly, https://github.com/openjdk/jdk

- Java 74.1%

- C++ 14.0%

- C 7.9%

- Assembly 2.7%

And those values have been increasing for Java with each OpenJDK release.

replies(1): >>43386648 #
191. rob74 ◴[] No.43386386{4}[source]
TIL also - until today, I thought it was just "mercaptan". Turns out there are actually two variants of that:

> Ethanethiol (EM), commonly known as ethyl mercaptan is used in liquefied petroleum gas (LPG) and resembles odor of leeks, onions, durian, or cooked cabbage

Methanethiol, commonly known as methyl mercaptan, is added to natural gas as an odorant, usually in mixtures containing methane. Its smell is reminiscent of rotten eggs or cabbage.

...but you can still call it "mercaptan" and be ~ correct in most cases.

192. fxtentacle ◴[] No.43386389[source]
Yeah, this article about a rust "win" perfectly illustrates why I distrust all good news about it.

Rust zlib is faster than zlib-ng, but the latter isn't a particularly fast C contender. Chrome ships a faster C zlib library which Rust could not beat.

Rust beat C by using pre-optimized code paths and then C function pointers inside unsafe. Plus C SIMD inside unsafe.

I'd summarize the article as: generous chunks of C embedded into unsafe blocks help Rust to be almost as fast as Chrome's C Zlib.

Yay! Rust sure showed it's superiority here!!!!1!1111

replies(1): >>43386894 #
193. josefx ◴[] No.43386391{13}[source]
> Meh. Just change the type to long long, and it works well.

C libraries tend to support a lot of exotic platforms. zlib for example supports Unicos, where int, long int and long long int are all 64 bits large.

194. immibis ◴[] No.43386569[source]
Clearly marking unsafe code is no good for safety, if you have many marked areas.

Some codebases, you can grep for "unsafe", find no results, and conclude the codebase is safe... if you trust its dependencies.

This is not one of those codebases. This one uses unsafe liberally, which tells you it's about as safe as C.

"unsafe behaviour is clearly marked" seems to be a thought-stopping cliche in the Rust world. What's the point of marking them, if you still have them? If every pointer dereference in C code had to be marked unsafe (or "please" like in Intercal), that wouldn't make C any better.

195. immibis ◴[] No.43386613{12}[source]
We're about to see a huge uptick in bugs worldwide, aren't we?
196. saagarjha ◴[] No.43386634{9}[source]
I take it that you consider most major projects written in C to not be "good"?
replies(1): >>43389500 #
197. saagarjha ◴[] No.43386642{7}[source]
For example, many autovectorizers get upset if you put control flow in your loop.
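
A sketch of the kind of rewrite that tends to help (behavior varies a lot by compiler and target, so treat this as illustrative only):

  // Data-dependent branching in the body can trip up the vectorizer:
  fn sum_positive_branchy(xs: &[i32]) -> i32 {
    let mut sum = 0;
    for &x in xs {
      if x > 0 {
        sum += x;
      }
    }
    sum
  }

  // The same reduction as a branchless select is an easier target:
  fn sum_positive_branchless(xs: &[i32]) -> i32 {
    xs.iter().map(|&x| if x > 0 { x } else { 0 }).sum()
  }
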
198. saagarjha ◴[] No.43386648{8}[source]
JDK≠JVM
replies(1): >>43386750 #
199. johnisgood ◴[] No.43386655{8}[source]
How is it a strawman? Many people have misconceptions regarding Rust, while not even knowing about the existence of Ada/SPARK to begin with. They blindly spout "Rust is saFeEe!44!". If you are not a zealot, then it does not apply to you.
replies(1): >>43386969 #
200. gigatexal ◴[] No.43386705{3}[source]
Someone mentioned to me that for something as simple as a linked list you have to use unsafe in Rust.

Update: it's how the std lib does it: https://doc.rust-lang.org/src/alloc/collections/linked_list....

replies(5): >>43386891 #>>43387304 #>>43390238 #>>43391048 #>>43392633 #
201. pjmlp ◴[] No.43386750{9}[source]
If you are only talking about libjvm.so you would be right; then again, that alone won't be of much help to Java developers.
replies(1): >>43421298 #
202. rcxdude ◴[] No.43386771{3}[source]
C's safe subset is so small as to be basically useless, and in particular it's impossible to encapsulate behavior into a safe interface; in fact, it's fairly easy in C to make an interface which is impossible to use correctly (gets() and the like).
203. GTP ◴[] No.43386840{7}[source]
Which is just a convoluted way of saying that it is possible to write bugs in any language. Still, it's undeniable that some languages make a better job at helping you avoid certain bugs than others.
204. umanwizard ◴[] No.43386891{4}[source]
No you don’t. You can use the standard linked list that is already included in the standard library.

Coming up with these niche examples of things you need unsafe for in order to discredit rust’s safety guarantees is just not interesting. What fraction of programmer time is spent writing custom linked lists? Surely way less than 1%. In most of the other 99%, Rust is very helpful.
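
For reference, using the ready-made one requires no unsafe at all on the caller's side:

  use std::collections::LinkedList;

  fn main() {
    let mut list = LinkedList::new();
    list.push_back(1);
    list.push_back(2);
    list.push_front(0);
    assert_eq!(list.pop_front(), Some(0)); // no unsafe anywhere here
  }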

replies(1): >>43388348 #
205. FreshOldMage ◴[] No.43386894[source]
Did you even read the article? They compare specifically against the Chrome zlib library and beat it at 10 out of 13 chunk sizes considered.
206. umanwizard ◴[] No.43386969{9}[source]
I see about 1000x more anti-rust-zealot strawman arguments than rust zealots on this site. Can you give some examples of the misguided rust zealotry you’re talking about?
replies(2): >>43387069 #>>43387110 #
207. torginus ◴[] No.43387043[source]
I wonder why writing SIMD in high-level languages hasn't been figured out yet for CPUs (it has been the norm for GPUs since forever). Auto-vectorization universally sucks, and so do OpenMP directives.

There was ISPC, a separate C-like programming language just for SIMD, but I don't understand why regular compilers can't generate high-quality vectorized code.

replies(2): >>43388044 #>>43388570 #
208. ◴[] No.43387069{10}[source]
209. taejo ◴[] No.43387099{5}[source]
Mercaptan is a group of compounds, more than one of which is used as a gas odorant, so in some places gas smells of rotten eggs, similar to H2S, while in others it doesn't smell like that at all, but instead has a quite distinct smell reminiscent of garlic and durian.
210. johnisgood ◴[] No.43387110{10}[source]
I deleted my initial response, but FWIW you do not have to go far, take a look at the title of this submission.
211. ohmygoodniche ◴[] No.43387304{4}[source]
I love how the most common negative thing I hear about Rust is that a really uncommon data structure, which no one should write by hand and should almost always import, can be written using the unsafe language feature. Meanwhile Rust applications tend in most cases to be considerably faster, more correct, and more enjoyable to maintain than those in other languages. Must be a really awesome technology.
212. gpderetta ◴[] No.43387323{8}[source]
As you point out later, a SIGABRT or a SIGBUS would both be perfectly safe and really no different from a panic. With enough infra you could convert them to panics anyway (but it's probably not worth the effort).
replies(1): >>43388398 #
213. andrewchambers ◴[] No.43387402{3}[source]
It's more like letting a wet dog who you are watching closely quickly pass from your front door to the shower.
214. DannyBee ◴[] No.43387548{4}[source]
Hard disagree - if you violate the invariants in Rust unsafe code, you can cause global problems with local code. You can cause use-after-free, and other borrow checker violations, with incorrect unsafe code. Nothing will flag it, you will have no idea which unsafe code block is causing the isue, debugging will be hard.

I have no idea what your definition of encapsulation is, but mine is not this.

It's really only encapsulated in the sense that if you have a finite and small set of unsafe blocks, you can audit them more easily and be pretty sure that your memory safety bugs are in there. This reality doesn't really exist much anymore because of how much unsafe is often used, and since you have to audit all of them, whether they come from a library or not, it's not as useful to claim encapsulation as one thinks.

I do agree in theory that unsafe encapsulation was supposed to be a thing, but i think it's crazy at this point to not admit that unsafe blocks turned out to easily have much more global effects than people expected, in many more cases, and are used more readily than expected.

Saying "scaling reasoning" also implies someone reasoned about it, or can reason about it.

But the practical problem is the same in both cases - someone got the reasoning wrong and nothing flagged it.

Wanna go search github for how many super popular libraries using unsafe had global correctness issues due to local unsafe blocks that a human reasoned incorrectly about, but something like miri found? Most of that unsafety that turned out to be buggy also was done for (unnecessary) performance reasons.

What you are saying is just something people tell themselves to make them feel okay about using unsafe all over the place.

If you want global correctness, something has to verify it, ideally not-human.

In the end, the thing C lacks is tools like miri that can be used practically with low false-positives, not "encapsulation" of unsafe code, which is trivially easy to perform in C.

Let's not kid ourselves here and end up building an ecosystem that is just as bad as the C one, but our egos refuse to allow us to admit it. We should instead admit our problems and try to improve.

Unsafe also has legitimate use cases in rust, for sure - but most unsafe code i look at does not need to exist, and is not better than unsafe C.

I'll give you an example: There are entire popular embedded bluetooth stacks in rust using unsafe global mutable variables and raw pointers and ..., across threads, for everything.

This is not better than the C equivalent - in fact it's worse, because users think it is safe and it's very not.

At least nobody thinks the C version is safe. It will often therefore be shoved in a binary that is highly sandboxed/restricted/etc.

It would be one thing if this was in the process of being ported/translated from C. But it's not.

Using intrinsics that require alignment and the API was still being worked on - probably a reasonable use of unsafe (though still easy to cause global problems like buffer overflows if you screwed up the alignment)

The bluetooth example - unreasonable.

replies(2): >>43389237 #>>43391195 #
215. thrance ◴[] No.43387638{8}[source]
Also, AFAIK panics are not always recoverable in Rust. You can compile your project with `panic = "abort"`, in which case the program will quit immediately whenever a panic is encountered.
replies(1): >>43388463 #
216. tmtvl ◴[] No.43387667{4}[source]
Is there such a boundary? How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

The usual retort to these questions is 'well, the standard library uses unsafe code, so everything would need a disclaimer that it uses unsafe code, so that's a useless remark to make', but the basic issue still remains that the only clear boundary is whether a function 'contains' unsafe code, not whether a function 'calls' unsafe code.

If Rust did not have a mechanism to use external code, then it would be fine, because the only sources of unsafe code would be either the application itself or the standard library, so you could just grep for 'unsafe' to find the boundaries.

replies(3): >>43389854 #>>43390196 #>>43396112 #
217. throwaway150 ◴[] No.43387992{5}[source]
I have. It's worse no doubt. But it's not the smell of rotten eggs. My comment was meant to be tongue-in-cheek to correct the mistake of saying "H2S" in the GP comment.
replies(1): >>43390029 #
218. YoshiRulz ◴[] No.43388044[source]
.NET (C#) is getting there with Vector<T>.
replies(1): >>43392279 #
219. datadeft ◴[] No.43388181{3}[source]
True; however, I have only seen this happen to achieve max perf. I have very limited experience, so this is confirmation bias on my end.
replies(1): >>43389075 #
220. vikramkr ◴[] No.43388348{5}[source]
I think the point is that it's funny that the standard library has to use unsafe to implement a data structure that's like the second data structure you learn in an intro to CS class
replies(3): >>43388447 #>>43388583 #>>43389181 #
221. jchw ◴[] No.43388398{9}[source]
Well, that's the thing though: in terms of Rust and Go and other safe programming languages, CPU exceptions are not "safe" even though they are not inherently dangerous. The point is that the subset of the language that is safe can't generate them, period. They are not accounted for in safe code.

There are uses for this, especially since some code will run in environments where you can not simply handle it, but it's also just cleaner this way; you don't have to worry about the different behaviors between operating systems and possibly CPU architectures with regards to error recovery if you simply don't generate any.

Since there are these edge cases where it wouldn't be possible to handle faults easily (e.g. some kernel code) it needs to be considered unsafe in general.

replies(1): >>43393003 #
222. Sharlin ◴[] No.43388447{6}[source]
Yeah, but Rust just proves the point here that (doubly) linked lists

a) are surprisingly nontrivial to get right,

b) have almost no practical uses, and

c) are only taught because they're conceptually nice and demonstrate pointers and O(1) vs O(n) tradeoffs.

Note that safe Rust has no problems with singly-linked lists or in general any directed tree structure.
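
For the record, a singly-linked list in entirely safe Rust, since each node uniquely owns its tail:

  // No unsafe required: ownership forms a straight line with no cycles.
  enum List<T> {
    Cons(T, Box<List<T>>),
    Nil,
  }

  fn main() {
    use List::{Cons, Nil};
    let list = Cons(1, Box::new(Cons(2, Box::new(Nil))));
    if let Cons(head, _) = &list {
      println!("head = {head}");
    }
  }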

223. jchw ◴[] No.43388463{9}[source]
Sure, but that is beside the point: if you compile code like that, you're intentionally making panics unrecoverable. The nature of panics from the language perspective is not any different; you're still in a well-defined state when it happens.

It's also possible to go a step further and practice "panic-free" Rust where you write code in such a way that it never links to the panic handler. Seems pretty hard to do, but seems like it might be worth it sometimes, especially if you're in an environment where you don't have anything sensible to do on a panic.

224. Sharlin ◴[] No.43388529[source]
To be fair, there's a safe portable SIMD abstraction brewing in `std::simd` but it's not stable yet. SIMD is just a terrible mess of platform differences in general, and making a SIMD-using program safe means ensuring the availability of every single intrinsic used, lest the program be unsound. Of course that's not what C or C++ programs typically do, but in that world unsoundness is the norm anyway.
225. queuebert ◴[] No.43388570[source]
Why do you say that? I would say SIMD is pretty well figured out in well-written code, e.g. small, tight loops over vectors. Unrolling and vectorizing a loop is not that hard and happens constantly on all our phones for signal processing, for example.
226. umanwizard ◴[] No.43388583{6}[source]
Why is it particularly funny?

C has to make a syscall to the kernel which ultimately results in a BIOS interrupt to implement printf, which you need for the hello world program on page 1 of K&R.

Does that mean that C has no abstraction advantage over directly coding interrupts with asm? Of course not.

replies(1): >>43389729 #
227. j-krieger ◴[] No.43388759{4}[source]
> With unsafe you get exactly the same kind of semantics as C, if you don't uphold the invariant the unsafe functions expect, you end up with UB exactly like in C.

This is not exactly true. Even in production code, unsafe preconditions check if you violate these rules.

Here: https://doc.rust-lang.org/core/macro.assert_unsafe_precondit... And here: https://google.github.io/comprehensive-rust/unsafe-rust/unsa...
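
A sketch of what tripping one of those checks looks like; note that they fire in builds with debug assertions (or ub-checks) enabled, per the first link:

  fn main() {
    let v = [1, 2, 3];
    // With the checks enabled, the standard library aborts with
    // "unsafe precondition(s) violated" here, instead of silently
    // reading out of bounds.
    let x = unsafe { *v.get_unchecked(10) };
    println!("{x}");
  }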

replies(1): >>43390437 #
228. cmrdporcupine ◴[] No.43389051{9}[source]
The issue is that it's sitting in nightly for years. Many many many years.

I don't write software targeting nightly, for good reason.

229. steveklabnik ◴[] No.43389075{4}[source]
An example of unsafe not for performance is when interacting with hardware directly.
replies(2): >>43394346 #>>43394354 #
230. tux3 ◴[] No.43389181{6}[source]
No, that's how the feature is supposed to work.

You design an abstraction which is unsafe inside and exposes a safe API to users. That is really how unsafe is meant to be used.

Of course the standard library uses unsafe. This is where you want unsafe to be, not in random user code. That's what it was made for.

231. burntsushi ◴[] No.43389237{5}[source]
The encapsulation referred to here is that you can expose a safe API that is impossible to misuse in a way that leads to undefined behavior. That's the succinct way of putting it anyway.

The `memchr` crate, for example, has an entirely safe API. Nobody needs to use `unsafe` to use any part of it. But its internals have `unsafe` littered everywhere. Could the crate have bugs that result in UB due to a particular use of the `memchr` API? Yes! Doesn't that violate encapsulation? No! A bug inside an encapsulated boundary does not violate the very idea of encapsulation itself.

Encapsulation is about blame. It means that if `memchr` exposes a safe API, and if you use `memchr` and you get UB as a result of some `unsafe` code inside of `memchr`, then that means the problem is inside of `memchr`. The problem is definitively not with the caller using the library. That is, they aren't "holding it wrong."

I'm surprised that someone with as much experience as you is missing this nuance. How many times have you run into a C library API that has UB, you report the bug and the maintainer says, "sorry bro, but you're holding that shit wrong, your fault." In Rust, the only way that ought (very specifically using ought and not is) to be true is if the API is tagged with `unsafe`.

Now, there are all sorts of caveats that don't change the overall point. "totally safe transmute" being an obvious demonstration of one of them[1] by fiddling with `/proc/self/mem`. And of course, Rust does have soundness bugs. But neither of these things change the fundamental idea of encapsulation.

And yes, one obvious shortcoming of this approach is that... well... people don't have to follow it! People can lie! I can expose a safe API, you can get UB and I can reject blame and say, "well you're holding it wrong." And thus, we're mostly back into how languages like C deal with these sorts of things. And that is indeed a bummer. And there are for sure examples of that in the ecosystem. But the glaring thing you've left out of your analysis is all of the crates that don't lie and specifically set out to provide a sound API.

The great thing about progress is that we don't have to perfect. I'm really disappointed that you seem to be missing the forest for the trees here.

[1]: https://github.com/ben0x539/totally-safe-transmute/blob/main...

replies(1): >>43389748 #
232. no_wizard ◴[] No.43389238{6}[source]
I don't think we can go beyond the 'human limitations', if you will, of any software.

Bugs happen; they're bound to. It's more: what is enforcing the Rust language guarantees, and how do we know it's enforcing them with an accuracy one can reasonably ascertain?

I feel that can only happen once Rust itself is (or perhaps it meaningfully already is) written in pure, 100% safe Rust. At which point, I believe the matter will be largely settled.

Until then, I don't think it's unreasonable for someone to ask how it verifies its assertions, is all.

replies(2): >>43389787 #>>43396137 #
233. NobodyNada ◴[] No.43389387{13}[source]
Copypasting a comment into an LLM, and then copypasting its response back is not a useful contribution to a discussion, especially without even checking to be sure it got the answer right. If I wanted to know what an LLM had to say, I can go ask it myself; I'm on HN because I want to know what people have to say.
replies(1): >>43389546 #
234. sophacles ◴[] No.43389500{10}[source]
Most major software projects are not good, no matter what language.
235. oneshtein ◴[] No.43389527{14}[source]
An error value is valid output in both cases.
replies(1): >>43393545 #
236. ◴[] No.43389546{14}[source]
237. cesarb ◴[] No.43389729{7}[source]
> C has to make a syscall to the kernel which ultimately results in a BIOS interrupt to implement printf,

That's not the case since the late 1990s. Other than during early boot, nobody calls into the BIOS to output text, and even then "BIOS interrupt" is not something normally used anymore (EFI uses direct function calls through a function table instead of going through software interrupts).

What really happens in the kernel nowadays is direct memory access and direct manipulation of I/O ports and memory mapped registers. That is, all modern operating systems directly manipulate the hardware for text and graphics output, instead of going through the BIOS.

replies(1): >>43389918 #
238. DannyBee ◴[] No.43389748{6}[source]
"The encapsulation referred to here is that you can expose a safe API that is impossible to misuse in a way that leads to undefined behavior. That's the succinct way of putting it anyway."

Well, no, actually. At least, not in an (IMHO) useful way.

I can break your safe API by getting the constraints wrong on unsafe code inside that API.

Also, unsafe usage elsewhere is not local. I can break your impossible to misuse API through an unsafe API that someone else used elsewhere, completely outside my control, and then wrapped in a safe API. Some of these are of course, bugs in rust/compiler, etc. I'm just offering i've yet to hear the view taken that the ability to do this is always a bug in the language/compiler, and will be destroyed on sight.

Beyond that:

To the degree this is useful encapsulation for tracking things down, it is only useful when the amount is small and you can reason about it.

This is simply no longer true in any reasonably sized rust app.

As a result, as you say, it is then only useful for saying who is at fault in the sense of whether i'm holding it wrong. To me, that is basically worthless at scale.

"I'm surprised that someone with as much experience as you is missing this nuance."

I don't miss it - I just don't think it's as useful as claimed.

This level of "encapsulation", which provides no real guarantee except "the set of bugs is caused somewhere by the set of unsafe blocks" is fairly unhelpful at large scale.

I have audited hundreds of thousands of lines of rust code to find bugs caused by unsafe usage. The thing that made it at all tractable was not this form of encapsulation - it was in fact 100% worthless in doing that at scale, because it was still tons and tons and tons of code to try to reason about, across lots of libraries and dependencies. As you say, it only helps assign blame once found, and blame is not that useful at scale. It does not make the code safer. It does not make it easier to track down. It only declares, after i've spent all the time, that it is not my fault. But also nobody has to do anything anyway.

For small programs, this buys you something, as i said, as long as the set of unsafe blocks is small enough to be tractable to audit, cool. You can find bugs easier. In that sense, the tons of hobby programs, small libraries, etc, are a lot less likely to have bugs when written in rust (modulo their dependencies on unsafe code).

But like, your position seems to be that it is fairly useful that i can go to a library and tell them "your crap is broken", and be right about it. To me, this does not buy a lot in the kinds of large complex systems rust hopes to replace in C/C++. (it also might be false)

In actually tracking down the bug, which is what i care about, the thing that was useful is that i could run miri and lots of other things on it and get useful results that pointed me towards the most likely causes of issues.

So don't get me wrong - this is overall better than C, but writing lots of rust (i haven't written C/C++ at all in a while, actually) I still tire of the constant claims of the amount of rust safety. You are the rare rust person who understand the nuance and is willing to admit there is any flaw or non-perfection whatsoever.

A you say, there are lots of things that ought to be true in rust that are not. You have a good understanding of this nuance, and where it fails.

But it is you, i believe, who is missing the forest for the trees, because most do not have this.

I'll be concrete and i guess controversial in a way you are 100% free to disagree with, but might as well throw a stake in the ground - it's hacker news, might as well have fun making a comment someone can beat me over the head with later: If nothing changes, and the rust ecosystem grows by a factor of 100x while changing nothing about how it behaves WRT unsafe usage, and no tooling gets significantly better, Rust will not end up better than C in practice. I don't mean it will not have fewer bugs/vulnerabilities - i think it would, by far!

But whether you have 100 billion of them, or 1 billion of them, and thus made a 100x improvement, i don't think matters too much when it's still a billion :)

Meanwhile, if the rust ecosystem got worse about unsafe, but made tools like Miri 50x faster (and made more tools like it that help verification in practice), it would still end up better than C.

To me - it is the tooling, and not this sort of encapsulation, that will make a practical difference or not at scale.

The idea that you will convince people not to write broken unsafe code in ways that break safe APIs, or that the ability to assign blame matters, is very strange to me, and is no better than C. As systems grow, the likelihood of totally safe transmutes growing in them is basically 100% :)

FWIW - I also agree you don't have to be perfect, nor do I fault rust for not being perfect. Instead, i simply disagree that at scale, this sort of ability to place blame is useful. To me, it's the ability to find the bugs quickly and as automated as possible that is useful.

I need to find the totally safe transmutes causing issues in my system, not hand it to someone else after determining it couldn't be my fault.

replies(2): >>43390293 #>>43391330 #
239. steveklabnik ◴[] No.43389787{7}[source]
There is no possible way for something to be written in 100% memory safe code, no matter what the language, if you include "no unsafe code anywhere in the call stack." Interacting with the hardware is not memory safe. Any useful program must on some level involve unsafety. This is true for every programming language.
replies(1): >>43392377 #
240. steveklabnik ◴[] No.43389854{5}[source]
> How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

The point is that you don't need to. The guarantees compose.

> The usual retort to these questions is 'well, the standard library uses unsafe code

It's not about the standard library, it's much more fundamental than that: hardware is not memory safe to access.

> If Rust did not have a mechanism to use external code then it would be fine

This is what GC'd languages with runtimes do. And even they almost always include FFI, which lets you call into arbitrary code via the C ABI, allowing for unsafe things. Rust is a language intended to be used at the bottom of the stack, and so has more first-class support, calling it "unsafe" instead of FFI.

241. cesarb ◴[] No.43389904{13}[source]
> There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

That used to be the case for 32-bit platforms, but most 64-bit platforms in which GNU libc runs use the LP64 model, which has 32-bit int and 64-bit long. That documentation seems to be a bit outdated.

(One notable 64-bit platform which uses 32-bit for both int and long is Microsoft Windows, but that's not one of the target platforms for GNU libc.)

242. umanwizard ◴[] No.43389918{8}[source]
Thanks for the information (I mean that genuinely, not sarcastically — I do really find it interesting). But it doesn’t really impact my point.
243. hyperbrainer ◴[] No.43390029{6}[source]
If that is the case (and I have no reason to believe otherwise), I apologise. Should work on detecting tone better.
244. atiedebee ◴[] No.43390124{4}[source]
By text formatting, do you mean printf and the like? It is pretty powerful in my experience.

Also, DNS resolution isn't part of the C standard, it's a POSIX interface I think.

245. cesarb ◴[] No.43390196{5}[source]
> Is there such a boundary? How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

Yes, there is a boundary, and usually it's either the function itself, or all methods of an object. For instance, a function I wrote recently goes somewhat like this:

  fn read_unaligned_u64_from_byte_slice(src: &[u8]) -> u64 {
    // Safely panics unless the slice is exactly 8 bytes long.
    assert_eq!(src.len(), size_of::<u64>());
    // SAFETY: the assert above guarantees 8 readable bytes, and
    // read_unaligned imposes no alignment requirement.
    unsafe { std::ptr::read_unaligned(src.as_ptr().cast::<u64>()) }
  }
The read_unaligned function (https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html) has two preconditions which have to be checked manually. When doing so, you'll notice that the "src" argument must have at least 8 bytes for these preconditions to be met; the "assert_eq!()" call before that unsafe block ensures that (it will safely panic unless the "src" slice has exactly 8 bytes). That is, my "read_unaligned_u64_from_byte_slice" function is safe, even though it calls unsafe code; the function is the boundary between safe and unsafe code. No callers of that function have to worry that it calls unsafe code in its implementation.
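
A quick usage sketch: the caller never writes `unsafe` itself.

  let bytes = [1u8, 2, 3, 4, 5, 6, 7, 8];
  let value = read_unaligned_u64_from_byte_slice(&bytes);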
246. estebank ◴[] No.43390238{4}[source]
Note that that is a doubly linked list, because it is a "soup of ownership" data structure. A singly linked list has clear ownership so it can be modelled in safe Rust.

On modern architectures you shouldn't use either unless you have an extremely niche use-case. They are not general-use data structures anymore in a world where cache locality is a thing.
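
A minimal sketch of that safe version, where every node is owned by exactly one predecessor:

  struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
  }

  struct List<T> {
    head: Option<Box<Node<T>>>,
  }

  impl<T> List<T> {
    fn push_front(&mut self, value: T) {
      // `take` moves the old head out, so ownership stays linear and
      // the borrow checker accepts this without any `unsafe`.
      let next = self.head.take();
      self.head = Some(Box::new(Node { value, next }));
    }
  }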

247. burntsushi ◴[] No.43390293{7}[source]
> I can break your safe API by getting the constraints wrong on unsafe code inside that API.

This doesn't make any sense at all as a broader point. Of course you can break the safe API by introducing a bug inside the implementation! I honestly just cannot figure out how you have a misunderstanding of this magnitude, and I'm forced to conclude that we are mis-communicating at some level.

I did read the rest of your comment, and the most significant point I can take away from it is that you're making a claim about scale. I think the dissonance introduced with comments like the one above makes it very hard for me to trust your experience here and the conclusions you've drawn from it. But I will note that whether Rust's safety story scales is from my perspective a different thing entirely from the factual claim that Rust enables safe encapsulation of `unsafe` usage.

You may say that just because Rust enables safe encapsulation doesn't mean programmers using Rust actually follow through with that in practice. And yes, absolutely, it doesn't. You can't derive an is from an ought. But in my experience, it totally does. I do work on lots of "hobby" stuff in Rust (although I try to treat it professionally, I just mean that I am not directly paid for it beyond donations), but I am also paid to write Rust too. I do not have your experience with Rust at scale, so I cannot refute it. But you've said enough questionable things here that I can't trust it either.

replies(1): >>43394829 #
248. vlovich123 ◴[] No.43390317{9}[source]
Not necessarily if you can hoist the bounds check outside of the loop somehow.
249. bangaladore ◴[] No.43390437{5}[source]
Quoted from your link

> Safe Rust: memory safe, no undefined behavior possible. Unsafe Rust: can trigger undefined behavior if preconditions are violated.

So Unsafe Rust from a UB perspective is no different than C/C++. If preconditions are violated, UB can occur, affecting anywhere in the program. It's unclear how the compiler could check anything about preconditions in a block explicitly used to say that the developer is the one upholding the preconditions.

replies(2): >>43392757 #>>43397883 #
250. 12_throw_away ◴[] No.43390514{5}[source]
Dunno why this is being downvoted, obviously no true Scotsman would ever use memory after freeing it.
251. estebank ◴[] No.43390622{4}[source]
Which also doesn't preclude someone else writing an abstraction on top that provides an API using references.
replies(1): >>43391026 #
252. kibwen ◴[] No.43390805{5}[source]
Using references in unsafe Rust is harder than using raw pointers in C.

Using raw pointers in unsafe Rust is easier than using raw pointers in C.

The solution is to not manipulate references in unsafe code. The problem is that in old versions of Rust this was tricky. Modern versions of Rust have addressed this by adding first-class facilities for producing pointers without needing temporary references: https://blog.rust-lang.org/2024/10/17/Rust-1.82.0.html#nativ...
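
A minimal sketch of that newer syntax (`&raw`, stable since Rust 1.82):

  #[repr(packed)]
  struct Packed {
    field: u64,
  }

  fn field_ptr(p: &mut Packed) -> *mut u64 {
    // `&raw mut` produces the pointer directly, without materializing an
    // intermediate `&mut u64` (which the compiler rejects for packed
    // fields, since the reference might be unaligned).
    &raw mut p.field
  }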

253. steveklabnik ◴[] No.43391026{5}[source]
Absolutely, that's important too, thanks.
254. miki123211 ◴[] No.43391048{4}[source]
This is far less of a problem than it would be in a C-like language, though.

You can implement that linked list just once, audit the unsafe parts extensively, provide a fully safe API to clients, and then just use that safe API in many different places. You don't need thousands of project-specific linked list reimplementations.

255. sunshowers ◴[] No.43391195{5}[source]
> It's really only encapsulated in the sense that if you have a finite and small set of unsafe blocks, you can audit them easier and be pretty sure that your memory safety bugs are in there. This reality really doesn't exist much anymore because of how much unsafe is often used, and since you have to audit all of them, whether they come from a library or not, it's not as useful to claim encapsulation as one thinks.

Is it? I've written hundreds of thousands of lines of production Rust, and I've only sparingly used unsafe. It's more common in some domains than others, but the trend I've seen is for people to aggressively encapsulate unsafe code.

Unsafe Rust is quite difficult to write correctly. (The &mut provenance rules are a bit scary!) But once a safe abstraction has been built around it and the unsafe code has passed Miri, in practice I've seen people be able to not worry about it any more.

By the way I maintain cargo-nextest, and we've added support for Miri to make its runs many times faster [1]. So I'm doing my part here!

[1] https://nexte.st/docs/integrations/miri/

replies(1): >>43392697 #
256. sunshowers ◴[] No.43391330{7}[source]
Are you writing lots of FFI and/or embedded code? Those are the main places I see unsafe being used a lot.

The tooling and the encapsulation go hand in hand.

> The idea that you will convince people not to write broken unsafe code, in ways that break safe APIs, or that the ability to assign blame matters, is very strange to me, and is no better than C. As systems grow, the likelihood of totally safe transmutes growing in them is basically 100% :)

To be honest this doesn't track with my experience at all. Unsafe just isn't that commonly used in projects I contribute to. When it is, it is aggressively encapsulated.

replies(1): >>43394474 #
257. kazinator ◴[] No.43392018[source]
> clearly marked by the unsafe block.

Rust has macros; are macros prohibited from generating unsafe blocks, so that macro invocations don't have to be suspected of harboring unsafe code?

replies(1): >>43392644 #
258. uecker ◴[] No.43392234{8}[source]
There is definitely a distinction between safe and unsafe code in C; it is just not a simple binary distinction. But this does not make it impossible to screen C for unsafe constructions, and it also does not mean that detecting unsafe issues in Rust is always trivial.
259. uecker ◴[] No.43392246{10}[source]
But this is also easy to protect against if you use the tools available to C programmers. It is part of the Rust hype that we would be completely helpless here, but this is far from the truth.
260. uecker ◴[] No.43392262{8}[source]
Rust is better at this yes, but the practical advantage is not necessarily that huge.
261. torginus ◴[] No.43392279{3}[source]
That's just syntactic sugar (and a bit of architecture independence) over intrinsics. You can get the same in C++ just by wrapping intrinsics in classes, plus a few ifdefs.
262. no_wizard ◴[] No.43392377{8}[source]
I wasn't asking for 100%, I was asking for reasonable proof of the assertions.
replies(1): >>43392546 #
263. keybored ◴[] No.43392530[source]
> Kidding aside, I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library. At what point does it really stop mattering if this is C or Rust?

Kidding aside the 150-comment Unsafe Rust subthread was inevitable.

264. steveklabnik ◴[] No.43392546{9}[source]
You may like my next blog post.
replies(1): >>43392710 #
265. all2well ◴[] No.43392633{4}[source]
Don't Arc and Weak work for doubly linked lists? Rust docs recommend Weak as a way to break pointer cycles: https://doc.rust-lang.org/std/sync/struct.Arc.html#breaking-...
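
A sketch of the usual node shape (shown with Rc and RefCell for brevity; Arc plus a lock works the same across threads):

  use std::cell::RefCell;
  use std::rc::{Rc, Weak};

  struct Node {
    value: i32,
    // Forward pointers own their targets...
    next: Option<Rc<RefCell<Node>>>,
    // ...while back pointers are Weak, breaking the ownership cycle so
    // the nodes are actually freed when the list is dropped.
    prev: Option<Weak<RefCell<Node>>>,
  }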
266. steveklabnik ◴[] No.43392644{3}[source]
No. Just like function bodies can contain unsafe blocks.
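
For illustration, a macro can hide an unsafe block inside its expansion, so the call site looks like ordinary safe code:

  macro_rules! deref {
    ($p:expr) => {
      // The `unsafe` lives in the expansion, not at the call site.
      unsafe { *$p }
    };
  }

  // let x = deref!(some_raw_pointer); // no visible `unsafe` here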
267. burntsushi ◴[] No.43392697{6}[source]
> and we've added support for Miri to make its runs many times faster

Whoa. This might be the kick in the ass I needed to give cargo-nextest a whirl in my projects. Miri being slow is the single biggest annoyance I have with it!

replies(1): >>43393506 #
268. no_wizard ◴[] No.43392710{10}[source]
I think whenever someone takes the time to walk their audience through the nuances of this question it's a big win.

No different than how I asked the Go community how it could produce binaries for all major platforms it supports from any platform (i.e. you don't have to compile your Go code on Linux for it to work on Linux, you only have to set a flag - with the exception, if I recall correctly, of CGO dependencies, but that's a wild horse anyway).

replies(1): >>43401875 #
269. randomNumber7 ◴[] No.43392757{6}[source]
The rust compiler was written by chuck norris.
270. Filligree ◴[] No.43392828{3}[source]
> You need to add explicit bounds check or explicitly allocate in C though. It is not there if you do not add it yourself.

Yes — in C you can skip the bounds-checks and allocation, because you can convince yourself they aren't needed; the problem is you may be wrong, either immediately or after later refactoring.

In other memory-safe languages you don't risk the buffer overrun, but it's likely you'll get the bounds checks and allocation, and you have the overhead of GC.

Rust is close to alone in doing both.
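
A small sketch of the kind of safe code that typically compiles without bounds checks:

  // Iterating by element lets the compiler elide per-index bounds checks;
  // an indexed loop over `v[i]` may keep them unless the optimizer can
  // prove the index is always in range.
  fn sum(v: &[u64]) -> u64 {
    v.iter().copied().sum()
  }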

replies(1): >>43408972 #
271. comex ◴[] No.43393003{10}[source]
That’s largely true, but there are some exceptions (pun not intended).

In Rust, the CPU exception resulting from a stack overflow is considered safe. The compiler uses stack probing to ensure that as long as there is at least one page of unmapped memory below the stack (guard page), the program will reliably fault on it rather than continuing to access memory further below. In most environments it is possible to set up a guard page, including Linux kernel code if CONFIG_VMAP_STACK is enabled. But there are other environments where it’s not, such as WebAssembly and some microcontrollers. In those environments, the backend would have to add explicit checks to function prologs to ensure enough stack is available. I say “would have to”, not “does”: I’ve heard that on at least the microcontrollers, there are no such checks and Rust is just unsound at the moment. Not sure about WebAssembly.

Meanwhile, Go uses CPU exceptions to handle nil dereferences.

replies(1): >>43393106 #
272. jchw ◴[] No.43393106{11}[source]
Yeah, I glossed over the Rust stack overflow case. I don't know why, since literally two parent comments up I did mention it.

That said, I actually entirely forgot Go catches nil derefs in a segfault handler. I guess it's not a big deal since Go isn't really suitable for free-standing environments where avoiding CPU exceptions is sometimes more useful, so there's no particular reason why the runtime can't rely on it.

273. sunshowers ◴[] No.43393506{7}[source]
Would love to hear how it goes! Miri is generally single-threaded, but because nextest is process-per-test, each test gets a completely separate Miri context. A few projects have switched their Miri runs over to nextest and are seeing dramatic improvements in CI times, e.g. [1].

[1] https://bsky.app/profile/lukaswirth.bsky.social/post/3lkg2sl...

274. MaxBarraclough ◴[] No.43393545{15}[source]
The code is unarguably wrong.

average(INT_MAX, INT_MAX) should return INT_MAX, but it will get that wrong and return -1.

average(0,-2) should not return a special error-code value, but this code will do just that, making -1 an ambiguous output value.

Even its comment is wrong. We can see from the signature of the function that there can be no value that indicates an error, as every possible value of int may be a legitimate output value.

It's possible to implement this function in a portable and standard way though, along the lines of [0].

[0] https://stackoverflow.com/a/61711253/ (Disclosure: this is my code.)

replies(1): >>43396843 #
275. ◴[] No.43394346{5}[source]
276. ycombinatrix ◴[] No.43394354{5}[source]
or the operating system!

opening stdout (file descriptor 1) is not guaranteed safe by the compiler. There's an "unsafe" somewhere in there.

277. DannyBee ◴[] No.43394474{8}[source]
Yes - I spend about half my time with rust embedded, where unsafe code is just everywhere, whether needed or not.

There is still plenty in my non-embedded stuff, though to be fair a good amount of it is hardware-adjacent (i.e. i have to drive things like relay cards, just from a desktop machine).

But i've found plenty of broken unsafe in things like, uh, constraint solvers.

I would agree that useful and successful rust projects aggressively encapsulate (and attempt to avoid) unsafe usage.

I will still maintain my belief that this will not be enough over time and scale.

278. DannyBee ◴[] No.43394829{8}[source]
This doesn't seem like we are getting anywhere on this part of the thread, unfortunately.

My suggestion would be - if we are ever in the same place, let's just grab coffee or something.

In the end - i suspect we are just going to find we have different enough experiences that our views of safe encapsulation and its usefulness are very different.

Let's put that aside for a second - I'll also take one more pass at the original place we started, and then give up:

To go back all the way to where we started, the comment i was originally replying to said "No, C lacks encapsulation of unsafe code. This is very important. Encapsulation is the only way to scale local reasoning into global correctness."

So we were in fact talking about scale, and more particularly how to scale to global correctness - not really whether rust enables safe encapsulation, but whether encapsulation itself enables local reasoning to scale to global correctness (in theory or in practice).

My view here, restated more succinctly, is "their claim that encapsulation is the only way to scale local reasoning to global correctness is emphatically wrong" (both in theory and practice).

My argument there remains simple: Tooling is what enables you to scale local reasoning to global correctness, not encapsulation.

Putting aside how useful or not it is otherwise for a second, encapsulation, by itself, does not enable you to reason your way from local results to global results soundly at all - for exactly the reason you mention in the first sentence here - bugs in local correctness reasoning can have global correctness effects. Garbage in, garbage out. Encapsulation does not wave a wand at this and make it go away[1]. There are lots of other reasons, this is just the one we went down a bit of a rabbit hole on :)

Instead, it is tooling that lets you scale. If you have tooling that catches 95+% of local reasoning errors (feel free to choose your own bar), you can almost certainly parlay that into high-percent global correctness, regardless of whether anything is encapsulated at all or not.

Now: If encapsulation enables an easier job of that tooling, and i believe it helps a lot, fwiw, then that's useful. But it's the tooling you want, not the encapsulation. Again, concretely: If I could not safely encapsulate anything, but had tooling that caught 100% of local reasoning issues, i would be much better off than having 100% safely encapsulated code, but no tooling to verify local or global reasoning. This is true (to me) even if you lower the "catches 100% of local reasoning issues" down significantly.

[1] FWIW, i also don't argue that this problem is particular to rust. It's not, of course. It exists everywhere. But i'm not the one claiming that rust will enable you to scale local reasoning to global correctness through encapsulation :P

replies(1): >>43399202 #
279. umanwizard ◴[] No.43396071{12}[source]
Please stop posting AI-generated content to HN. It’s clear the majority of users hate it, given that it gets swiftly downvoted every time it’s posted.
280. umanwizard ◴[] No.43396082{13}[source]
I always downvote all AI-generated content regardless of whether it’s right or wrong, because I would like to discourage people from posting it.
281. umanwizard ◴[] No.43396097{5}[source]
Rust doesn’t have classes, nor can const values be modified, even in unsafe code. (did you mean “immutable”?)
282. umanwizard ◴[] No.43396112{5}[source]
The point of rust isn’t to formally prove that there are no bugs. It’s just to make writing certain classes of bugs harder. That’s what people are missing when they point out that yes, it’s possible to circumvent safety mechanisms. It’s a strawman: bulletproof, guaranteed security simply isn’t a design goal of rust.
283. umanwizard ◴[] No.43396137{7}[source]
Yes, the rust compiler, like all complex software, has bugs. And yes, those bugs could result in memory unsafety, undefined behavior, etc.

The same is true of every programming language. There might be bugs in clang or gcc so how can we prove that they actually follow the C++ spec? We can’t. rustc is no different, but nobody ever claimed it was, so why hold it to a higher standard than clang?

284. MaxBarraclough ◴[] No.43396843{16}[source]
Too late for me to edit: as josefx pointed out, it also fails to properly address the undefined behavior. The sums INT_MAX + INT_MAX and INT_MIN + INT_MIN may still overflow despite being done using the long type.

That won't occur on an 'LP64' platform, [0] but we should aim for proper portability and conformance to the C language standard.

[0] https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_m...
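
For comparison, a sketch of the widening approach in Rust, where it is portable because integer widths are fixed by the language rather than by the platform:

  fn average(a: i32, b: i32) -> i32 {
    // i64 is guaranteed to be 64 bits everywhere, so the sum of two
    // i32 values can never overflow it; the result always fits in i32.
    ((i64::from(a) + i64::from(b)) / 2) as i32
  }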

285. j-krieger ◴[] No.43397883{6}[source]
> So Unsafe Rust from a UB perspective is no different than C/C++. If preconditions are violated, UB can occur

Only if you actively disable the panics that are triggered when unsafe preconditions are violated. In most code, the program will crash instead. Enabling panics on precondition violations by default in production code was done last year, IIRC.

> Its unclear how the compiler could check anything about preconditions

It can't. This is done at runtime, by default and without any manual programmer intervention.

You can see an example of this in the `ptr`module, here: https://doc.rust-lang.org/beta/src/core/ptr/mod.rs.html#1071

Some are only enabled for `debug_assert` (which is enabled by default), see `ptr::read`, here: https://doc.rust-lang.org/beta/src/core/ptr/mod.rs.html#1370

replies(1): >>43402637 #
286. burntsushi ◴[] No.43399202{9}[source]
> To go back all the way to where we started, the comment i was originally replying to said "No, C lacks encapsulation of unsafe code. This is very important. Encapsulation is the only way to scale local reasoning into global correctness."

That's fair. I was focusing more on the factual aspect of "Rust enables encapsulating `unsafe`." But you're right, this statement is making a bigger claim than that, and it crosses over into something that is an (in theory) testable opinion.

I do agree with it though. But I recognize that it is a different claim than the one I was putting forward as factual.

I think for this, I would say that my experience with Rust has demonstrated that encapsulation is working at some non-trivial scale. The extent to which it will continue to scale depends, in part, on whether people writing Rust prioritize soundness. In my bubble, this prioritization is extremely high. But will what is arguably a cultural norm extend out to all Rust programmers everywhere?

I legitimately don't know. This is why I was one of the first (but not the first) people to make a stink about improper `unsafe` usage inside the Actix project some years ago. It was because I perceived the project as specifically flouting the cultural norm and rejecting soundness as a goal to strive for. I do indeed see this as an essential piece of what Rust brings to the table, and for it to succeed in its goals, we have to somehow figure out how to maintain the cultural norm that safe APIs cannot be used in a way that leads to UB.

I think where you and I differ is both in what we've seen (it sounds like you've seen evidence of this cultural norm eroding) and what we consider encapsulation busting. I'm not at all worried about bugs in `unsafe` code. Those are going to happen, and yes, they will lead to safe Rust having UB. But those are "just" bugs. The vastly more important thing to me is intent and where blame is assigned when UB happens. If blame starts shifting to the safe code, then that will indicate the erosion of that cultural norm.

As for tooling, I think it's vital to making sure safe encapsulations are correct, but I don't see it as having a significant impact on the norm.

Then again, these are the days in which even some of the strongest cultural norms we've had (in the United States anyway) have been eroding. So maybe building a system on top of one is folly.

replies(1): >>43414836 #
287. rc00 ◴[] No.43401875{11}[source]
Cross-compilation with Cgo can be resolved using something like Zig as the compilation toolchain:

https://zig.news/kristoff/building-sqlite-with-cgo-for-every...

288. bangaladore ◴[] No.43402637{7}[source]
These seem to be beta features. But in any case it seems like it's just doing some number of asserts to validate some preconditions.

However, even at runtime it can't do anything to say whether (excuse the C pseudocode) *(uint32_t*)0x1C00 = 0xFE is a valid memory operation. On some systems, in some cases, it might be.
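
The equivalent operation in unsafe Rust has exactly the same property, for illustration:

  // Whether this write is valid depends entirely on the target hardware;
  // no language-level check, compile-time or runtime, can tell from the
  // address alone.
  unsafe { core::ptr::write_volatile(0x1C00 as *mut u32, 0xFE) };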

replies(1): >>43410104 #
289. johnisgood ◴[] No.43408972{4}[source]
Rust is not the only one, there is Ada as well. Ada without SPARK adds bounds checks (which can be disabled through a compiler option), but with SPARK they do not have to be done at runtime - among many other things (contract-based programming without SPARK, formal verification with SPARK where you need it, and so forth), all of which is a breeze.

https://docs.adacore.com/spark2014-docs/html/ug/en/usage_sce...

Look at the table after this paragraph:

> SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically. As summarized in the following table, Ada provides strict syntax and strong typing at compile time plus dynamic checking of run-time errors and program contracts. SPARK allows such checking to be performed statically. In addition, it enforces the use of a safer language subset and detects data flow errors statically.

This is the documentation (namely SPARK User's Guide).

As for what SPARK is: https://learn.adacore.com/courses/intro-to-spark/chapters/01..., so you will be able to see (if you read further) that Ada alone may suffice for the majority of cases, as for many things you do not even need SPARK to begin with.

Many courses for both Ada and SPARK are available here: https://learn.adacore.com/index.html

There are very good reasons for why Ada is used in critical systems, especially, but not limited to avionics and railway systems, see more at https://www.adacore.com/industries.

290. j-krieger ◴[] No.43410104{8}[source]
> These seem to be beta features

What? Where did you get that impression?

> But in any case it seems like its just doing some number of asserts to validate some preconditions

Yeah, like C code normally would, just in the STD in this case.

replies(1): >>43465052 #
291. DannyBee ◴[] No.43414836{10}[source]
"I do agree with it though. But I recognize that it is a different claim than the one I was putting forward as factual."

Maybe the core is that i don't understand why you agree with it :)

Maybe your definition of global correctness is different?

Maybe you are thinking of properties that are different than i am thinking of?

To me, for most (IMHO useful) definitions of global correctness, for most properties, the claim is provably false.

For me, local and global correctness that is useful at scale is not really "user-asserted correctness modulo implementation bugs".

Let's take a property like memory safety and talk about it locally and globally.

Let's just remove some nuance and say lots of these forms of encapsulation can be thought of as assertions of correctness wrt memory safety (for this example, obviously, there are more things it asserts, and it's not always memory safe in various types of encapsulation) - i assert that you don't have to worry about this - i did, and i'm sure it's right :)

This assertion, once wrong in a local routine, makes the global claim that "this program is memory safe" incorrect. Your local correctness did not scale to global correctness here, because your wrong local assertion led to a wrong global answer.

Tooling would not have let this happen.

Does it matter? maybe, maybe not! That's the province of creative security researchers and other folks.

My office mate at IBM was once tasked (eons ago) with computing the probability that a random memory bit flip would actually cause a program to misbehave.

Obviously, you can go too far, and end arguing about whether the cosmic rays affecting your program really violate your proof of correctness :)

But for a property like this, i don't want to rely on norms at scale. Because those norms generate mostly assertions of correctness. Once i've got tons and tons of assertions, and nobody has actually proved anything about them, that's a house of cards. Even if people are diligent and right 99% of the time, if you have 100000 assertions, that's, uh, 1000 of them that are wrong. And as discussed, it only takes one to break global correctness.

If you want all 100k to be correct with 90% probability, you'd need people to be 99.9999% correct. That seems unlikely :)

I don't mean that i'm not willing to accept the norm is better - i am. I certainly would agree the average rust program is more bug free and more safe than C ones. But i've seen too much at scale to not want some mechanism of verifying that norm, or at least a large part of it.

As an aside, there are also, to me, properties that are useful modulo implementation bugs. But for me, these mostly fall into proving algorithmic correctness.

IE it's useful to prove that a lock-free algorithm always makes progress, assuming someone did not screw up the implementation. It's separately useful to be able to prove a given implementation is not screwed up, but often much harder.

As for norms - I have zero disagreement that rust has better norms overall, but yes, i've seen erosion. I would recommend, for example, trying to do some embedded rust programming if you want to see an area where no rust norms seem to exist under the covers.

Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not like "oh if you think about this very hard it's broken".

It often feels like they just wanted to make the errors go away, so they put it in an unsafe block, and then didn't want to have to mark everything as unsafe to encapsulate it. I wish I was joking.

These libraries are often the de-facto way to achieve something (like bluetooth support). They are not getting better, they are getting copied and these pieces reused in chunks, causing the same elsewhere. And FWIW, none of these needed much if any unsafe at all (interacting with a bluetooth controller is not as unsafe as it seems. It is mostly just speaking to an embedded UART and issuing it some well-specified commands. So you probably need unsafe to deal with the send/receive, but not much else).

I can give you links and details privately, i don't really want to sort of publicly shame things for the sake of this argument :)

There are very well thought out embedded libraries, mind you, but uh, they are the minority.

This is not the only area, mind you, but it's an easy one to poke.

All norms fail over time, and you have to plan for it. You don't want to rely on them for things like "memory safety" :)

Good leadership, mentoring, etc makes them fail slower, but the thing that always causes failure is growth. Fast growth is even worse, and there are very few norms that scale and survive factors of 100x. This is especially true when they are cultural norms.

I don't believe Rust will be the first to succeed at maintaining the level of norm it had 5-10 years ago, around this sort of thing, in the face of massive growth and scale.

(Though i have no doubt it can if it neither grows nor scales).

[1] How much global correctness is affected by local correctness depends on the property - there are some where some wrong local answers often change nothing because they are basically minimum(all local answers). There are some where a single wrong local answer makes it totally wrong because they are basically maximum(all local answers). The closer they are to simple union/intersection or min/max of local answers, the easier it is to compute global correctness, but the righter your local answers have to be :)

replies(1): >>43419475 #
292. burntsushi ◴[] No.43419475{11}[source]
> Maybe the core is that i don't understand why you agree with it :)

Because of encapsulation. I don't need to look far to see the effects of encapsulation (and abstraction) on computing.

I read your whole comment, but I really want to tighten this discussion up. I think the biggest thing I'm personally missing from coming over to your view of things is examples. In particular:

> Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not like "oh if you think about this very hard it's broken".

Can you show me? If it's really "almost all," then you should even be able to point to a crate I've authored with a broken safe encapsulation. `regex-automata`, `jiff`, `bstr`, `byteorder`, `memchr` and `aho-corasick` all use `unsafe`. Can you find a point of unsoundness?

I don't want a library here or there. I am certain there are some libraries that are intentionally flouting Rust's norms here. So a couple of examples wouldn't be enough to convince me because I don't think a minority of people flouting Rust's norms is a big problem unless it can be shown that this minority is growing in size. What I want to see is evidence that this is both widespread and intentional. It's hard for me to believe that it is without me noticing.

If you want to do this privately, you can email: jamslam@gmail.com

293. saagarjha ◴[] No.43421298{10}[source]
That is what most people are talking about when they are discussing the JVM, yes
294. uecker ◴[] No.43445900{10}[source]
You can tell a C compiler to trap or wrap around on overflow, or you can use checked arithmetic to test explicitly for overflow.
295. bangaladore ◴[] No.43465052{9}[source]
> What? Where did you get that impression?

https://doc.rust-lang.org/beta/

> Yeah, like C code normally would, just in the STD in this case.

Yes, in that manual checks are still needed. My point is that unsafe code in rust is nowhere near safe and cannot be considered safe without extensive analysis, no matter the language features used.