

Zlib-rs is faster than C

(trifectatech.org)
341 points by dochtman | 101 comments
YZF ◴[] No.43381858[source]
I found out I already know Rust:

        unsafe {
            let x_tmp0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x10);
            xmm_crc0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x01);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, x_tmp0);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_crc0);
Kidding aside, I thought the purpose of Rust was for safety but the keyword unsafe is sprinkled liberally throughout this library. At what point does it really stop mattering if this is C or Rust?

Presumably with inline assembly both languages can emit what is effectively the same machine code. Is the Rust compiler a better optimizing compiler than C compilers?

replies(30): >>43381895 #>>43381907 #>>43381922 #>>43381925 #>>43381928 #>>43381931 #>>43381934 #>>43381952 #>>43381971 #>>43381985 #>>43382004 #>>43382028 #>>43382110 #>>43382166 #>>43382503 #>>43382805 #>>43382836 #>>43383033 #>>43383096 #>>43383480 #>>43384867 #>>43385039 #>>43385521 #>>43385577 #>>43386151 #>>43386256 #>>43386389 #>>43387043 #>>43388529 #>>43392530 #
Aurornis ◴[] No.43381931[source]
Using unsafe blocks in Rust is confusing when you first see it. The idea is that you have to opt out of compiler safety guarantees for specific sections of code, but they're clearly marked by the unsafe block.

In good practice it’s used judiciously in a codebase where it makes sense. Those sections receive extra attention and analysis by the developers.

Of course you can find sloppy codebases where people reach for unsafe as a way to get around Rust instead of writing code the Rust way, but that’s not the intent.

You can also find die-hard Rust users who think unsafe should never be used and make a point to avoid libraries that use it, but that’s excessive.

replies(10): >>43381986 #>>43382095 #>>43382102 #>>43382323 #>>43385098 #>>43385651 #>>43386071 #>>43386189 #>>43386569 #>>43392018 #
1. chongli ◴[] No.43382102[source]
Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees? As far as I'm aware, inside the unsafe block you can do whatever you want which means all of the nice memory-safety properties of the language go away.

It's like letting a wet dog (who'd just been swimming in a nearby swamp) run loose inside your hermetically sealed cleanroom.

replies(16): >>43382176 #>>43382305 #>>43382448 #>>43382481 #>>43382485 #>>43382606 #>>43382685 #>>43382739 #>>43383207 #>>43383637 #>>43383811 #>>43384238 #>>43384281 #>>43385190 #>>43385656 #>>43387402 #
2. timschmidt ◴[] No.43382176[source]
It seems like you've got it backwards. Even unsafe rust is still more strict than C. Here's what the book has to say (https://doc.rust-lang.org/book/ch20-01-unsafe-rust.html)

"You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:

    Dereference a raw pointer
    Call an unsafe function or method
    Access or modify a mutable static variable
    Implement an unsafe trait
    Access fields of a union
It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You’ll still get some degree of safety inside of an unsafe block.

In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you’ll ensure the code inside an unsafe block will access memory in a valid way.

People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."

replies(6): >>43382290 #>>43382353 #>>43382376 #>>43383159 #>>43383265 #>>43386165 #
3. pclmulqdq ◴[] No.43382290[source]
The way I have heard it described that I think is a bit more succinct is "unsafe admits undefined behavior as though it was safe."
4. CooCooCaCha ◴[] No.43382305[source]
I wouldn’t go that far. Bevy for example, uses unsafe internally but is VERY strict about it, and every use of unsafe requires a comment explaining why the code is safe.

In other words, unsafe works if you use it carefully and keep it contained.

replies(1): >>43382540 #
5. Someone ◴[] No.43382353[source]
But “Dereference a raw pointer”, in combination with the ability to create raw pointers pointing to arbitrary memory addresses (that, you can do even in safe rust) allows you to write arbitrary memory from unsafe rust.

So, in theory, unsafe rust opens the floodgates. In practice, though, you can use small fragments of unsafe code that programmers can fairly easily check to be safe.

Then, once you’ve convinced yourself that those fragments are safe, you can be assured that your whole program is safe (using ‘safe’ in the rust sense, of course)

So, there may be some small islands of unsafe code that require extra attention from the programmer, but that should be just a tiny fraction of all lines, and you should be able to verify those islands in isolation.

replies(1): >>43382404 #
6. uecker ◴[] No.43382376[source]
This description is still misleading. The preconditions for the correctness of an unsafe block can very much depend on the correctness of the code outside it, and it is easy to find Rust bugs where exactly this was the cause. This is very similar to C, where out-of-bounds accesses are often caused by some logic error elsewhere. Also, an unsafe block has to maintain all the invariants the safe Rust part needs to maintain correctness.
replies(4): >>43382514 #>>43382566 #>>43382585 #>>43383088 #
7. steveklabnik ◴[] No.43382404{3}[source]
> allows you

This is where the rubber hits the road. Rust does not allow you to do this, in the sense that this is possibly undefined behavior. That "possibly" is why the compiler allows you to write this code, because by saying "unsafe", you are promising that this specific arbitrary address is legal for you to write to. But that doesn't mean that it's always legal to do so.

replies(1): >>43382457 #
8. SkiFire13 ◴[] No.43382448[source]
You lose the nice guarantees inside the `unsafe` block, but the point is to write a sound and safe interface over it, that is an API that cannot lead to UB no matter how other safe code calls it. This is basically the encapsulation concept, but for safety.

To continue the analogy of the dog, you let the dog get wet (=you use unsafe), but you put a cleaning room (=the sound and safe API) before your sealed room (=the safe code world)

9. timschmidt ◴[] No.43382457{4}[source]
The compiler won't allow you to compile such code without the unsafe. The unsafe is *you* promising the compiler that *you* have checked to ensure that the address will always be legal. So that the compiler will allow you to compile the code.
replies(1): >>43382475 #
10. steveklabnik ◴[] No.43382475{5}[source]
Right, I'm saying "allow" has two different connotations, and only one of them, the one that you're talking about, applies.
replies(1): >>43382596 #
11. timeon ◴[] No.43382481[source]
> unsafe even a single time, you lose all of Rust's nice guarantees

Not sure why one use would result in losing all of them. One of Rust's advantages is the clear boundary between safe and unsafe.

replies(1): >>43387667 #
12. wongarsu ◴[] No.43382485[source]
If your unsafe code violates invariants it was supposed to uphold, that can wreck safety properties the compiler was trying to uphold elsewhere. If you can achieve something without unsafe you definitely should (safe, portable simd is available in rust nightly, but it isn't stable yet).

At the same time, unsafe doesn't just turn off all compiler checks, it just gives you tools to go around them, as well as tools that happen to go around them because of the way they work. Rust unsafe is this weird mix of being safer than pure C, but harder to grasp; with lots of nuanced invariants you have to uphold. If you want to ensure your code still has all the nice properties the compiler guarantees (which go way beyond memory safety) you would have to carefully examine every unsafe block. Which few people do, but you generally still end up with a better status quo than C/C++ where any code can in principle break properties other code was trying to uphold.

13. iknowstuff ◴[] No.43382514{3}[source]
No. Correctness of code outside unsafe depends on correctness inside those blocks, not the other way around
replies(1): >>43382600 #
14. tonyhart7 ◴[] No.43382540[source]
Right, the point is raising awareness; the assumption is that it's not a 100-and-0 problem.
15. dwattttt ◴[] No.43382566{3}[source]
It's true, but if you hold Rust to this analysis, it's only fair to hold other languages to it too: the scrutiny you're implying an unsafe Rust block needs would have to be applied to all C code, because all C code could depend on code anywhere else for its safety characteristics.

In practice (in both languages) you check what the actual unsafe code does (or "all" code in C's case), note code that depends on external actors for safety (it's not all C code, nor is it all unsafe Rust blocks), and check their callers (and callers callers, etc).

replies(1): >>43382684 #
16. lambda ◴[] No.43382585{3}[source]
So, it's true that unsafe code can depend on preconditions that need to be upheld by safe code.

But using ordinary module encapsulation and private fields, you can scope the code that needs to uphold those preconditions to a particular module.

So the "trusted computing base" for the unsafe code can still be scoped and limited, allowing you to reduce the amount of code you need to audit and be particularly careful about for upholding safety guarantees.

Basically, when writing unsafe code, the actual unsafe operations are scoped to only the unsafe blocks, and they have preconditions that you need to scope to a particular module boundary to ensure that there's a limited amount of code that needs to be audited to ensure it upholds all of the safety invariants.

Ralf Jung has written a number of good papers and blog posts on this topic.

replies(1): >>43382721 #
17. timschmidt ◴[] No.43382596{6}[source]
I gotcha. I misread and misunderstood. Yes, we agree.
18. sunshowers ◴[] No.43382606[source]
What language is the JVM written in?

All safe code in existence running on von Neumann architectures is built on a foundation of unsafe code. The goal of all memory-safe languages is to provide safe abstractions on top of an unsafe core.

replies(3): >>43385347 #>>43385422 #>>43386156 #
19. uecker ◴[] No.43382684{4}[source]
What is true is that there are more operations in C which can cause undefined behavior, and they are more densely distributed over the C code, making it harder to screen for undefined behavior. This is true and Rust certainly has an advantage, but it is not nearly as big an advantage as the "Rust is safe" (please do not look at all the unsafe blocks we need to make it also fast!) and "all C is unsafe" story wants you to believe.
replies(4): >>43382883 #>>43383190 #>>43383793 #>>43385047 #
20. janice1999 ◴[] No.43382685[source]
Claiming unsafe invalidates "all of the nice memory-safety properties" is like saying having windows in your house does away with all the structural integrity of your walls.

There's even unsafe usage in the standard library and it's used a lot in embedded libraries.

replies(1): >>43383773 #
21. uecker ◴[] No.43382721{4}[source]
And you think one cannot modularize C code and encapsulate critical buffer operations behind much safer APIs? One can; the problem is that a lot of legacy C code was not written this way. A lot of newly written C code is not written this way either, but the reason is often that people cut corners when they need to get things done with limited time and resources. You will see the same with Rust.
replies(4): >>43383131 #>>43383951 #>>43384869 #>>43386840 #
22. vlovich123 ◴[] No.43382739[source]
You only lose those guarantees if and only if the code within the unsafe block violates the rules of the Rust language.

Normally in safe code you can’t violate the language rules because the compiler enforces various rules. In unsafe mode, you can do several things the compiler would normally prevent you from doing (e.g. dereferencing a naked pointer). If you uphold all the preconditions of the language, safety is preserved.

What’s unfortunate is that the rules you are required to uphold can be more complex than you might anticipate if you’re trying to use unsafe to write C-like code. What’s fortunate is that you rarely need to do this in normal code and in SIMD which is what the snippet is representing there’s not much danger of violating the rules.

23. iknowstuff ◴[] No.43382849{5}[source]
tf are you talking about
replies(2): >>43382906 #>>43382911 #
24. dwattttt ◴[] No.43382883{5}[source]
The places where undefined behaviour can occur are also limited in scope; you insist that that part isn't true, because operations outside those unsafe blocks can impact their safety.

That's only true at the same level of scrutiny as "all C operations can cause undefined behaviour, regardless of what they are", which I find similarly shallow.

25. steveklabnik ◴[] No.43382906{6}[source]
They are (rudely) talking about https://news.ycombinator.com/item?id=43382369
26. dwattttt ◴[] No.43382911{6}[source]
In a more helpful framing: safe Rust code doesn't need to worry about its own correctness, it just is.

Unsafe code can be incorrect (or unsound), and needs to be careful about it. Part of being careful is that safe code can call the unsafe code in a way that triggers that unsoundness; in that way, safe code can cause undefined behaviour in unsafe code.

It's not always the case that this is possible; there are unsafe blocks that don't need to depend on safe code for their correctness.

27. gf000 ◴[] No.43383088{3}[source]
This is technically correct, but a bit pedantic.

Sure, you can technically just write your own vulnerability for your own program and inject it at an unsafe and see the whole world crumble... but the exact same is true for any form of FFI calls in any language. Is Java memory safe? Yeah, just because I can grab a random pointer and technically break anything I want won't change that.

The fact that a memory vulnerability error may either appear at no place at all OR at the couple hundred lines of code thorough the whole project is a night and day difference.

28. gf000 ◴[] No.43383131{5}[source]
Even innocent looking C code can be chock-full of UBs that can invalidate your "local reasoning" capabilities. So, not even close.
replies(1): >>43383379 #
29. onnimonni ◴[] No.43383159[source]
Would someone with more experience be able to explain to me why can't these operations be "safe"? What is blocking rust from producing the same machine code in a "safe" way?
replies(4): >>43383264 #>>43383268 #>>43383285 #>>43383292 #
30. gf000 ◴[] No.43383190{5}[source]
Rust is plenty fast; in fact, there are countless examples of safe Rust trivially beating C in performance, thanks to guaranteed non-aliasing enabling better vectorization, among other things. It is also simply a more expressive language that allows better optimizations (e.g. small-string optimization vs. the absolutely laughable C strings that perform terribly; you can also get away with sharing more data in memory instead of making defensive copies everywhere, because it is safe to do so, etc.).

And there are not many things we have statistics on in CS, but two of the few we do know, based on actual, real-life projects at Google and Microsoft among others, are that memory vulnerabilities are absolutely everywhere in unsafe languages, and that Rust cleans up the absolute majority of them even when only the new parts are written in Rust.

A memory-safe low-level language is as novel as it gets. Rust is absolutely not just hype; it actually delivers, and you might want to get on with the times.

replies(1): >>43385295 #
31. pdimitar ◴[] No.43383207[source]
Where did you even get that weird extreme take from?

O_o

32. vlovich123 ◴[] No.43383264{3}[source]
Those specific functions are compiler builtin vector intrinsics. The main reason is that they can easily read past ends of arrays and have type safety and aliasing issues.

By the way, the Rust compiler does generate such code, because under the hood LLVM runs an autovectorizer when you turn on optimizations. However, for the autovectorizer to do a good job you have to write code in a very special way, and you have no way of controlling whether or not it kicked in, or whether it did a good job once it did.

There’s work on creating safe abstractions (that also transparently scale to the appropriate vector instruction), but progress on that has felt slow to me personally and it’s not available outside nightly currently.

replies(1): >>43385330 #
33. rybosome ◴[] No.43383265[source]
I believe the post you are replying to was referring to the fact that you could take actions in that unsafe block that would compromise the guarantees of Rust; e.g. you could do something silly, leave the unsafe block, then hit an "impossible" condition later in the program.

A simple example might be modifying a const value deep down in some class, where it only becomes apparent later in the program’s execution. Hence their analogy of the wet dog in a clean room - whatever beliefs you have about the structure of memory in your entire program, and guaranteed by the compiler, could have been undone by a rogue unsafe.

replies(1): >>43396097 #
34. ◴[] No.43383268{3}[source]
35. NobodyNada ◴[] No.43383285{3}[source]
Rust's raw pointers are more-or-less equivalent to C pointers, with many of the same types of potential problems like dangling pointers or out-of-bounds access. Rust's references are the "safe" version of doing pointer operations; raw pointers exist so that you can express patterns that the borrow checker can't prove are sound.

Rust encourages using unsafe to "teach" the language new design patterns and data structures; and uses this heavily in its standard library. For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.

Think of unsafe not as "this code is unsafe", but as "I've proven this code to be safe, and the borrow checker can rely on it to prove the safety of the rest of my program."

replies(1): >>43385326 #
36. adgjlsfhk1 ◴[] No.43383292{3}[source]
Often the unsafe code is at the edges of the type system. E.g., sometimes the proof of safety is that someone read the source code of the C library that you are calling out to. It's not useful to think of machine code as safe or unsafe; safety often refers to whether the types of your data match the lifetime dataflow.
37. wavemode ◴[] No.43383379{6}[source]
Care to share an example?
replies(3): >>43383437 #>>43383963 #>>43385097 #
38. capitainenemo ◴[] No.43383437{7}[source]
Sorting floats with NaN? Almost anything involving threading and mutation, where people either don't realise how important locks are, or don't realise their code has suddenly been threaded?
39. xboxnolifes ◴[] No.43383637[source]
If you have 1 unsafe block, and you have a memory related crash/issue, where in your Rust code do you think the root cause is located?

This isn't a wet dog in a cleanroom. This is cleanroom complex that has a very small outhouse that is labeled as dangerous.

40. benjiro ◴[] No.43383773[source]
Where are you more likely get a burglar enter your home? Windows ... Where are you more likely to develop cracks in your walls? Windows ... Where are you more likely to develop leaks? Windows (especially roof windows!)...

Sorry but horrible comparison ;)

If you need to rely on unsafe in a memory-safe language for performance reasons, then there is an issue with the language's compiler at that point that needs to be fixed. Simple as that.

The whole memory-safety thing is the bread and butter of the language; the moment you start to bypass it for faster memory operations, you can start doing the same in any other language. I mean, you're literally bypassing the main selling point of the language. ¯\_(ツ)_/¯

replies(2): >>43383838 #>>43384027 #
41. pdimitar ◴[] No.43383793{5}[source]
You sound pretty biased, gotta tell you. That snark is not helping any argument you think you might be doing -- and you are not doing any; you are kind of just making fun of Rust, which is pretty boring and uninformative for any reader.

From my past experience with Rust, the team never had to think about data races once, or about mutable volatile globals. And we all suffered from those decades ago with C, and sometimes C++ as well.

You like those and don't want to migrate? More power to ya! But badmouthing Rust with what seem fairly uninformed comments is just low. Inform yourself first.

42. LoganDark ◴[] No.43383811[source]
> Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees?

No, not even close. You only lose Rust's safety guarantees when your unsafe code causes Undefined Behavior. Unsafe code that can be made to cause UB from Safe Rust is typically called unsound, and unsafe code that cannot be made to cause UB from Safe Rust is called sound. As long as your unsafe code is sound, then it does not break any of Rust's guarantees.

For example, unsafe code can still use slices or references provided by Safe Rust, because those are always guaranteed to be valid, even in an unsafe block. However, if from inside that unsafe block you then go on to manufacture an invalid slice or reference using unsafe functions, that is UB and you lose Rust's safety guarantees because of the UB.

43. pdimitar ◴[] No.43383838{3}[source]
> If you need to rely on unsafe in a memory-safe language for performance reasons, then there is a issue with the language compiler at that point, that needs to be fixed. Simple as that.

It actually means "Rust needs to interface with many other systems that are not as stringent as it". Your interpretation has nothing to do with what's actually going on and I am surprised you misinterpreted the situation as hugely as you did.

...And even if everything was written in Rust, `unsafe` would still be needed, because the lower you go [toward the kernel], the more non-determinism you encounter in places.

This "all or nothing" attitude is boring and tiring. We all wish things were super simple, black and white, and all-or-nothing. They are not.

44. nicoburns ◴[] No.43383951{5}[source]
You're a lot more limited in the kinds of APIs you can safely encapsulate in C. For example, you can't safely encapsulate an interface that shares memory between the library and the caller in C. So you're forced into either:

- Exposing an unsafe API and relying on the caller to manually uphold invariants

- Doing things like defensive copying at a performance cost

In many cases Rust gives you the best of both worlds: sharing memory liberally while still having the compiler enforce correctness.

replies(1): >>43392262 #
45. masfuerte ◴[] No.43383963{7}[source]

   int average(int x, int y) {
       return (x+y)/2;
   }
replies(3): >>43385221 #>>43392246 #>>43445900 #
46. unrealhoang ◴[] No.43384027{3}[source]
So static typing is stupid because at the end of the line your program must interface with stream of untyped bits (i/o)?

Once you can internalize that you could unlock the power of encapsulation.

47. EnnEmmEss ◴[] No.43384238[source]
Jason Ordendorff's talk [1] was probably the first time I truly grokked the concept of unsafe in Rust. The core idea behind unsafe in Rust is not to provide an escape from the guarantees provided by rust. It's to isolate the places where you have no choice but to break the guarantees and rigorously code/test the boundaries there so that anything wrapping the unsafe code can still provide the guarantees.

[1]: https://www.youtube.com/watch?v=rTo2u13lVcQ

48. andyferris ◴[] No.43384281[source]
Rust isn't the only memory-safe language.

As soon as you start playing with FFI and raw pointers in Python, NodeJS, Julia, R, C#, etc., you can easily lose the nice memory-safety properties of those languages: undefined behavior, segfaults, and so on. I'd say Rust is a lot nicer for checking unsafe correctness than other memory-safe languages, and it also makes it easier to dip down to systems-level programming, yet it seems to get a lot of hate for these features.

replies(1): >>43386111 #
49. lambda ◴[] No.43384869{5}[source]
There is no distinction between safe and unsafe code in C, so it's not possible to make that same distinction that you can in Rust.

And even if you try to provide some kind of safer abstraction, you're limited by the much more primitive type system, that can't distinguish between owned types, unique borrows, and shared borrows, nor can it distinguish thread safety properties.

So you're left to convention and documentation for that kind of information, but nothing checking that you're getting it right, making it easy to make mistakes. And even if you get it right at first, a refactor could change your invariants, and without a type system enforcing them, you never know until someone comes along with a fuzzer and figures out that they can pwn you

replies(1): >>43392234 #
50. lambda ◴[] No.43385047{5}[source]
What Rust provides is a way to build safe abstractions over unsafe code.

Rust's type system (including ownership and borrowing, Sync/Send, etc.), along with its privacy features (allowing types to have private fields that can only be accessed by code in the module that defined them), allows you to create fully safe interfaces around code that uses unsafe; there is provably no combination of uses of the interface which leads to undefined behavior.

Now, yeah, it's possible to also use unsafe in Rust just for applying a local optimisation. And that has fewer benefits than a fully encapsulated safe interface, though is still easier to audit for potential UB than C.

So you're right that it's on a continuum, but the distinction between safe and unsafe code means you can more easily find the specific places where UB could occur, and the encapsulation and type system makes it possible to create safe abstractions over unsafe code.

51. pests ◴[] No.43385097{7}[source]
https://www.ioccc.org/years.html
52. rat87 ◴[] No.43385190[source]
My understanding is that the user who writes an unsafe block in a safe function is responsible for making sure that it doesn't do anything to mess up safety, and that the function isn't lying about exposing a safe interface. I think at one point before Rust 1.0 there was even a suggestion to rename it trustme. Of course users can easily mess up, but the point is to minimize the use of unsafe so it's easier to check, and to create interfaces that can be used safely.
53. throwaway2037 ◴[] No.43385221{8}[source]
I assume you are hinting at 'int' is signed here? And, that signed overflow is UB in C? Real question: Ignoring what the ISO C language spec says, are there any modern hardware platforms (say: ARM64 and X86-64) that do not use two's complement to implement signed integers? I don't know any. As I understand, two's complement correctly supports overflow for signed arithmetic.

I might be old, but more than 10 years ago, hardly anyone talked about UB in C and C++ programming. In the last 10 years, it is all the rage, but seems to add very little to the conversation. For example, if you program C or C++ with the Win32 API, there are loads of weird UB-ish things that seem to work fine.

replies(3): >>43385280 #>>43385345 #>>43385566 #
54. steveklabnik ◴[] No.43385280{9}[source]
> Ignoring what the ISO C language spec says, are there any modern hardware platforms (say: ARM64 and X86-64) that do not use two's complement to implement signed integers?

This is not how compilers work. Optimization happens based on language semantics, not on what platforms do.

55. throwaway2037 ◴[] No.43385295{6}[source]

    > absolutely laughable c-strings that perform terribly
Not much being said here in 2025. Any good project will quickly switch to a tiny structure that holds a char* and a length. There are plenty of open source libs to help you.
replies(1): >>43386634 #
56. throwaway2037 ◴[] No.43385326{4}[source]
Why does Vec need to have any unsafe code? If you respond "speed"... then I will scratch my chin.

    > For example, the Vec type is a wrapper around a raw pointer, length, and capacity; and exposes a safe interface allowing you to create, manipulate, and access vectors with no risk of pointer math going wrong -- assuming the people who implemented the unsafe code inside of Vec didn't make a mistake, the external, safe interface is guaranteed to be sound no matter what external code does.
I'm sure you already know this, but you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.
replies(1): >>43385620 #
57. throwaway2037 ◴[] No.43385330{4}[source]

    > However, for the autovectorizer to do a good job you have to write code in a very special way
Can you give an example of this "very special way"?
replies(1): >>43386642 #
58. jandrewrogers ◴[] No.43385345{9}[source]
At least in recent C++ standards, integers are defined as two’s complement. As a practical matter what hardware like that may still exist doesn’t have a modern C++ compiler, rendering it a moot point.

UB in C is often found where different real hardware architectures had incompatible behavior. Rather than biasing the language for or against different architectures they left it to the compiler to figure out how to optimize for the cases where instruction behavior diverge. This is still true on current architectures e.g. shift overflow behavior which is why shift overflow is UB.

59. throwaway2037 ◴[] No.43385347[source]

    > What language is the JVM written in?
I am pretty sure it is C++.

I like your second paragraph. It is well written.

replies(1): >>43386157 #
60. rat87 ◴[] No.43385422[source]
I don't think what something was written in should count. Barring bugs, it should still be memory safe. But I believe the JVM has FFI, and as soon as you use FFI you risk messing up that memory safety.
replies(1): >>43386030 #
61. oneshtein ◴[] No.43385566{9}[source]
AI rewrote to avoid undefined behavior:

  #include <limits.h>

  int average(int x, int y) {
    long long sum = (long long)x + y;  /* long long: plain long may be only 32 bits (e.g. LLP64) */
    if(sum > INT_MAX || sum < INT_MIN)
        return -1; // or any value that indicates an error/overflow

    return (int)(sum / 2);
  }
replies(5): >>43386128 #>>43386231 #>>43386269 #>>43386613 #>>43396071 #
62. NobodyNada ◴[] No.43385620{5}[source]
Rust doesn't have compiler-magic support for anything like a vector. The language has syntax for fixed-sized arrays on the stack, and it supports references to variable-length slices; but it has no magic for constructing variable-length slices (e.g. C++'s `new[]` operator). In fact, the compiler doesn't really "know" about the heap at all.

Instead, all that functionality is written as Rust code in the standard library, such as Vec. This is what I mean by using unsafe code to "teach" the borrow checker: the language itself doesn't have any notion of growable arrays, so you use unsafe to define its semantics and interface, and now the borrow checker understands growable arrays. The alternative would be to make growable arrays some kind of compiler magic, but that's both harder to implement correctly and not generalizable.

> you can do exactly the same in C by using an opaque pointer to protect the data structure. Then you write a bunch of functions that operate on the opaque pointer. You can use assert() to protect against unreasonable inputs.

That's true and that's a great design pattern in C as well. But there are some crucial differences:

- Rust has no undefined behavior outside of unsafe blocks. This means you only need to audit unsafe blocks (and any invariants they assume) to be sure your program is UB-free. C does not have this property even if you code defensively at interface boundaries.

- In Rust, most of the invariants can be checked at compile time; the need for runtime asserts is less than in C.

- C provides no way to defend against dangling pointers without additional tooling & runtime overhead. For instance, if I write a dynamic vector and get a pointer to the element, there's no way to prevent me from using that pointer after I've freed the vector, or appended an element causing the container to get reallocated elsewhere.

Rust isn't some kind of silver bullet where you feed it C-like code and out comes memory safety. It's also not some kind of high-overhead garbage collected language where you have to write unsafe whenever you care about performance. Rather, Rust's philosophy is to allow you to define fundamental operations out of small encapsulated unsafe building blocks, and its magic is in being able to prove that the composition of these operations is safe, given the soundness of the individual components.

The stdlib provides enough of these building blocks for almost everything you need to do. Unsafe code in library/systems code is rare and used to teach the language of new patterns or data structures that can't be expressed solely in terms of the types exposed by the stdlib. Unsafe in application-level code is virtually never necessary.
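The C opaque-pointer pattern acknowledged above can be sketched like this (all names are hypothetical, for illustration only). Callers compile against only the typedef and the function declarations, so the invariant `len <= cap` can only be touched through the API; as the comment notes, though, nothing stops a caller from holding a stale element pointer after `intvec_free()`:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct IntVec IntVec;   /* opaque in the public header */

struct IntVec {                 /* definition private to the .c file */
    int *data;
    size_t len, cap;
};

IntVec *intvec_new(void) {
    return calloc(1, sizeof(IntVec));
}

/* Returns 1 on success, 0 on allocation failure. */
int intvec_push(IntVec *v, int x) {
    assert(v != NULL);
    if (v->len == v->cap) {
        size_t ncap = v->cap ? v->cap * 2 : 4;
        int *nd = realloc(v->data, ncap * sizeof *nd);
        if (!nd) return 0;
        v->data = nd;
        v->cap = ncap;
    }
    v->data[v->len++] = x;
    return 1;
}

int intvec_get(const IntVec *v, size_t i) {
    assert(v && i < v->len);    /* defensive check at the boundary */
    return v->data[i];
}

void intvec_free(IntVec *v) {
    if (v) { free(v->data); free(v); }
}
```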

63. j-krieger ◴[] No.43385656[source]
> Isn't it the case that once you use unsafe even a single time, you lose all of Rust's nice guarantees

Inside that block, both yes and no. You have to enforce those guarantees yourself. Code that violates them can still crash, or worse, silently misbehave.

64. sunshowers ◴[] No.43386030{3}[source]
Does it help to think of "safe Rust" as a language that's written in "unsafe Rust"? That's basically what it is.
65. johnisgood ◴[] No.43386111[source]
Ada is even better at checking for correctness. It needs to be talked about more. "Safer than C" has been Ada all along; people just did not know this before they jumped on the Rust bandwagon.
66. Jaxan ◴[] No.43386128{10}[source]
I’m not convinced that solution is much better. It can be improved to x/2 + y/2 (which still gives the wrong answer if both inputs are odd).
67. pjmlp ◴[] No.43386156[source]
Depends on which JVM you are talking about, some are 100% Java, some are a mix of Java and C, others are a mix of Java and C++, in all cases a bit of Assembly as well.
68. pjmlp ◴[] No.43386157{3}[source]
Depends on which JVM you are talking about, some are 100% Java, some are a mix of Java and C, others are a mix of Java and C++, in all cases a bit of Assembly as well.
replies(1): >>43386246 #
69. ◴[] No.43386165[source]
70. josefx ◴[] No.43386231{10}[source]
> long sum = (long)x + y;

There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

https://www.gnu.org/software/libc/manual/html_node/Range-of-...

> return -1; // or any value that indicates an error/overflow

-1 is a perfectly valid average for various inputs. You could return the larger type to encode an error value that is not a valid output or just output the error and average in two distinct variables.

AI and C seem like a match made in hell.

replies(1): >>43389904 #
71. throwaway2037 ◴[] No.43386246{4}[source]
You are right. I should have been more clear. I am talking about the bog standard one that most people use from Oracle/OpenJDK. A long time back it was called "HotSpot JVM". That one has source code available on GitHub. It is mostly C++ with a little bit of C and assembly.
replies(1): >>43386336 #
72. throwaway2037 ◴[] No.43386269{10}[source]
I don't know why this answer was downvoted. It adds valuable information to this discussion. Yes, I know that someone already pointed out that sizeof(int) is not guaranteed on all platforms to be smaller than sizeof(long). Meh. Just change the type to long long, and it works well.
replies(4): >>43386284 #>>43386391 #>>43389387 #>>43396082 #
73. gf000 ◴[] No.43386284{11}[source]
It literally returns a valid output value as an error.
replies(1): >>43389527 #
74. pjmlp ◴[] No.43386336{5}[source]
Define mostly, https://github.com/openjdk/jdk

- Java 74.1%

- C++ 14.0%

- C 7.9%

- Assembly 2.7%

And those values have been increasing for Java with each OpenJDK release.

replies(1): >>43386648 #
75. josefx ◴[] No.43386391{11}[source]
> Meh. Just change the type to long long, and it works well.

C libraries tend to support a lot of exotic platforms. zlib for example supports Unicos, where int, long int and long long int are all 64 bits large.

76. immibis ◴[] No.43386613{10}[source]
We're about to see a huge uptick in bugs worldwide, aren't we?
77. saagarjha ◴[] No.43386634{7}[source]
I take that you consider most major projects written in C to not be "good"?
replies(1): >>43389500 #
78. saagarjha ◴[] No.43386642{5}[source]
For example, many autovectorizers get upset if you put control flow in your loop.
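The kind of rewrite being described can be sketched as follows (a hedged example; modern compilers can often if-convert the branchy form too, but the branchless form maps directly to SIMD select/blend instructions and vectorizes far more reliably):

```c
#include <stddef.h>

/* Data-dependent branch inside the hot loop: some autovectorizers
   give up on this shape. */
int sum_positive_branchy(const int *xs, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++) {
        if (xs[i] > 0)
            s += xs[i];
    }
    return s;
}

/* Same computation as a pure select, with no control flow in the body. */
int sum_positive_branchless(const int *xs, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++) {
        s += xs[i] > 0 ? xs[i] : 0;
    }
    return s;
}
```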
79. saagarjha ◴[] No.43386648{6}[source]
JDK≠JVM
replies(1): >>43386750 #
80. pjmlp ◴[] No.43386750{7}[source]
If you are only talking about libjvm.so you would be right; then again, that alone won't be of much help to Java developers.
replies(1): >>43421298 #
81. GTP ◴[] No.43386840{5}[source]
Which is just a convoluted way of saying that it is possible to write bugs in any language. Still, it's undeniable that some languages make a better job at helping you avoid certain bugs than others.
82. andrewchambers ◴[] No.43387402[source]
It's more like letting a wet dog who you are watching closely quickly pass from your front door to the shower.
83. tmtvl ◴[] No.43387667[source]
Is there such a boundary? How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

The usual retort to these questions is 'well, the standard library uses unsafe code, so everything would need a disclaimer that it uses unsafe code, so that's a useless remark to make', but the basic issue still remains that the only clear boundary is whether a function 'contains' unsafe code, not whether a function 'calls' unsafe code.

If Rust did not have a mechanism to use external code then it would be fine because the only sources of unsafe code would be either the application itself or the standard library so you could just grep for 'unsafe' to find the boundaries.

replies(3): >>43389854 #>>43390196 #>>43396112 #
84. NobodyNada ◴[] No.43389387{11}[source]
Copypasting a comment into an LLM, and then copypasting its response back is not a useful contribution to a discussion, especially without even checking to be sure it got the answer right. If I wanted to know what an LLM had to say, I can go ask it myself; I'm on HN because I want to know what people have to say.
replies(1): >>43389546 #
85. sophacles ◴[] No.43389500{8}[source]
Most major software projects are not good, no matter what language.
86. oneshtein ◴[] No.43389527{12}[source]
An error value is valid output in both cases.
replies(1): >>43393545 #
87. ◴[] No.43389546{12}[source]
88. steveklabnik ◴[] No.43389854{3}[source]
> How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

The point is that you don't need to. The guarantees compose.

> The usual retort to these questions is 'well, the standard library uses unsafe code

It's not about the standard library, it's much more fundamental than that: hardware is not memory safe to access.

> If Rust did not have a mechanism to use external code then it would be fine

This is what GC'd languages with runtimes do. And even they almost always include FFI, which lets you call into arbitrary code via the C ABI, allowing for unsafe things. Rust is a language intended to be used at the bottom of the stack, and so has more first-class support, calling it "unsafe" instead of FFI.

89. cesarb ◴[] No.43389904{11}[source]
> There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

That used to be the case for 32-bit platforms, but most 64-bit platforms in which GNU libc runs use the LP64 model, which has 32-bit int and 64-bit long. That documentation seems to be a bit outdated.

(One notable 64-bit platform which uses 32-bit for both int and long is Microsoft Windows, but that's not one of the target platforms for GNU libc.)

90. cesarb ◴[] No.43390196{3}[source]
> Is there such a boundary? How do you know a function doesn't call unsafe code without looking at every function called in it, and every function those functions call, and so on?

Yes, there is a boundary, and usually it's either the function itself, or all methods of an object. For instance, a function I wrote recently goes somewhat like this:

  fn read_unaligned_u64_from_byte_slice(src: &[u8]) -> u64 {
    assert_eq!(src.len(), size_of::<u64>());
    unsafe { std::ptr::read_unaligned(src.as_ptr().cast::<u64>()) }
  }
The read_unaligned function (https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html) has two preconditions which have to be checked manually. When doing so, you'll notice that the "src" argument must have at least 8 bytes for these preconditions to be met; the "assert_eq!()" call before that unsafe block ensures that (it will safely panic unless the "src" slice has exactly 8 bytes). That is, my "read_unaligned_u64_from_byte_slice" function is safe, even though it calls unsafe code; the function is the boundary between safe and unsafe code. No callers of that function have to worry that it calls unsafe code in its implementation.
91. uecker ◴[] No.43392234{6}[source]
There is definitely a distinction between safe and unsafe code in C, it is just not a simple binary distinction. But this does not make it impossible to screen C for unsafe constructions and it also does not mean that detecting unsafe issues in Rust is always trivial.
92. uecker ◴[] No.43392246{8}[source]
But this is also easy to protect against if you use the tools available to C programmers. It is part of the Rust hype that we would be completely helpless here, but this is far from the truth.
93. uecker ◴[] No.43392262{6}[source]
Rust is better at this yes, but the practical advantage is not necessarily that huge.
94. MaxBarraclough ◴[] No.43393545{13}[source]
The code is unarguably wrong.

average(INT_MAX, INT_MAX) should return INT_MAX, but it will get that wrong and return -1.

average(0,-2) should not return a special error-code value, but this code will do just that, making -1 an ambiguous output value.

Even its comment is wrong. We can see from the signature of the function that there can be no value that indicates an error, as every possible value of int may be a legitimate output value.

It's possible to implement this function in a portable and standard way though, along the lines of [0].

[0] https://stackoverflow.com/a/61711253/ (Disclosure: this is my code.)

replies(1): >>43396843 #
95. umanwizard ◴[] No.43396071{10}[source]
Please stop posting AI-generated content to HN. It’s clear the majority of users hate it, given that it gets swiftly downvoted every time it’s posted.
96. umanwizard ◴[] No.43396082{11}[source]
I always downvote all AI-generated content regardless of whether it’s right or wrong, because I would like to discourage people from posting it.
97. umanwizard ◴[] No.43396097{3}[source]
Rust doesn’t have classes, nor can const values be modified, even in unsafe code. (did you mean “immutable”?)
98. umanwizard ◴[] No.43396112{3}[source]
The point of rust isn’t to formally prove that there are no bugs. It’s just to make writing certain classes of bugs harder. That’s what people are missing when they point out that yes, it’s possible to circumvent safety mechanisms. It’s a strawman: bulletproof, guaranteed security simply isn’t a design goal of rust.
99. MaxBarraclough ◴[] No.43396843{14}[source]
Too late for me to edit: as josefx pointed out, it also fails to properly address the undefined behavior. The sums INT_MAX + INT_MAX and INT_MIN + INT_MIN may still overflow despite being done using the long type.

That won't occur on an 'LP64' platform, [0] but we should aim for proper portability and conformance to the C language standard.

[0] https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_m...
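One fully portable approach (a sketch in the spirit of the discussion above, not necessarily the linked answer) is to split each operand into a truncated half and a remainder, so that no intermediate expression can overflow, and then reconstruct `floor((x + y) / 2)`. Note the rounding convention is stated explicitly: this version rounds toward negative infinity.

```c
#include <limits.h>

/* Overflow-free floor midpoint of two ints, valid for all inputs.
   x + y == 2*q + r, so floor((x + y) / 2) == q + floor(r / 2). */
int average_floor(int x, int y) {
    int q = x / 2 + y / 2;   /* truncated halves: cannot overflow */
    int r = x % 2 + y % 2;   /* each term in {-1, 0, 1}, so r is in {-2..2} */
    int fix = (r == 2) ? 1 : (r < 0 ? -1 : 0);   /* floor(r / 2) */
    return q + fix;
}
```

The fixup works because C's `/` and `%` truncate toward zero, so the halves drop at most one "half unit" each, and `r` recovers exactly what was dropped.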

100. saagarjha ◴[] No.43421298{8}[source]
That is what most people are talking about when they are discussing the JVM, yes
101. uecker ◴[] No.43445900{8}[source]
You can tell a C compiler to trap or wrap around on overflow, or you can use checked arithmetic to test explicitly for overflow.