
Zlib-rs is faster than C

(trifectatech.org)
341 points by dochtman | 67 comments
YZF ◴[] No.43381858[source]
I found out I already know Rust:

        unsafe {
            let x_tmp0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x10);
            xmm_crc0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x01);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, x_tmp0);
            xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_crc0);
Kidding aside, I thought the point of Rust was safety, but the keyword unsafe is sprinkled liberally throughout this library. At what point does it stop mattering whether this is C or Rust?

Presumably with inline assembly both languages can emit what is effectively the same machine code. Is the Rust compiler a better optimizing compiler than C compilers?

replies(30): >>43381895 #>>43381907 #>>43381922 #>>43381925 #>>43381928 #>>43381931 #>>43381934 #>>43381952 #>>43381971 #>>43381985 #>>43382004 #>>43382028 #>>43382110 #>>43382166 #>>43382503 #>>43382805 #>>43382836 #>>43383033 #>>43383096 #>>43383480 #>>43384867 #>>43385039 #>>43385521 #>>43385577 #>>43386151 #>>43386256 #>>43386389 #>>43387043 #>>43388529 #>>43392530 #
Aurornis ◴[] No.43381931[source]
Using unsafe blocks in Rust is confusing when you first see it. The idea is that you have to opt-out of compiler safety guarantees for specific sections of code, but they’re clearly marked by the unsafe block.

In good practice it’s used judiciously in a codebase where it makes sense. Those sections receive extra attention and analysis by the developers.

Of course you can find sloppy codebases where people reach for unsafe as a way to get around Rust instead of writing code the Rust way, but that’s not the intent.

You can also find die-hard Rust users who think unsafe should never be used and make a point to avoid libraries that use it, but that’s excessive.
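The judicious-use pattern described above looks something like this in practice (a minimal sketch with a hypothetical helper, not code from zlib-rs): the unsafe operation is kept in one small block, and the check immediately above it establishes the invariant the block relies on, so callers only ever see a safe API.

```rust
// A safe API over one small unsafe operation. Callers never see `unsafe`;
// the emptiness check is what justifies the unchecked access below it.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: the slice is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hello"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

Keeping the block this small is what makes the "extra attention and analysis" tractable during review.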

replies(10): >>43381986 #>>43382095 #>>43382102 #>>43382323 #>>43385098 #>>43385651 #>>43386071 #>>43386189 #>>43386569 #>>43392018 #
1. timschmidt ◴[] No.43381986[source]
Unsafe is a very distinct code smell. Like the hydrogen sulfide added to natural gas to allow folks to smell a gas leak.

If you smell it when you're not working on the gas lines, that's a signal.

replies(6): >>43382188 #>>43382239 #>>43384810 #>>43385163 #>>43385670 #>>43386705 #
2. cmrdporcupine ◴[] No.43382188[source]
Look, no. Just go read the unsafe block in question. It's just SIMD intrinsics. No memory access. No pointers. It's unsafe in name only.

No need to get all moral about it.

replies(3): >>43382234 #>>43382266 #>>43382480 #
3. kccqzy ◴[] No.43382234[source]
By your line of reasoning, SIMD intrinsics functions should not be marked as unsafe in the first place. Then why are they marked as unsafe?
replies(4): >>43382276 #>>43382451 #>>43384972 #>>43385883 #
4. mrob ◴[] No.43382239[source]
There's no standard recipe for natural gas odorant, but it's typically a mixture of various organosulfur compounds, not hydrogen sulfide. See:

https://en.wikipedia.org/wiki/Odorizer#Natural_gas_odorizers

replies(2): >>43382271 #>>43386386 #
5. timschmidt ◴[] No.43382266[source]
I don't read any moralizing in my previous comment. And it seems to mirror the relevant section in the book:

"People are fallible, and mistakes will happen, but by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."

I hope the SIMD intrinsics make it to stable soon so folks can ditch unnecessary unsafes if that's the only issue.

6. timschmidt ◴[] No.43382271[source]
TIL!
7. cmrdporcupine ◴[] No.43382276{3}[source]
There's no standardization of simd in Rust yet, they've been sitting in nightly unstable for years:

https://doc.rust-lang.org/std/intrinsics/simd/index.html

So I suspect it's a matter of two things:

1. You're calling out to what's essentially assembly, so buyer beware. This is basically FFI into C/asm.

2. There's no guarantee that what comes out of those 128-bit vectors follows any sanity or expectations, so... buyer beware. Same reason std::mem::transmute is marked unsafe.

It's really the weakest form of unsafe.

Still entirely within the bounds of a sane person to reason about.
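The transmute comparison above can be made concrete (a small sketch, not from the library under discussion): the compiler can't check that a reinterpreted bit pattern satisfies the target type's expectations, so the reinterpretation itself carries the unsafe marker even when the particular case is sound.

```rust
fn main() {
    let bits: u32 = 0x4048_f5c3;
    // SAFETY: every u32 bit pattern is a valid f32, so this particular
    // reinterpretation is sound (f32::from_bits does the same thing safely).
    let f: f32 = unsafe { std::mem::transmute::<u32, f32>(bits) };
    assert!((f - 3.14).abs() < 1e-4);
}
```

It's the same "weakest form of unsafe": nothing here touches memory it shouldn't, but the type system can't prove that on its own.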

replies(3): >>43382389 #>>43382440 #>>43385419 #
8. pclmulqdq ◴[] No.43382389{4}[source]
> they've been sitting in nightly unstable for years

So many very useful features of Rust and its core library spend years in "nightly" because the maintainers of those features don't have the discipline to see them through.

replies(3): >>43382419 #>>43383440 #>>43385204 #
9. cmrdporcupine ◴[] No.43382419{5}[source]
simd and allocator_api are the two that irritate me enough to consider a different language for future systems dev projects.

I don't have the personality or time to wade into committee type work, so I have no idea what it would take to get those two across the finish line, but the allocator one in particular makes me question Rust for lower level applications. I think it's just not going to happen.

If Zig had proper ADTs and something equivalent to the borrow checker, I'd be inclined to poke at it more.

replies(1): >>43385115 #
10. steveklabnik ◴[] No.43382440{4}[source]
> There's no standardization of simd in Rust yet

Of safe SIMD, but some stuff in core::arch is stabilized. Here's the first bit called in the example of the OP: https://doc.rust-lang.org/core/arch/x86/fn._mm_clmulepi64_si...

11. CryZe ◴[] No.43382451{3}[source]
They are in the process of marking them safe, which is enabled through the target_feature 1.1 RFC.

In fact, it has already been merged two weeks ago: https://github.com/rust-lang/stdarch/pull/1714

The change is already visible on nightly: https://doc.rust-lang.org/nightly/core/arch/x86/fn._mm_xor_s...

Compared to stable: https://doc.rust-lang.org/core/arch/x86/fn._mm_xor_si128.htm...

So this should be stable in 1.87, releasing on May 15 (Rust's 10-year anniversary since 1.0).

12. SkiFire13 ◴[] No.43382480[source]
SIMD intrinsics are unsafe because they are available only under some CPU features.
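One consequence of that: before calling such an intrinsic, the program has to establish the CPU feature is present, either statically via `#[target_feature]` or dynamically with runtime detection. A hedged sketch of the runtime-detection pattern, using a hypothetical XOR helper (SSE2 is baseline on x86_64, so the check always passes there, but it shows the shape):

```rust
#[cfg(target_arch = "x86_64")]
fn xor16(a: &[u8; 16], b: &[u8; 16]) -> [u8; 16] {
    if is_x86_feature_detected!("sse2") {
        // SAFETY: SSE2 availability was just verified at runtime, and the
        // unaligned load/store intrinsics have no alignment requirements.
        unsafe {
            use std::arch::x86_64::*;
            let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
            let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
            let mut out = [0u8; 16];
            _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, _mm_xor_si128(va, vb));
            out
        }
    } else {
        // Scalar fallback for CPUs without the feature.
        let mut out = [0u8; 16];
        for i in 0..16 {
            out[i] = a[i] ^ b[i];
        }
        out
    }
}

#[cfg(target_arch = "x86_64")]
fn main() {
    assert_eq!(xor16(&[0xAAu8; 16], &[0x55u8; 16]), [0xFFu8; 16]);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```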
13. NobodyNada ◴[] No.43383440{5}[source]
Before I started working with Rust, I spent a lot of time using Swift for systems-y/server-side code, outside of the Apple ecosystem. There is a lot I like about that language, but one of the biggest factors that drove me away was just how fast the Apple team was to add more and more compiler-magic features without considering whether they were really the best possible design. (One example: adding compiler-magic derived implementations of specific protocols instead of an extensible macro system like Rust has.) When these concerns were raised on the mailing lists, the response from leadership was "yes, something like that would be better in the long run, but we want to ship this now." Or even in one case, "yes, that tweak to the design would be better, but we already showed off the old design at the WWDC keynote and we don't want to break code we put in a keynote slide."

When I started working in Rust, I'd want some feature or function, look it up, and find it was unstable, sometimes for years. This was frustrating at first, but then I'd go read the GitHub issue thread and find that there was some design or implementation concern that needed to be overcome, and that people were actively working on it and unwilling to stabilize the feature until they were sure it was the best possible design. And the result of that is that features that do get stabilized are well thought out, generalize, and compose well with everything else in the language.

Yes, I really want things like portable SIMD, allocators, generators, or Iterator::intersperse. But programming languages are the one place I really do want perfect to be the enemy of good. I'd rather it take 5+ years to stabilize features than for us to end up with another Swift or C++.

replies(2): >>43383716 #>>43384703 #
14. grandiego ◴[] No.43383716{6}[source]
> the response from leadership was "yes, something like that would be better in the long run, but we want to ship this now."

Sounds like the Rust's async story.

replies(2): >>43383751 #>>43384178 #
15. steveklabnik ◴[] No.43383751{7}[source]
Async went through years of work before being stabilized. This isn't true.
16. NobodyNada ◴[] No.43384178{7}[source]
Rust's async model was shipped as an MVP, not in the sense of "this is a bad design and we just want to ship it"; but rather, "we know this is the first step of the eventual design we want, so we can commit to stabilizing these parts of it now while we work on the rest." There's ongoing work to bring together the rest of the pieces and ergonomics on top of that foundational model; async closures & trait methods were recently stabilized, and work towards things like pin ergonomics & simplifying cheap clones like Rc are underway.

Rust uses this strategy of minimal/incremental stabilization quite often (see also: const generics, impl Trait); the difference between this and what drove me away from Swift is that MVPs aren't shipped unless it's clear that the design choices being made now will still be the right choices when the rest of the feature is ready.

replies(1): >>43384296 #
17. cmrdporcupine ◴[] No.43384296{8}[source]
IMO shipping async without a standardized API for basic common async facilities (like thread spawning, file/network I/O) was a mistake and basically means that tokio has eaten the whole async side of the language.

Why define runtime independence as a goal, but then make it impossible to write runtime agnostic crates?

(Well, there's the "agnostic" crate at least now)

replies(1): >>43384821 #
18. pclmulqdq ◴[] No.43384703{6}[source]
My personal opinion is that if you want to contribute a language feature, shit or get off the pot. Leaving around a half-baked solution actually raises the required effort for someone who isn't you to add that feature (or an equivalent) because they now have to either (1) ramp up on the spaghetti you wrote or (2) overcome the barrier of explaining why your thing isn't good enough. Neither of those two things are fun (which is important since writing language features is volunteer work) and those things come in the place of doing what is actually fun, which is writing the relevant code.

The fact that the Rust maintainers allow people to put in half-baked features before they are fully designed is the biggest cultural failing of the language, IMO.

replies(1): >>43384769 #
19. dralley ◴[] No.43384769{7}[source]
>The fact that the Rust maintainers allow people to put in half-baked features before they are fully designed is the biggest cultural failing of the language, IMO.

In nightly?

Hard disagree. Letting people try things out in the real world is how you avoid half-baked features. Easy availability of nightly compilers with unstable features allows way more people to get involved in the pre-stabilization polishing phase of things and raise practical concerns instead of theoretical ones.

C++ takes the approach of writing and nitpicking whitepapers for years before any implementations are ready and it's hard to see how that has led to better outcomes relatively speaking.

replies(1): >>43384818 #
20. throwaway150 ◴[] No.43384810[source]
> Like the hydrogen sulfide added to natural gas to allow folks to smell a gas leak.

I am 100% sure that the smell they add to natural gas does not smell like rotten eggs.

replies(2): >>43385005 #>>43385686 #
21. pclmulqdq ◴[] No.43384818{8}[source]
Yeah, we're going to have to agree to disagree on the C++ flow (really the flow for any language that has a written standard) being better. That flow is usually:

1. Big library/compiler does a thing, and people really like it

2. Other compilers and libraries copy that thing, sometimes putting their own spin on it

3. All the kinks get worked out and they write a white paper

4. Eventually the thing becomes standard

That way, everything in the standard library is something that is fully-thought-out and feature-complete. It also gives much more room for competing implementations to be built and considered before someone stakes out a spot in the standard library for their thing.

replies(2): >>43384839 #>>43386079 #
22. dralley ◴[] No.43384821{9}[source]
>IMO shipping async without a standardized API for basic common async facilities (like thread spawning, file/network I/O) was a mistake and basically means that tokio has eaten the whole async side of the language.

I would argue that it's the opposite of a mistake. If you standardize everything before the ecosystem gets a chance to play with it, you risk making mistakes that you have to live with in perpetuity.

replies(1): >>43385278 #
23. dralley ◴[] No.43384839{9}[source]
>That way, everything in the standard library is something that is fully-thought-out and feature-complete

Are C++ features really that much better thought out? Modules were "standardized" half a decade ago, but the list of problems with actually using them in practice is still pretty damn long to the point where adoption is basically non-existent.

I'm not going to pretend to be nearly as knowledgeable about C++ as Rust, but it seems like most new C++ features I hear about are a bit janky or don't actually fit that well with the rest of the language. Something that tends to happen when designing things in an ivory tower without testing them in practice.

replies(1): >>43384882 #
24. pclmulqdq ◴[] No.43384882{10}[source]
They absolutely are. The reason many features are stupid and janky is because the language and its ecosystem has had almost 40 more years to collect cruft.

The fundamental problem with modules is that build systems for C++ have different abstractions and boundaries. C++ modules are like Rust async - something that just doesn't fit well with the language/system and got hammered in anyway.

The reason it seems like they come from nowhere is probably because you don't know where they come from. Most things go through boost, folly, absl, clang, or GCC (or are vendor-specific features) before going to std.

That being said, it's not just C++ that has this flow for adding features to the language. Almost every other major language that is not Rust has an authoritative specification.

replies(2): >>43384950 #>>43386095 #
25. dralley ◴[] No.43384950{11}[source]
What's a Rust feature that you think suffered from their process in a way that C++ would not have?
26. thrance ◴[] No.43384972{3}[source]
For now the caller has to ensure proper alignment for SIMD loads. But in the future a safe API will be made available, once the kinks are ironed out. You can already use it, in fact, by enabling a specific compiler feature [1].

[1] https://doc.rust-lang.org/std/simd/index.html

replies(1): >>43385024 #
27. beacon294 ◴[] No.43385005[source]
They add mercaptan, which has something like 1000x the rotten-egg smell of H2S.
replies(1): >>43387099 #
28. anonymoushn ◴[] No.43385024{4}[source]
There are no loads in the above unsafe block; in practice loadu is just as fast as load, and even if you manually use the aligned load or store, you get a crash. It's silly to say that crashes are unsafe.
replies(1): >>43385188 #
29. anonymoushn ◴[] No.43385115{6}[source]
generic simd abstractions are of quite limited use. I'm not sure what's objectionable about the thing Rust has shipped (in nightly) for this, which is more or less the same as the stuff Zig has shipped for this (in a pre-1.0 compiler version).
replies(1): >>43389051 #
30. RossBencina ◴[] No.43385163[source]
Hydrogen sulfide is highly corrosive (a big problem in sewers and associated infrastructure); I highly doubt you would choose to introduce it to gas pipelines on purpose.
31. jchw ◴[] No.43385188{5}[source]
Well, there's a category difference between a crash as in a panic and a crash as in a CPU exception. Usually, "safe" programming limits crashes to language-level error handling, which allows you to easily reason about the nature of crashes: if the type system is sound and your program doesn't use unsafe, the only way it should crash is by panic, and panics are recoverable and leave your program in a well-defined state. By the time you get to a signal handler, you're too late. Admittedly, there are some cases where this is less important than others... misaligned load/store wouldn't lead to a potential RCE, but if it can bring down a program it still is a potential DoS vector.

Of course, in practice, even in Rust, it isn't strictly true that programs without unsafe can't crash with fatal runtime errors. There's always stack overflows, which will crash you with a SIGABRT or equivalent operating system error.
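The panic/CPU-exception distinction above can be seen directly: a safe-Rust failure like out-of-bounds indexing surfaces as a panic, which `catch_unwind` can turn into an ordinary `Result` (assuming the default unwinding panic strategy, not `panic = "abort"`):

```rust
use std::panic;

fn main() {
    let result = panic::catch_unwind(|| {
        let v = vec![1, 2, 3];
        v[10] // out of bounds: a language-level panic, not a CPU fault
    });
    // The panic was caught; the program continues in a well-defined state.
    assert!(result.is_err());
}
```

No signal handler is involved at any point, which is exactly the property safe code is supposed to give you.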

replies(2): >>43387323 #>>43387638 #
32. RossBencina ◴[] No.43385204{5}[source]
> maintainers of those features don't have the discipline to see them through.

This take makes me sad. There are a lot of reasons why an open source contributor may not see something through. "Lack of discipline" is only one of them. Others that come to mind are: lack of time, lack of resources, lack of capability (i.e. good at writing code, but struggles to navigate the social complexities of shepherding a significant code change), clinically impaired ability to "stay the course" and "see things through" (e.g. ADHD), or maybe it was a collaborative effort and some of the parties dropped out for any of the aforementioned reasons.

I don't have a solution, but it does kinda suck that open source contribution processes are so dependent on instigators being the responsible party to seeing a change all the way through the pipeline.

33. no_wizard ◴[] No.43385278{10}[source]
Unless you clearly define how and when you’re going to handle removing a standard or updating it to reflect better use cases.

Language designers admittedly should worry about constant breakage, but it's fine to have some churn, and we shouldn't be so concerned about it that it freezes everything.

34. jandrewrogers ◴[] No.43385419{4}[source]
The example here is trivially safe but more general SIMD safety is going to be extremely difficult to analyze for safety, possibly intractable.

For example, it is perfectly legal to dereference a vector pointer that references illegal memory if you mask the illegal addresses. This is a useful trick and common in e.g. idiomatic AVX-512 code. The mask registers are almost always computed at runtime so it would be effectively impossible to determine if a potentially illegal dereference is actually illegal at compile-time.

I suspect we’ll be hand-rolling unsafe SIMD for a long time. The different ISAs are too different, inconsistent, and weird. A compiler that could make this clean and safe is like fusion power, it has always been 10 years away my entire career.

replies(1): >>43385562 #
35. vlovich123 ◴[] No.43385562{5}[source]
Presumably a bounds check on the mask could be done, or a safe variant exposed that does that trick under the hood. But yeah, I don't disagree that "safe SIMD" is unlikely to scratch the itch for various applications, but hopefully it'll scratch enough of them that the remaining unsafe is reduced.
replies(1): >>43385608 #
36. fooker ◴[] No.43385608{6}[source]
No, a bounds check defeats the purpose of SIMD in these cases.
replies(1): >>43390317 #
37. branko_d ◴[] No.43385670[source]
Hydrogen sulfide is highly toxic (it's comparable to carbon monoxide). I doubt anyone in their right mind would put it intentionally in a place where it could leak around humans.

But it can occur naturally in natural gas.

replies(2): >>43385731 #>>43386126 #
38. hyperbrainer ◴[] No.43385686[source]
You are lucky not to have smelled mercaptan (which is what is actually put in). Much, much worse than H2S.
replies(1): >>43387992 #
39. k1t ◴[] No.43385731[source]
I assume GP was referring to mercaptan, or similar. i.e. Something with a distinctive bad smell.

https://en.m.wikipedia.org/wiki/Methanethiol

40. exDM69 ◴[] No.43385883{3}[source]
They are marked as unsafe because there are hundreds and hundreds of intrinsics, some of which do memory access, some have side effects and others are arithmetic only. Someone would have to individually review them and explicitly mark the safe ones.

There was a bug open about it and the rationale was that no one with the expertise (some of these are quite arcane) was stepping up to do it. (edit: other comments in this thread suggest that this effort is now underway and first changes were committed a few weeks ago)

You can do safe SIMD using std::simd but it is nightly only at this point.

41. pjmlp ◴[] No.43386079{9}[source]
Unfortunately, C++ over the last set of revisions has gotten that sequence wrong; many ideas are now PDF-implemented, only showing up in any compiler years later.

Fully-thought-out and feature-complete is something that has hardly been happening since C++17.

42. pjmlp ◴[] No.43386095{11}[source]
Since C++17, hardly anything goes "through boost, folly, absl, clang, or GCC (or vendor-specific features) before going to std".
43. littlestymaar ◴[] No.43386126[source]
> Hydrogen sulfide is highly toxic (it's comparable to carbon monoxide)

It's a bad comparison, since CO doesn't smell, which is what makes it dangerous, while H2S is detected by our sense of smell at concentrations much lower than the toxic dose (in fact, its biggest danger comes from the fact that at dangerous concentrations it doesn't smell of anything, due to our receptors being saturated).

It's not what's being put in natural gas, but it wouldn't be that dangerous if it were.

44. rob74 ◴[] No.43386386[source]
TIL also - until today, I thought it was just "mercaptan". Turns out there are actually two variants of that:

> Ethanethiol (EM), commonly known as ethyl mercaptan is used in liquefied petroleum gas (LPG) and resembles odor of leeks, onions, durian, or cooked cabbage

> Methanethiol, commonly known as methyl mercaptan, is added to natural gas as an odorant, usually in mixtures containing methane. Its smell is reminiscent of rotten eggs or cabbage.

...but you can still call it "mercaptan" and be ~ correct in most cases.

45. gigatexal ◴[] No.43386705[source]
Someone mentioned to me that for something as simple as a linked list you have to use unsafe in Rust.

Update: it's how the std lib does it: https://doc.rust-lang.org/src/alloc/collections/linked_list....

replies(5): >>43386891 #>>43387304 #>>43390238 #>>43391048 #>>43392633 #
46. umanwizard ◴[] No.43386891[source]
No you don’t. You can use the standard linked list that is already included in the standard library.

Coming up with these niche examples of things you need unsafe for in order to discredit rust’s safety guarantees is just not interesting. What fraction of programmer time is spent writing custom linked lists? Surely way less than 1%. In most of the other 99%, Rust is very helpful.

replies(1): >>43388348 #
47. taejo ◴[] No.43387099{3}[source]
Mercaptan is a group of compounds, more than one of which are used as gas odorants. So in some places gas smells of rotten eggs, similar to H2S, while in others it doesn't smell like that at all, but has a quite distinct smell reminiscent of garlic and durian.
48. ohmygoodniche ◴[] No.43387304[source]
I love how the most common negative thing I hear about Rust is that a really uncommon data structure, which no one should write by hand and should almost always import, can be written using the unsafe language feature. Meanwhile Rust applications tend in most cases to be considerably faster, more correct, and more enjoyable to maintain than those in other languages. Must be a really awesome technology.
49. gpderetta ◴[] No.43387323{6}[source]
As you point out later, a SIGABRT or a SIGBUS would both be perfectly safe and really no different from a panic. With enough infra you could convert them to panics anyway (but probably not worth the effort).
replies(1): >>43388398 #
50. thrance ◴[] No.43387638{6}[source]
Also, AFAIK panics are not always recoverable in Rust. You can compile your project with `panic = "abort"`, in which case the program will quit immediately whenever a panic is encountered.
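For reference, that abort behavior is a Cargo profile setting; a minimal sketch of what enabling it looks like:

```toml
# Cargo.toml: make panics abort the process instead of unwinding.
# With this setting, catch_unwind can no longer recover from a panic.
[profile.release]
panic = "abort"
```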
replies(1): >>43388463 #
51. throwaway150 ◴[] No.43387992{3}[source]
I have. It's worse no doubt. But it's not the smell of rotten eggs. My comment was meant to be tongue-in-cheek to correct the mistake of saying "H2S" in the GP comment.
replies(1): >>43390029 #
52. vikramkr ◴[] No.43388348{3}[source]
I think the point is that it's funny that the standard library has to use unsafe to implement a data structure that's like the second data structure you learn in an intro to CS class.
replies(3): >>43388447 #>>43388583 #>>43389181 #
53. jchw ◴[] No.43388398{7}[source]
Well, that's the thing though: in terms of Rust and Go and other safe programming languages, CPU exceptions are not "safe" even though they are not inherently dangerous. The point is that the subset of the language that is safe can't generate them, period. They are not accounted for in safe code.

There are uses for this, especially since some code will run in environments where you can not simply handle it, but it's also just cleaner this way; you don't have to worry about the different behaviors between operating systems and possibly CPU architectures with regards to error recovery if you simply don't generate any.

Since there are these edge cases where it wouldn't be possible to handle faults easily (e.g. some kernel code) it needs to be considered unsafe in general.

replies(1): >>43393003 #
54. Sharlin ◴[] No.43388447{4}[source]
Yeah, but Rust just proves the point here that (doubly) linked lists

a) are surprisingly nontrivial to get right,

b) have almost no practical uses, and

c) are only taught because they're conceptually nice and demonstrate pointers and O(1) vs O(n) tradeoffs.

Note that safe Rust has no problems with singly-linked lists or in general any directed tree structure.
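To illustrate that last point (a minimal sketch, not the std implementation): a singly-linked stack works in entirely safe Rust because each node uniquely owns its successor.

```rust
// Plain Box ownership suffices: every node has exactly one owner,
// so no unsafe is needed anywhere.
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

pub struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> List<T> {
    pub fn new() -> Self {
        List { head: None }
    }

    pub fn push_front(&mut self, value: T) {
        self.head = Some(Box::new(Node { value, next: self.head.take() }));
    }

    pub fn pop_front(&mut self) -> Option<T> {
        self.head.take().map(|node| {
            self.head = node.next;
            node.value
        })
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(1);
    list.push_front(2);
    assert_eq!(list.pop_front(), Some(2));
    assert_eq!(list.pop_front(), Some(1));
    assert_eq!(list.pop_front(), None);
}
```

It's only the doubly linked variant, where two owners point at each node, that forces either unsafe or reference counting.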

55. jchw ◴[] No.43388463{7}[source]
Sure, but that is beside the point: if you compile code like that, you're intentionally making panics unrecoverable. The nature of panics from the language perspective is not any different; you're still in a well-defined state when it happens.

It's also possible to go a step further and practice "panic-free" Rust where you write code in such a way that it never links to the panic handler. Seems pretty hard to do, but seems like it might be worth it sometimes, especially if you're in an environment where you don't have anything sensible to do on a panic.

56. umanwizard ◴[] No.43388583{4}[source]
Why is it particularly funny?

C has to make a syscall to the kernel which ultimately results in a BIOS interrupt to implement printf, which you need for the hello world program on page 1 of K&R.

Does that mean that C has no abstraction advantage over directly coding interrupts with asm? Of course not.

replies(1): >>43389729 #
57. cmrdporcupine ◴[] No.43389051{7}[source]
The issue is that it's sitting in nightly for years. Many many many years.

I don't write software targetting nightly, for good reason.

58. tux3 ◴[] No.43389181{4}[source]
No, that's how the feature is supposed to work.

You design an abstraction which is unsafe inside and exposes a safe API to users. That is really how unsafe is meant to be used.

Of course the standard library uses unsafe. This is where you want unsafe to be, not in random user code. That's what it was made for.

59. cesarb ◴[] No.43389729{5}[source]
> C has to make a syscall to the kernel which ultimately results in a BIOS interrupt to implement printf,

That's not the case since the late 1990s. Other than during early boot, nobody calls into the BIOS to output text, and even then "BIOS interrupt" is not something normally used anymore (EFI uses direct function calls through a function table instead of going through software interrupts).

What really happens in the kernel nowadays is direct memory access and direct manipulation of I/O ports and memory mapped registers. That is, all modern operating systems directly manipulate the hardware for text and graphics output, instead of going through the BIOS.

replies(1): >>43389918 #
60. umanwizard ◴[] No.43389918{6}[source]
Thanks for the information (I mean that genuinely, not sarcastically — I do really find it interesting). But it doesn’t really impact my point.
61. hyperbrainer ◴[] No.43390029{4}[source]
If that is the case (and I have no reason to believe otherwise), I apologise. Should work on detecting tone better.
62. estebank ◴[] No.43390238[source]
Note that that is a doubly linked list, because it is a "soup of ownership" data structure. A singly linked list has clear ownership, so it can be modelled in safe Rust.

On modern architectures you shouldn't use either unless you have an extremely niche use case. They are not general-use data structures anymore in a world where cache locality is a thing.

63. vlovich123 ◴[] No.43390317{7}[source]
Not necessarily if you can hoist the bounds check outside of the loop somehow.
64. miki123211 ◴[] No.43391048[source]
This is far less of a problem than it would be in a C-like language, though.

You can implement that linked list just once, audit the unsafe parts extensively, provide a fully safe API to clients, and then just use that safe API in many different places. You don't need thousands of project-specific linked list reimplementations.

65. all2well ◴[] No.43392633[source]
Don't Arc and Weak work for doubly linked lists? Rust docs recommend Weak as a way to break pointer cycles: https://doc.rust-lang.org/std/sync/struct.Arc.html#breaking-...
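They do, at least in the single-threaded form (Rc/Weak rather than Arc): strong pointers own the forward direction and weak pointers point back, so no cycle of strong references keeps the nodes alive. A minimal two-node sketch:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>, // strong (owning) forward link
    prev: Weak<RefCell<Node>>,       // weak back link breaks the cycle
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: Weak::new() }));
    let second = Rc::new(RefCell::new(Node {
        value: 2,
        next: None,
        prev: Rc::downgrade(&first),
    }));
    first.borrow_mut().next = Some(Rc::clone(&second));

    // Walk forward via the strong link, backward via the weak one.
    assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
    assert_eq!(second.borrow().prev.upgrade().unwrap().borrow().value, 1);
}
```

The cost is RefCell's runtime borrow checking and the noise of `upgrade()` calls on every backward step, which is part of why std's LinkedList uses raw pointers and unsafe instead.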
66. comex ◴[] No.43393003{8}[source]
That’s largely true, but there are some exceptions (pun not intended).

In Rust, the CPU exception resulting from a stack overflow is considered safe. The compiler uses stack probing to ensure that as long as there is at least one page of unmapped memory below the stack (guard page), the program will reliably fault on it rather than continuing to access memory further below. In most environments it is possible to set up a guard page, including Linux kernel code if CONFIG_VMAP_STACK is enabled. But there are other environments where it’s not, such as WebAssembly and some microcontrollers. In those environments, the backend would have to add explicit checks to function prologs to ensure enough stack is available. I say “would have to”, not “does”: I’ve heard that on at least the microcontrollers, there are no such checks and Rust is just unsound at the moment. Not sure about WebAssembly.

Meanwhile, Go uses CPU exceptions to handle nil dereferences.

replies(1): >>43393106 #
67. jchw ◴[] No.43393106{9}[source]
Yeah, I glossed over the Rust stack overflow case. I don't know why: Literally two parent comments up I did bother to mention it.

That said, I actually entirely forgot Go catches nil derefs in a segfault handler. I guess it's not a big deal since Go isn't really suitable for free-standing environments where avoiding CPU exceptions is sometimes more useful, so there's no particular reason why the runtime can't rely on it.