Perhaps it is faster than already-existing implementations, sure, but not "faster than C", and it is odd to make such claims.
What do you mean by that?
There is plenty of hand-rolled assembly in low-level libraries, whether you look at OpenBLAS (17%), GMP (36%), BoringSSL (25%), or WolfSSL (14%) -- all of these numbers are based on GitHub's language breakdown (which is measured on a per-file basis, so it doesn't count inline asm or heavy use of intrinsics).
There are contexts where you want better performance guarantees than the compiler will give you. If you're dealing with cryptography, you probably want to guard against timing attacks via constant-time code. If you're dealing with math, maybe you really do want to eke out as much performance as possible, autovectorization just isn't doing what you want it to do, and your intrinsic-based code just isn't using all your registers as efficiently as you'd like.
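To make the timing-attack point concrete, here is a minimal sketch (illustrative, not audited -- real code should use a vetted crate like `subtle` or hand-checked asm, since the compiler is free to undo this) of a constant-time comparison: it XOR-accumulates differences instead of returning at the first mismatch, so the loop does the same work no matter where the inputs differ.

```rust
// Hypothetical helper for illustration: compare two byte slices without an
// early exit, so execution time doesn't leak the position of the first
// mismatching byte.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is usually public, so branching here is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulates any difference; no data-dependent branch
    }
    diff == 0
}
```

A naive `a == b` can return as soon as one byte differs, and that timing difference is exactly what an attacker measures.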
The answer is that it's more ergonomic and easier to reason about. So while you can TECHNICALLY have "algebraic data types" in C, i.e. "it's just a tagged union, so what's the big deal?", I prefer to use them in Rust rather than in C, for whatever unknown reason...
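For a sense of the ergonomics gap, a hedged sketch (example types are mine, not from the thread): the Rust `enum` below is what you'd hand-roll in C as an enum tag plus a `union` plus a `switch`, except here the compiler manages the tag and rejects a non-exhaustive `match`.

```rust
// In C this would be: enum ShapeTag + union { ... } + a switch you must
// keep in sync by hand. Here the tag and the payload travel together.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Forgetting a variant here is a compile error, not a silent
    // read of the wrong union member at runtime.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}
```

The point isn't that C can't express this; it's that C won't stop you from reading the union through the wrong member when the tag says otherwise.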
I also don't want to spend my brain cells thinking about pointer provenance and which `void*` aliases with which. I would rather spend them on something else, thank you very much.