
sesm
Is there a concise document that explains the major decisions behind Rust's language design for those who already know C++? Not a newbie tutorial, just straight to the point: why in-place mutability instead of other options, why encourage stack allocation, which problems of C++ it solves and at what cost, etc.
jandrewrogers
Rust has better defaults for types than C++, largely because the C++ defaults came from C. Rust is more ergonomic in this regard. If C++ were designed today, it would likely adopt many of these defaults.
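A minimal sketch of two such defaults (my examples, not an exhaustive list): Rust bindings are immutable unless marked otherwise, and assignment moves rather than copies, where C++ defaults to mutable variables and implicit copies.

    fn main() {
        let v = vec![1, 2, 3];
        let w = v;             // moves by default; a C++ std::vector would deep-copy here
        // println!("{v:?}");  // compile error: use after move, not UB
        let x = 5;
        // x += 1;             // compile error: `x` was not declared `mut`
        println!("{w:?} {x}");
    }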

However, for high-performance systems software specifically, objects often have intrinsically ambiguous ownership and lifetimes that are only resolvable at runtime. Rust has a pretty rigid view of such things. In these cases C++ is much more ergonomic because objects with these properties are essentially outside the Rust model.
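A minimal sketch of what I mean, in Rust terms: when ownership is genuinely shared and only resolvable at runtime, Rust pushes you toward reference counting plus runtime borrow checks, and that is the part that feels unergonomic.

    use std::cell::RefCell;
    use std::rc::Rc;

    // A graph node with no single owner: edges keep peers alive.
    struct Node {
        value: i32,
        edges: Vec<Rc<RefCell<Node>>>,
    }

    fn main() {
        let a = Rc::new(RefCell::new(Node { value: 1, edges: vec![] }));
        let b = Rc::new(RefCell::new(Node { value: 2, edges: vec![a.clone()] }));
        // The cycle is legal, but mutation is now checked at runtime,
        // and the cycle itself will leak without Weak references.
        a.borrow_mut().edges.push(b.clone());
        let total = a.borrow().value + b.borrow().value;
        println!("{total}");
    }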

In my own mental model, Rust is what Java maybe should have been. It makes too many compromises for low-level systems code, which leaves it with poor ergonomics for that use case.

Const-me
Interestingly, CPU-bound high-performance systems are also incompatible with Rust’s model. Ownership for them is unambiguous, but Rust has another issue: it doesn’t support multiple writable references to the same memory being used by multiple CPU cores in parallel.

A trivial example is multiplication of large square matrices. An implementation needs to leverage all available CPU cores, and the traditional way to do that, found in many BLAS libraries, is to compute different tiles of the output matrix on different CPU cores. A tile is not a contiguous slice of memory; it’s a rectangular segment of a dense 2D array. Writing different tiles of the same matrix in parallel is trivial in C++ and very hard in Rust.

winrid
Hard in safe Rust. You can just use unsafe in that one area and still benefit from safe Rust in most of your application.
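A minimal sketch of the tile case above (sizes and names are hypothetical): four scoped threads each fill one quadrant of a row-major matrix, and the unsafe part is confined to the disjoint writes.

    use std::thread;

    const N: usize = 512;   // matrix edge
    const T: usize = N / 2; // tile edge: a 2 x 2 grid of tiles

    // Raw pointers aren't Send, so this wrapper asserts thread-safety manually.
    #[derive(Clone, Copy)]
    struct SendPtr(*mut f64);
    unsafe impl Send for SendPtr {}

    fn fill_tile(base: SendPtr, tr: usize, tc: usize) {
        for r in 0..T {
            for c in 0..T {
                let idx = (tr * T + r) * N + (tc * T + c);
                // SAFETY: each (tr, tc) pair writes a disjoint set of
                // indices, so no element is ever written by two threads.
                unsafe { *base.0.add(idx) = idx as f64 };
            }
        }
    }

    fn main() {
        let mut m = vec![0.0f64; N * N]; // row-major N x N matrix
        let base = SendPtr(m.as_mut_ptr());
        thread::scope(|s| {
            for tr in 0..2 {
                for tc in 0..2 {
                    s.spawn(move || fill_tile(base, tr, tc));
                }
            }
        });
    }

Safe wrappers for this pattern exist (e.g. rayon), but the point stands: the raw rectangular split itself needs unsafe, or a library that hides it.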
Const-me
I don’t use C++ for most of my applications. I only use C++ to build DLLs which implement CPU-bound, performance-sensitive numeric stuff, and sometimes to consume C++ APIs and third-party libraries.

Most of my applications are written in C#.

C# provides memory safety guarantees very comparable to Rust’s, and some other safety guarantees are better (one example is the compiler option that turns integer overflows into runtime exceptions). It’s a higher-level language with a great, feature-rich standard library; even large projects compile in a few seconds; async IO is usable; the GUI frameworks are good quality… Replacing C# with Rust would not be a benefit.

dwattttt
It does sound like quite a similar model: unsafe Rust in self-contained regions, safe code in the majority of areas.

FWIW, in the case where you're not separating code via a dynamic library boundary, you give the compiler an opportunity to optimise across those unsafe usages, e.g. opportunities to inline the unsafe code into its callers.

Const-me
> quite a similar model

Yeah, and that model is rather old: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule In practice, complex software systems have been written in multiple languages for decades. The requirements of performance-critical low-level components and of high-level logic are too different, and they conflict.

> you give the compiler an opportunity to optimise across those unsafe usages

One workaround is better design of the DLL API. Instead of implementing performance-critical outer layers in C#, do so on the C++ side of the interop, possibly injecting C# dependencies via function pointers or an abstract interface.
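Roughly the shape I mean, as a hedged sketch: all names here are invented, and my actual code is C++, but I’ll sketch the native side in Rust since that’s the topic of this thread. The performance-critical loop stays on the native side of the boundary; the host only supplies a coarse-grained callback.

    // Callback the host (e.g. C#) supplies, say for progress/cancellation.
    type ProgressFn = extern "C" fn(done: usize, total: usize) -> bool;

    // Exported entry point: the hot loop runs entirely on this side.
    #[no_mangle]
    pub extern "C" fn process_buffer(data: *mut f32, len: usize, progress: ProgressFn) {
        // SAFETY: the host promises `data` points to `len` valid floats.
        let data = unsafe { std::slice::from_raw_parts_mut(data, len) };
        let mut done = 0;
        for chunk in data.chunks_mut(4096) {
            for x in chunk.iter_mut() {
                *x = x.sqrt(); // stand-in for the real numeric kernel
            }
            done += chunk.len();
            // Only one indirect call per 4096 elements crosses the boundary.
            if !progress(done, len) {
                return; // host requested cancellation
            }
        }
    }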

Another option is to re-implement these smaller functions in C#. The modern .NET runtime is not terribly slow; it even supports SIMD intrinsics. You are unlikely to match the performance of an optimised C++ release build with LTO, but you are also unlikely to fall significantly short.

dwattttt
> injecting C# dependencies via function pointers or an abstract interface

This is the opposite of what I was suggesting though; those function pointers or abstract interfaces inhibit the kind of optimisations I had in mind (e.g. inlining that lets bounds checks be removed as dead code, or inlining comparison functions into sort implementations, the classics).

EDIT: that said, it's definitely still possible to keep it from hurting performance; it just takes some care when designing the interface, which you don't have to think about when everything goes through the same compile/link step.
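A minimal Rust sketch of the comparator case (hypothetical code; within a single crate the compiler can still see through the function pointer, so read the second version as a stand-in for a pointer that arrives across a DLL boundary):

    use std::cmp::Ordering;

    fn by_abs(a: &i32, b: &i32) -> Ordering {
        a.abs().cmp(&b.abs())
    }

    // Generic path: the comparator body is visible to the optimiser and
    // can be inlined into the sort's inner loop.
    fn sort_generic(v: &mut [i32]) {
        v.sort_unstable_by(by_abs);
    }

    // What a DLL-style boundary reduces the comparator to.
    type CmpFn = extern "C" fn(i32, i32) -> i32;

    extern "C" fn by_abs_c(a: i32, b: i32) -> i32 {
        a.abs().cmp(&b.abs()) as i32 // -1 / 0 / 1
    }

    // Pointer path: every comparison is an opaque indirect call, so no
    // inlining (and no downstream dead-code removal) is possible when the
    // callee really lives in another binary.
    fn sort_via_pointer(v: &mut [i32], cmp: CmpFn) {
        v.sort_unstable_by(|a, b| cmp(*a, *b).cmp(&0));
    }

    fn main() {
        let mut v = vec![-3, 1, -2];
        sort_generic(&mut v);
        let mut w = vec![-3, 1, -2];
        sort_via_pointer(&mut w, by_abs_c);
        assert_eq!(v, w); // both yield: 1, -2, -3
    }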