Why would I pair-program with someone who doesn’t understand doubly-linked lists?
It is doable, just not as easy as in other languages, because a production-grade linked list requires unsafe code: Rust's ownership model fundamentally conflicts with the doubly-linked structure. Each node in a doubly-linked list needs to point to both its next and previous nodes, but Rust's ownership rules don't easily allow for multiple owners of the same data or circular references.
You can implement one in safe Rust using Rc<RefCell<Node>> (reference counting with interior mutability), but that adds runtime overhead and isn't as performant. Or you can use raw pointers with unsafe code, which is what most production implementations do, including the standard library's LinkedList.
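For the curious, a minimal sketch of the safe approach (names are illustrative): Rc for shared ownership, RefCell for runtime-checked mutation, and Weak for the back-links so the cycle can still be freed:

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // shared ownership of the next node
    prev: Option<Weak<RefCell<Node>>>, // Weak back-link breaks the Rc cycle
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));
    println!("{} -> {}", first.borrow().value,
             first.borrow().next.as_ref().unwrap().borrow().value);
    // Follow the Weak back-link from `second` to `first`:
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    println!("{} <- {}", back.borrow().value, second.borrow().value);
}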
Almost 90% of the Rust I write these days is async. I avoid non-async / blocking libraries where possible.
I think this whole issue is overblown.
Trying to construct permanent data structures using non-owning references is a very common novice mistake in Rust. It's similar to how users coming from GC languages may expect pointers to local variables to stay valid forever, even after leaving the scope/function.
Just like in C you need to know when malloc is necessary, in Rust you need to know when self-contained/owning types are necessary.
What's changed since 2015 is that we ironed out some of the wrinkles in the language (non-lexical lifetimes, async) but the fundamental mental model shift required to think in terms of ownership is still a hurdle that trips up newcomers.
A good way to get people comfortable with the semantics of the language before the borrow checker is to encourage them to clone() strings and structs for a bit, even if the resulting code is not performant.
Once they dip their toes into threading and async, Arc<Lock<T>> is their friend, and interior mutability gives them some fun distractions while they absorb the more difficult concepts.
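For newcomers, the canonical shape of that (here with Mutex standing in for the generic "Lock") looks something like:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state across threads: Arc for shared ownership,
    // Mutex for interior mutability with exclusive access.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles { h.join().unwrap(); }
    println!("{}", *counter.lock().unwrap()); // 4
}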
[0]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
When it came time for me to undo all the async-trait library hack stuff I wrote after the feature landed in stable, I realized I wasn't really held back by not having it.
A flat learning curve means you never learn anything :-\
Most explanations of ownership in Rust are far too wordy. See [1]. The core concepts are mostly there, but hidden under all the examples.
- Each data object in Rust has exactly one owner.
- Ownership can be transferred in ways that preserve the one-owner rule.
- If you need multiple ownership, the real owner has to be a reference-counted cell. Those cells can be cloned (duplicated).
- If the owner goes away, so do the things it owns.
- You can borrow access to a data object using a reference.
- There's a big distinction between owning and referencing.
- References can be passed around and stored, but cannot outlive the object. (That would be a "dangling pointer" error.)
- This is strictly enforced at compile time by the borrow checker.
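In (hypothetical) code form, a few lines exercise most of those rules:

fn main() {
    let owner = String::from("data");   // exactly one owner
    let reference = &owner;             // borrowing doesn't transfer ownership
    println!("{reference}");            // fine: the owner is still alive

    let moved = owner;                  // ownership transferred; one-owner rule preserved
    // println!("{owner}");             // error[E0382]: borrow of moved value: `owner`

    drop(moved);                        // the owner goes away, so does the String
}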
That explains the model. Once that's understood, all the details can be tied back to those rules.

[1] https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...
I've discussed this with some of the Rust devs. The trouble is traits. You'd need to know if a trait function could borrow one of its parameters, or something referenced by one of its parameters. This requires analysis that can't be done until after generics have been expanded. Or a lot more attributes on trait parameters. This is a lot of heavy machinery to solve a minor problem.
Bonus: do it with no heap allocation. This actually makes it easier because you basically don’t deal with lifetimes. You just have a state object that you pass to your input system, then your guest cpu system, then your renderer, and repeat.
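A rough sketch of that loop (types and opcodes invented for illustration, not the linked project's actual code):

struct Chip8State {
    memory: [u8; 4096],
    pc: u16,
}

fn handle_input(_state: &mut Chip8State) { /* poll keys */ }

fn step_cpu(state: &mut Chip8State) {
    let opcode = u16::from_be_bytes([
        state.memory[state.pc as usize],
        state.memory[state.pc as usize + 1],
    ]);
    match opcode & 0xF000 {
        0x1000 => state.pc = opcode & 0x0FFF, // JP addr
        _ => state.pc += 2,                   // everything else: advance
    }
}

fn render(_state: &Chip8State) { /* draw framebuffer */ }

fn main() {
    // One owned state object, no heap, no lifetimes: just pass it along.
    let mut state = Chip8State { memory: [0; 4096], pc: 0x200 };
    for _ in 0..10 {
        handle_input(&mut state);
        step_cpu(&mut state);
        render(&state);
    }
}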
And I mean… look just how incredibly well a match expression works for opcode handling: https://github.com/ablakey/chip8/blob/15ce094a1d9de314862abb...
My second (and final) rust project was a gameboy emulator that basically worked the same way.
But one of the best things about learning by writing an emulator is that there’s enough repetition you begin looking for abstractions and learn about macros and such, all out of self discovery and necessity.
But if you come from Javascript or Python or Go, where all this is automated, it's very strange.
Languages I liked, I liked immediately. I didn’t need to climb a mountain first.
It has a built in coach: the borrow checker!
Borrow checker wouldn't get off my damn case - errors after errors - so I gave in. I allowed it to teach me - compile error by compile error - the proper way to do a threadsafe shared-memory ringbuffer. I was convinced I knew. I didn't. C and C++ lack ownership semantics so their compilers can't coach you.
Everyone should learn Rust. You never know what you'll discover about yourself.
I’m not calling this the pinnacle of async design, but it’s extremely familiar and is pretty good now. I also prefer to write as much async as possible.
In point of fact, I think the intended chart of the idiom is effort (y axis) to reach a given degree of mastery (x axis).
It's an abstraction and convenience to avoid fiddling with registers and memory at the lowest level.
Everyone may enjoy the computation platform of their choice in their own way. No need to require one way or another. You might feel all fired up about a particular high-level language that you think abstracts and deploys in the way you think is right. Not everyone does.
You don't need a programming language to discover yourself. If you become fixated on a particular language or paradigm then there is a good chance you have lost sight of how to deal with what needs dealing with.
You are simply stroking your tools, instead of using them properly.
All the jargon definitely distracted me from grasping that simple core concept.
- another think coming -> another thing coming
- couldn't care less -> could care less
- the proof of the pudding is in the eating -> the proof is in the pudding
It's usually not useful to try to determine the meaning of the phrases on the right because they don't have any. What does it mean for proof to be in a pudding for example?
The idiom itself is fine, it's just a black box that compares learning something hard to climbing a mountain. But learning curves are real things that are still used daily so I just thought it was funny to talk as if a flat one was desirable.
I very rarely have to care about future pinning, mostly just to call the pin macro when working with streams sometimes.
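For reference, the typical spot where it comes up (assuming the futures crate): a stream built from an async closure is !Unpin, so it has to be pinned before you can call .next() on it:

use futures::{stream, StreamExt};
use std::pin::pin;

async fn consume() {
    // The async block makes this stream !Unpin...
    let s = stream::unfold(0, |n| async move {
        if n < 3 { Some((n, n + 1)) } else { None }
    });
    // ...so pin it to the stack before iterating.
    let mut s = pin!(s);
    while let Some(x) = s.next().await {
        println!("{x}");
    }
}

fn main() {
    futures::executor::block_on(consume());
}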
Side note: Stack allocation is faster to execute as there's a higher probability of it being cached.
Here is a free book for a C++ to Rust explanation. https://vnduongthanhtung.gitbooks.io/migrate-from-c-to-rust/...
An example: parsing a cookie header to get cookie names and values.
In that case, I settled on storing indexes indicating the ranges of each key and value instead of string slices, but it’s obviously a bit more error prone and hard to read. Benchmarking showed this to be almost twice as fast as cloning the values out into owned strings, so it was worth it, given it is in a hot path.
I do wish it were easier though. I know there are ways around this with Pin, but it’s very confusing IMO, and still you have to work with pointers rather than just having a &str.
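A hypothetical sketch of the index-range approach (not the actual code from that project):

use std::ops::Range;

// Instead of borrowing `&str` slices (which would tie the parsed cookies to
// the header's lifetime) or cloning owned Strings, store byte ranges into
// the original header.
struct CookiePos {
    name: Range<usize>,
    value: Range<usize>,
}

fn parse(header: &str) -> Vec<CookiePos> {
    let mut out = Vec::new();
    let mut start = 0;
    for pair in header.split("; ") {
        if let Some(eq) = pair.find('=') {
            out.push(CookiePos {
                name: start..start + eq,
                value: start + eq + 1..start + pair.len(),
            });
        }
        start += pair.len() + 2; // skip "; "
    }
    out
}

fn main() {
    let header = "id=42; theme=dark";
    for c in parse(header) {
        println!("{} = {}", &header[c.name], &header[c.value]);
    }
}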
Note I’m not being critical of the author here. I think it’s lovely to turn your passion into trying to help others learn.
The compiler knows the returned reference must be tied to one of the incoming references (since you cannot return a reference to something created within the function, and all inputs are references, the output must therefore be referencing the input). But the compiler can’t know which reference the result comes from unless you tell it.
Theoretically it could tell by introspecting the function body, but the compiler only works on signatures, so the annotation must be added to the function signature to let it determine the expected lifetime of the returned reference.
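The classic example from the Rust book: neither input's lifetime can be inferred as the output's, so the signature ties them together explicitly:

// `'a` says: the returned reference lives no longer than the
// shorter-lived of the two inputs.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let a = String::from("long string is long");
    let b = String::from("short");
    println!("{}", longest(&a, &b));
}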
Rust also has the “single mutable reference” rule. If you have a mutable reference to a variable, you can be sure nobody else has one at the same time. (And the value itself won’t be mutated).
Mechanically, every variable can be in one of 3 modes:
1. Directly editable (x = 5)
2. Have a single mutable reference (let y = &mut x)
3. Have an arbitrary number of immutable references (let y = &x; let z = &x).
The compiler can always tell which mode any particular variable is in, so it can prove you aren’t violating this constraint.
If you think in terms of C, the “single mutable reference” rule is rust’s way to make sure it can slap noalias on every variable in your program.
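The three modes, as (hypothetical) code:

fn main() {
    let mut x = 5;
    x += 1;            // mode 1: directly editable

    let y = &mut x;    // mode 2: exactly one mutable reference
    // let z = &x;     // error[E0502]: cannot borrow `x` as immutable
    //                 // because `y` is still in use below
    *y += 1;

    let a = &x;        // mode 3: any number of immutable references
    let b = &x;
    println!("{a} {b}");
}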
This is something that would be great to see in rust IDEs. Wherever my cursor is, it’d be nice to color code all variables in scope based on what mode they’re in at that point in time.
However, for high-performance systems software specifically, objects often have intrinsically ambiguous ownership and lifetimes that are only resolvable at runtime. Rust has a pretty rigid view of such things. In these cases C++ is much more ergonomic because objects with these properties are essentially outside the Rust model.
In my own mental model, Rust is what Java maybe should have been. It makes too many compromises for low-level systems code such that it has poor ergonomics for that use case.
Why RAII then?
> C++ to Rust explanation
I've seen this one. It is very newbie-oriented, filled with trivial examples, and doesn't even have a comparison table between Rust references and C++ smart pointers.
My gut feeling says that there's a fair bit of Stockholm Syndrome involved in the attachments people form with Rust.
You could see similar behavioral issues with C++ back in the days, but Rust takes it to another level.
I don't know about C#, but at least in Rust, one reason is that normal (non-async) functions have the property that they will run until they return, they panic, or the program terminates. I.e. once you enter a function it will run to completion unless it runs "forever" or something unusual happens. This is not the case with async functions -- the code calling the async function can just drop the future it corresponds to, causing it to disappear into the ether and never be polled again.
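A tiny illustration (std only): the body of an async fn doesn't run at all unless something polls it, and dropping the future cancels it:

async fn greet() -> String {
    println!("this line may never run");
    String::from("hello")
}

fn main() {
    // Calling an async fn only constructs a state machine; nothing runs yet.
    let fut = greet();
    // Dropping the future: the body above never executes. An executor could
    // equally poll it partway through and then drop it mid-await.
    drop(fut);
}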
In practice, it really doesn't. The difficulty of implementing doubly linked lists has not stopped people from productively writing millions of lines of Rust in the real world. Most programmers spend less than 0.1% of their time reimplementing linked data structures; rust is pretty useful for the other 99.9%.
>Why RAII then?
Their quote is probably better rephrased as _being explicit and making the programmer make decisions when the compiler's decision might impact safety_
Implicit conversion between primitives may impact the safety of your application. Implicit memory management and initialization is something the compiler can do safely and is central to Rust's safety story.
At that point you might as well be writing Java or Go or whatever though. GC runtimes tend actually to be significantly faster for this kind of code, since they can avoid all those copies by sharing the underlying resource. By the same logic, you can always refactor the performance-critical stuff via your FFI of choice.
I think it is a very good example of why "design by committee" is good. The "Rust Committee" has done a fantastic job
Thank you
They say a camel is a horse designed by a committee (https://en.wiktionary.org/wiki/a_camel_is_a_horse_designed_b...)
Yes:
* Goes twice as far as a horse
* On half the food and a quarter the water of a horse
* Carries twice as much as a horse
Yes, I like design by committee. I have been on some very good, and some very bad, committees, but there is nothing like the power of a good committee.
Thank you Rust!
Frankly most of the complexity you're complaining about stems from attempts to specify exactly what magic the borrow checker can prove correct and which incantations it can't.
Yes the borrow checker is central to Rust, but there are other features to the language that people _also_ need to learn and explore to be productive. Some of these features may attract them to Rust (like pattern matching / traits / etc.)
Stop!
If you are using a doubly linked list you (probably) do not have to, or want to.
There is almost no case where you need to traverse a list in both directions (do you want a tree?)
A doubly linked list wastes memory with the back links that you do not need.
A singly linked list is trivial to reason about: There is this node and the rest. A doubly linked list more than doubles that cognitive load.
Think! Spend time carefully reasoning about the data structures you are using. You will not need that complicated, wasteful, doubly linked list.
I think that it's happened to some degree for almost every computer programming language for a while now - first were the C guys enamoured with their NOT Pascal/Fortran/ASM, then came the C++ guys, then Java, Perl, PHP, Python, Ruby, Javascript/Node, Go, and now Rust.
The vibe coding people seem to be the ones that are usurping Rust's fan boi noise at the moment - every other blog is telling people how great the tool is, or how terrible it is.
What is the evidence for this? Plenty of high-performance systems software (browsers, kernels, web servers, you name it) has been written in Rust. Also Rust does support runtime borrow-checking with Rc<RefCell<_>>. It's just less ergonomic than references, but it works just fine.
One reason why async-await is trivial in .NET is garbage collector. C# rewrites async functions into a state machine, typically heap allocated. Garbage collector automagically manages lifetimes of method arguments and local variables. When awaiting async functions from other async functions, the runtime does that for multiple async frames at once but it’s fine with that, just a normal object graph. Another reason, the runtime support for all that stuff is integrated into the language, standard library, and most other parts of the ecosystem.
Rust is very different. The concurrency runtime is not part of the language; the standard library defines only the bare minimum, essentially just the APIs. The concurrency runtime is implemented by “Tokio” external library. Rust doesn't have a GC; instead, it has a borrow checker that insists on exactly one owner of every object at all times, makes all memory allocations explicit, and exposes all these details to the programmer in the type system.
These factors make async Rust even harder to use than normal Rust.
Whether it's more efficient to carry a second pointer around when manipulating the list, or store a second pointer in every list node (aka double linked list) is up to your problem space.
Or whether an O(n) removal is acceptable.
Linked lists are perfect for inserting/deleting nodes, as long as you never need to traverse the list or access any specific node.
I don’t specifically like Rust itself. And one doesn’t need a programming language to discover themselves.
My experience learning Rust has been that it imposes enough constraints to teach me important lessons about correctness. Lots of people can learn more about correctness!
I’ll concede- “everyone” was too strong; I erred on the side of overly provocative.
A trivial example is multiplication of large square matrices. An implementation needs to leverage all available CPU cores, and a traditional way to do that, found in many BLAS libraries, is to compute different tiles of the output matrix on different CPU cores. A tile is not a contiguous slice of memory; it's a rectangular segment of a dense 2D array. Storing different tiles of the same matrix in parallel is trivial in C++, very hard in Rust.
In my experience, hobbyist Rust projects end up using unwrap and panic all over the place, and it’s a giant mess that nobody will ever refactor.
C++ does that too with RAII. Go ahead and use whatever STL containers you like, emplace objects onto them, and everything will be safely single-owned with you never having to manually new or delete any of it.
The difference is that C++'s guarantees in this regard derive from a) a bunch of implementation magic that exists to hide the fact that those supposedly stack-allocated containers are in fact allocating heap objects behind your back, and b) you cooperating with the restrictions given in the API docs, agreeing not to hold pointers to the member objects or do weird things with casting. You can use scoped_ptr/unique_ptr but the whole time you'll be painfully aware of how it's been bolted onto the language later and whenever you want you can call get() on it for the "raw" underlying pointer and use it to shoot yourself in the foot.
Rust formalizes this protection and puts it into the compiler so that you're prevented from doing it "wrong".
I'd rather have that than all the issues of JavaScript or any other weakly, dynamically typed language.
After all this ordeal, I can confidently say that learning Rust was one of the best decisions I’ve made in my programming career. Declaring types, structs, and enums beforehand, then writing functions to work with immutable data and pattern matching, has become the approach I apply even when coding in other languages.
I'm trying to phrase this as delicately as I can but I am really puzzled.
If someone wrote an article about how playing the harp is difficult, just stick with it... would you also say that playing the harp is a terrible hobby?
I find it relatively simple. Much simpler than C++ (obviously). For someone who can write C++ and has some experience with OCaml/Haskell/F#, it's not a hard language.
The near impossibility of building a competitive high-performance I/O scheduler in safe Rust is almost a trope at this point in serious performance-engineering circles.
To be clear, C++ is not exactly comfortable with this either but it acknowledges that these cases exist and provides tools to manage it. Rust, not so much.
Before Rust I was hearing the same argument from Haskell or Scala developers trying to justify their language of choice.
I know Rust is here to stay, but I think it’s mostly because it has a viable ecosystem and quality developer tools. Its popularity is _in spite of_ many of its language features that trade that extra 1% of safety for 90% extra learning curve.
Cloning small objects is lightning fast; it turns out in a lot of these cases it makes sense to just do the clone, especially on a first pass. The nice thing is that at least Rust makes you explicitly clone() so you're aware when it's happening, vs other languages where it's easy to lose track of what is and isn't costing you memory. So you can see that it's happening, you can reason about it, and once the bones of the algorithm are in place, you can say "okay, yes, this is what should ultimately own this data, and here's the path it's going to take to get there, and these other usages will be references or clones."
“The common English usage aligns with a metaphorical interpretation of the learning curve as a hill to climb.”
Followed by a graph plotting x “experience” against y “learning.”
It's really not, it's the way python works. Heap allocations are "fast" on modern CPUs that are too fast to measure for most stuff, but they're much (much) slower than the function call and code you're going to use to operate on whatever the thing it was you cloned.
Code that needs memory safety and can handle performance requirements like this has many options for source language, almost none of which require blog posts to "flatten the learning curve".
(And to repeat: it's much slower than a GC which doesn't have to make the clone at all. Writing Rust that is "Slower Than Java" is IMHO completely missing the point. Java is boring as dirt, but super easy!)
Complex is the wrong word. Baffling is a better word. Or counterintuitive, or cumbersome. If “easy enough for someone with experience in C++, OCaml, Haskell, and F#” were the same thing as “not hard” then I don’t think this debate would come up so frequently.
I know this feels like a positive vibe post and I don’t want to yuck anyone’s yum, but speaking for myself when someone tells me “everyone should” do anything, alarm bells sound off in my mind, especially when it comes to programming languages.
If you’re going to write an emulator in this style, why even use an imperative language when something like Haskell is designed for this sort of thing?
Note that this is an intentional choice rather than a limitation, because if the compiler analyzed the function body to determine lifetimes of parameters and return values, then changing the body of a function could be a non-obvious breaking API change. If lifetimes are only dependent on the signature, then its explicit what promises you are or are not making to callers of a function about object lifetimes, and changing those promises must be done intentionally by changing the signature rather than implicitly.
why
Without explicit lifetimes, `longest` wouldn't compile at all: the "lifetime elision" rules, which let you omit lifetimes in most cases, can only fill in the output lifetime when there is a single input reference (or a &self).
But `longest` can return either reference. The added lifetimes make the function's signature say exactly that: the lifetime of the return value is the minimum of the lifetimes of the arguments, not the lifetime of the first one.
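A minimal illustration of the two cases:

// Elision: with a single input reference this is shorthand for
// fn first_word<'a>(s: &'a str) -> &'a str
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// Two input references: the compiler can't pick one, so the signature
// must say the result lives as long as the shorter-lived of the two.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    println!("{}", first_word("hello world"));
    println!("{}", longest("abc", "de"));
}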
This. Many trivial changes would break the API. This is not ideal for library developers.
You can argue it is broken already, but this would force the breakage onto every API caller, not just some broken callers.
I'm not acquainted with Rust, so I don't really know, but I wonder if the wording plays a role in the difficulty of concept acquisition here. Analogies are often double edged tools.
Maybe sticking to a more straight memory related vocabulary as an alternative presentation perspective might help?
That doesn't mean you should, though. Imagine how much energy is being wasted globally on bad Python code... The difference is of course that anyone can write it, and not everyone can write Rust. I'm not personally a big fan of Rust; I'd choose Zig any day of the week... but then I'd also choose C over C++, and I frankly do when I optimise Python code that falls in those last 5%.

From the perspective of someone who really has to understand how Python works under the hood and when to do what, I'd argue that Rust is a much easier language to learn, with a lot less "design smell". I suppose Python isn't the greatest example, as even those of us who love it know that it's a horrible language. But it has quite clearly become the language of "everyone", even more so in the age of LLMs. Since our AI friends will not write optimised Python unless you specifically tell them to use things like generators and where to use them, and since you (not you personally) won't because you've never heard of a generator before, our AI overlords won't actually help.
It has one: use raw pointers and unsafe. People are way too afraid of unsafe, it's there specifically to be used when needed.
From that angle, it indeed doesn’t seem to make sense.
I think, but might be completely wrong, that viewing these actions from their usual meaning is more helpful: you own a toy, it's yours to do with as you please. You borrow a toy, it's not yours, you can't do whatever you want with it, so you can't hold on to it if the owner doesn't allow it, and you can't modify it for the same reasons.
Thankfully C# has mostly caught up with those languages, as the other language I enjoy using.
After that, is the usual human factor on programming languages adoption.
Most of my applications are written in C#.
C# provides memory safety guarantees very comparable to Rust, other safety guarantees are better (an example is compiler option to convert integer overflows into runtime exceptions), is a higher level language, great and feature-rich standard library, even large projects compile in a few seconds, usable async IO, good quality GUI frameworks… Replacing C# with Rust would not be a benefit.
I have a hard time understanding why people have such a hard time accepting that you need to convert between different text representations when it's perfectly accepted for numbers.
Ownership is easy, borrowing is easy, what makes the language super hard to learn is that functions must have signatures and uses that together prove that references don't outlive the object.
Also: it's better not to store a referenced object in a type unless it's really, really needed, as it makes the proof much more complex.
It's really too bad rust went the RAII route.
The problem with articles like this is that they don't really get to the heart of the problem:
There are programs that Rust will simply not let you write.
Rust has good reasons for this. However, this is fundamentally different from practically every programming language that people have likely used before where you can write the most egregious glop and get it to compile and sometimes even kinda-sorta run. You, as a programmer, have to make peace with not being able to write certain types of programs, or Rust is not your huckleberry.
I started to learn Rust, but I was put off by the heavy restrictions the language imposes and the attitude that this is the only safe way. There's a lack of acknowledgement, at least in beginner materials, that by choosing to write safe Rust you're sacrificing many perfectly good patterns that the compiler can't understand in exchange for safety. Eventually I decided to stop because I didn't like that tradeoff (and I didn't need it for my job or anything)
I remember both MS and goog having talks about real-world safety issues in the range of 50% of cases were caused by things that safe rust doesn't allow (use after free, dangling pointers, double free, etc). The fact that even goog uses it, while also developing go (another great language with great practical applications) is telling imo.
Unfortunately going from most languages to Rust forces you to speedrun this transition.
- it's very different from other languages. That's intentional but also an obstacle.
- it's a very complex language with a very terse syntax that looks like people are typing with their elbows and are hitting random keys. A single character can completely change the meaning of a thing. And it doesn't help that a lot of this syntax deeply nested.
- a lot of its features are hard to understand without deeper understanding of the theory behind them. This adds to the complexity. The type system and the borrowing mechanism are good examples. Unless you are a type system nerd, a lot of that is just gobbledygook to the average Python or Javascript user. This also makes it a very inappropriate language for people who don't have a master's degree in computer science. Which these days is most programmers.
- it has widely used macros that obfuscate a lot of things that further adds to the complexity. If you don't know the macro definitions, it just becomes harder to understand what is going on. All languages with macros suffer from this to some degree.
I think LLMs can help a lot here these days. When I last tried to wrap my head around Rust that wasn't an option yet. I might have another go at it at some point, but it's not a priority for me currently. LLMs have definitely lowered the barrier for me to try new stuff. I definitely see the value of a language like Rust, but it doesn't really solve a problem I have with the languages I do use (Kotlin, Python, TypeScript, etc.). I've used most popular languages at some point in my life. Rust is unique in how difficult it is to learn.
Second, Mojo's lifetime does not tell the compiler when a value is safe to use but when it is safe to delete; in this way the lifetime is not scope-based. References will extend the lifetime of the value they reference, but values will be destroyed immediately after their last use. In Mojo you'll never see "value does not live long enough".
Just these two design decisions define away so many ergonomic issues.
1. In real life I can borrow a toy from you and while I have that toy in my hands, the owner can exchange ownership with somebody else, while the object is borrowed by me. I.e. in real life the borrowing is orthogonal to ownership. In rust you can't do that.
2. Borrowing a toy is more akin to how mutable references work in rust. Immutable references allow multiple people to play with the same toy simultaneously, provided they don't change it.
Analogies are just analogies
I mean, you can't expect to learn a new language in a few days, it'll always take a bit of work. My feeling is that people complaining about the language being hard aren't putting in the effort.
My experience is that Rust is a relatively small language which doesn't introduce a lot of new concepts. The syntax is quite intuitive, and the compiler super helpful. The borrower checker was the only new thing for me. I'm not an expert at all, but my experience is that after spending 2 weeks full-time reading books and experimenting, I was able to work professionally with the language without feeling too much friction.
On the other hand, after spending much more time on C++, I don't feel really comfortable with the language.
I thought the Rust Book was too verbose but I liked Comprehensive Rust: https://google.github.io/comprehensive-rust/
I felt like I understood the stuff in the book based on cursory reading, but I haven't tried to actually use it.
Personally, I’ve been using to_owned instead. Some of the people looking at my code don’t write rust, and I figure it makes things a bit easier to understand.
The most complicated aspect of borrowing comes from the elision rules, which will silently do the wrong thing and work fantastically until they don't. At that point the compiler error points at a function, complaining that a lifetime parameter of a parameter in a trait method has to live too long, when the real problem was a lifetime in the underlying struct or a previously broken lifetime bound. Those elision rules, again, are not intuitive and don't fall out of your explanation axiomatically. They were decisions made to attempt to simplify the life of programmers.
"Significantly" and "this kind" are load bearing sentences here. In applications where predictable latency is desired, cloning is better than GC.
This is also the baby steps of learning the language. As programmers get better they will recognize when they are making superfluous clones. Refactoring performance-critical stuff in FFI, however, is painful and won't get easier with time.
Furthermore, in real applications, this only really applies to Strings and vectors. In most of my applications most `clones` are of reference types - which is only marginally more expensive than memory sharing under a GC.
Many people will think, I have a garbage collected language, rust has nothing to teach me. Even in garbage collected languages, people create immutable types because the possibility of shared references with mutability makes things incredibly chaotic that they look for immutability as a sort panacea. However, once you have immutable types you quickly realize that you also need ergonomic ways of modifying those objects, the methods you create to do so are often more cumbersome than what would be permitted for a mutable object. You wish there was some way to express, "There is a time where this object is mutable and then it becomes immutable." Enter the borrow checker.
Once you are borrow checking... why are you garbage collecting? Well, expressing those timelines of mutability and existence is a cost because you need to understand the timeline and most people would rather not spend that energy--maybe mutability or the poor ergonomics of immutable objects wasn't so bad. So, I garbage collect because I do not want to understand the lifetimes of my objects. Not understanding the lifetimes of objects is what makes shared mutability hard. Immutability eliminates that problem without requiring me to understand. Rust can teach this lesson to you so that you make an informed choice.
Of course, you can also just listen to me and learn the same lesson but there is value for many people to experience it.
Coloring just exacerbates the issues because it's viral, not because coloring itself is an issue.
And miss out on Option, Result, proper enums, powerful pattern matching, exhaustive pattern matching, affine types, traits, doctests... and the many other QoL features that I sorely miss when I drop to e.g. TS/Node.
I'm not using Rust for the borrow checker, but it's nice to have when I need it to hold my hand and not that much of an issue when I don't. I wanted to like Go but I just can't.
Dropping to no_std though... that was a traumatic experience.
Instead, I would argue that Rust is favoring a form of explicitness together with correctness. You have to clean up that resource. I have seen arguments that you should be allowed to leak resources, and I am sympathetic, but if we agree on explicitness as a goal then perhaps you might understand the perspective that a leak should be explicit, not implicit in the lack of a call to some method. Since linear types are difficult to implement, auto-drop is the easier path if you favor making the correct thing easy. If you want to leak your resource, stash it in some leak list or unsafe-erase it. That is the thing that should be explicit: the unusual choice, not all choices and not the usual choice alone.
But yeah, the drop being implicit in the explicit initialization does lead to developers ignoring it just like a leak being implicit if you forget to call a function often leads to unintentionally buggy programs. So when a function call ends they won't realize that a large number of objects are about to get dropped.
To answer your original question, the rationale is not in one concise location but is spread throughout the various RFCs that lead to the language features.
Because cloning as opposed to copying is expensive and it generates a new instance of a type. In C, you don't clone, you simply copy the struct or pointer, which will lead to a pointer to the same memory or a struct with members pointing to the same memory.
C++ on the other hand has a copy constructor, and you have to move explicitly, often generating unnecessary copies (in the sense of clone).
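A small illustration of the distinction:

// Plain-bits types can be Copy: assignment duplicates them implicitly,
// like copying a struct in C.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;                 // Copy: `p` stays usable
    println!("{} {}", p.x, q.y);

    let s = String::from("hi");
    let t = s.clone();         // Clone: explicit, allocates a new buffer
    let u = s;                 // move: the old name is invalidated
    // println!("{s}");        // error[E0382]: use of moved value: `s`
    println!("{t} {u}");
}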
> Mojo's lifetime does not tell the compiler when a value is safe to use but when it is safe to delete,
What happens if you pass the variable mutably to a function?
For your concrete example of subdividing matrices, that seems like it should be fairly straightforward in Rust too, if you convert your mutable reference to the data into a pointer, wrap your pointer arithmetic shenanigans in an unsafe block, and add a comment at the top saying more or less "this is safe because the different subprograms are always operating on disjoint subsets of the data, and therefore no mutable aliasing can occur"?
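Roughly like this (a sketch of that suggestion, not production code):

use std::thread;

fn main() {
    let mut data = vec![0u64; 8];
    // Raw pointers aren't Send, so smuggle the address as usize.
    let addr = data.as_mut_ptr() as usize;

    // SAFETY: the two threads write disjoint halves of `data`,
    // so no mutable aliasing can occur.
    thread::scope(|s| {
        s.spawn(move || {
            let p = addr as *mut u64;
            for i in 0..4 { unsafe { *p.add(i) = 1 } }
        });
        s.spawn(move || {
            let p = addr as *mut u64;
            for i in 4..8 { unsafe { *p.add(i) = 2 } }
        });
    });
    println!("{data:?}"); // [1, 1, 1, 1, 2, 2, 2, 2]
}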
Python is my favourite, C is elegance in simplicity and Go is tolerable.
And what is that? It's easy to fall into the trap of making explanations that are only very good if you already understand.
Is knowing C++ a pre-requisite?
What happens in what manner? Mojo uses the ASAP memory model: values will always be destroyed at the point of their last use. Mojo's dataflow analysis tracks this.
In terms of safety, Mojo will enforce `alias xor mutability` - like in Rust.
> C++ on the other hand has a copy constructor, and you have to move explicitly, often generating unnecessary copies (in the sense of clone)
Mojo also has copy and move constructors, but unlike in C++ these are not synthesised by default; the type creator has to either explicitly define the constructors or add a synthesiser. In Mojo, you can have types that are not copyable and not movable, these types can only be passed by reference. You can also have types that are copyable but not movable, or movable but not copyable.
Historically, programmers drastically overestimate their ability to write perfectly safe code, so it's an enormous benefit if the compiler is able to understand whether it's actually safe.
If it weren't for the always-hated SecDevOps folks pushing for the security tooling developers don't care about, at the very least in build pipelines, those tools would keep collecting digital dust.
C has a simple syntax, but it is most certainly not elegant.
Variable in rust is not a label you can pass around and reuse freely. It's a fixed size physical memory that values can be moved into or moved out of. Once you understand that everything makes sense. The move semantics, cloning, borrowing, Sized, impl. Every language design element of rust is a direct consequence of that. It's the values that get created, destroyed and moved around and variables are actual first-class places to keep them with their own identity separate from values that occupy them. It's hard to notice this because Rust does a lot to pretend it's a "normal" language to draw people in. But for anyone with experience in programming that attempts to learn Rust I think this realization could make the process at least few times easier.
It's hard to shift to this new paradigm and embrace it, so in the meantime feel free to use a lot of Rc<> and cloning if you just need to bang out some programs like you would in any other mainstream language.
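Concretely, that mental model looks like this in (hypothetical) code:

fn main() {
    let s = String::from("hi"); // a value moves into the place `s`
    let t = s;                  // the value moves out of `s` into `t`
    // println!("{s}");         // error[E0382]: `s` is now an empty place
    println!("{t}");            // the value lives in `t` now
}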
On the opposite end, "unsafe" Rust is not simple at all, but without it we can't write many programs. It's comparable to C, maybe even worse in some ways. It's easy to break rules (aliasing, for example). Raw pointer manipulation is less ergonomic than in C, C++, Zig, or Go. But raw pointers are one of the most important concepts in CS. This part is very important for learning; we can't just close our eyes to it.
And I'm not even talking about Rust's open problems, such as: thread_local (still questionable), custom allocators (still nightly), Polonius (nightly, hope it succeeds), panic handling (not acceptable in kernel-level code), and "pin", which seems like a workaround (hack) for async and self-referential issues caused by a lack of proper language design early on — many learners struggle with it.
Rust is a good language, no doubt. But it feels like a temporary step. The learning curve heavily depends on the kind of task you're trying to solve. Some things are super easy and straightforward, while others are very hard, and the eventual solutions are not as simple, intuitive or understandable compared to, for example, C++, C, Zig, etc.
Languages like Mojo, Carbon (I hope it succeeds), and maybe Zig (not sure yet) are learning from Rust and other languages. One of them might become the next major general-purpose systems language for the coming decades with a much more pleasant learning curve.
D example, https://godbolt.org/z/bbfbeb19a
> Error: returning `& my_value` escapes a reference to local variable `my_value`
C# example, https://godbolt.org/z/Y8MfYMMrT
> error CS8168: Cannot return local 'num' by reference because it is not a ref local
C++ is just Rust without any attempt at tracking variable access and cloning, which leads to a mess because people are too terrible at doing that manually and ad hoc. So Rust fixes that.
This is all despite a long career as a programmer. Seems like some things just take repetition.
The "Dagger" dependency injection framework for the JVM took me 3 'learning attempts' to understand as well. May say more about myself than about learning something somewhat complicated.
> "1" + 2
3
And it's utter madness that everyone does anything important with languages like that.

Similarly here, I can't understand for example _who_ is the owner. Is it a stack frame? Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee's stack will always be destroyed first, so there is no danger in hanging on to it until the callee returns? Is it for optimization, so that we can get rid of the object sooner? Could the owner be something other than a stack frame? Why can a mutable reference be handed out only once? If I'm only using a single thread, one function is guaranteed to finish before the other starts, so what is the harm in handing mutable references to both? Just slap my hands when I'm actually using multiple threads.
Of course, there are reasons for all of these things and they probably are not even that hard to understand. Somehow, every time I want to get into Rust I start chasing these things and give up a bit later.
The heap is but one source for allocator-backed memory. I've used pieces of stack for this, too. One could also use an entirely statically sized and allocated array.
The second part of your statement is very debatable based on what safe means in this case, and whether it's an enormous benefit for a given situation.
There's plenty of stories [0][1] about Rust getting in the way and being very inappropriate for certain tasks and goals, and those "enormous benefits" can become "enormous roadblocks" in different perspectives and use cases.
In my personal and very subjective opinion, I think Rust can be very good when applied to security applications, realtime with critical safety requirements (in some embedded scenarios for example), that sort of stuff. I think it really gets in the way too much in other scenarios, with demanding rules and patterns that prevent you from experimenting easily and exploring solutions quickly.
[0] https://barretts.club/posts/rust-for-the-engine/
[1] https://loglog.games/blog/leaving-rust-gamedev/
Can you specify a few of these programs?
I can see where Rust might not allow you to write something the way you want to, but I fail to see how a program would not be expressible in rust...
Perhaps you do software engineering in a given language/framework?
A clutch is fundamental to automotive engineering even if you don’t use one daily.
- A sequence of arbitrary bytes
- A sequence of non-null bytes interpreted as ASCII
- A sequence of unicode code points, in multiple possible encodings
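In Rust those are distinct types with checked conversions between them, e.g. (illustrative):

use std::ffi::CString;

fn main() {
    let s = "héllo";                    // &str: guaranteed valid UTF-8
    let bytes: &[u8] = s.as_bytes();    // view as arbitrary bytes
    let c = CString::new(s).unwrap();   // non-null bytes for C-style APIs
    // Going back is fallible, because not all byte sequences are UTF-8:
    let back = String::from_utf8(bytes.to_vec()).unwrap();
    println!("{s} / {bytes:?} / {c:?} / {back}");
}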
> Accept that learning Rust requires...
> Leave your hubris at home
> Declare defeat
> Resistance is futile. The longer you refuse to learn, the longer you will suffer
> Forget what you think you knew...
Now it finally clicked for me that Orwell's telescreen OS was written in Rust
What that means is, for example, if you have high aesthetic ideals and try to write object-oriented code, you will hit a brick wall eventually. Why? Not because Rust is a bad language, but because you try to write Rust like it is Java or something.
Rust is a very nice language if you respect that there are Rust ways of doing things, and that these ways are more data-oriented than you might be used to.
The strictness can be daunting for beginners, but with increasing complexity it becomes an absolute godsend. Where in other languages I find errors only when they happen, most Rust code just works (provided you write it in a Rust way), because the errors will be caught during compilation.
That doesn't prevent logic errors, but these can be addressed with the absolutely stellar test integration. Now Rust is not all roses, but it is certainly a language worth learning even if you never use it. The ways it mitigates certain classes of errors can be turned into good coding practices for other languages as well.
"You own a toy" is the first thing a child is teached as wrong assumption by reality if not by careful social education, isn't it? The reality is, "you can play with the toy in some time frame, and sharing with others is the only way we can all benefit of joyful ludic moment, while claims of indefinite exclusive use of the toy despite limited attention span that an individual can spend on it is socially detrimental."
Also memory as an abstract object pragmatically operate on very different ground than a toy. If we could duplicate any human hand grabbable object as information carried by memory holding object, then any economy would virtually be a waste of human attention.
¹ edit: actually I was wrong here, I have been in confusion with "fiduciary". Finance instead comes from french "fin"(end), as in "end of debt".
You would think they would be smart enough to realize that a language taking X hours to learn is a language flaw not a user flaw, but modern education focuses on specialization talents rather than general intelligence.
Here's a single-threaded program which would exhibit dangling pointers if Rust allowed handing out multiple references (mutable or otherwise) to data that's being mutated:
let mut v = Vec::new();
v.push(42);
// Address of first element: 0x6533c883fb10
println!("{:p}", &v[0]);
// Put something after v on the heap
// so it can't be grown in-place
let _v2 = v.clone();
v.push(43);
v.push(44);
v.push(45);
// Exceed capacity and trigger reallocation
v.push(46);
// New address of first element: 0x6533c883fb50
println!("{:p}", &v[0]);
If it takes the average person 1 million hours to learn Rust, then the average person won't learn Rust.
If it takes the average person 1 hour to learn Rust, then the average person will learn Rust.
If you were designing a language which would you pick all else being equal?
To your question: no, but I wouldn't be puzzled when most people pick up a guitar instead. (Both are so much more intuitive than any programming language that the metaphor sets false expectations. Slick political move, but it probably just turns more people off of Rust.)
I don't think it's much harder than learning C or C++ which are the only comparable mainstream languages.
Rust isn't a language you should pick up if you're not ready to put in the work. Just like you shouldn't go for full blown automotive grade C coding if you just want to learn coding quickly to get a job or something.
Rust has a steep learning curve, but the harder part (as mentioned in the article) is to unlearn patterns from other programming languages if you think you're already a good programmer.
There’s a difference between “do it like this” and “keep an open mind when you do this”.
You can learn rust any way you want; this is just a guide for how to learn it effectively.
A fairer comparison would be learning Japanese by going to Japan and insisting on speaking English except in Japanese language classes.
Yes, you can do that.
…but it is not the most effective way to learn.
The best way to learn is full immersion. It’s just harder.
If you don’t want that, don’t do it. It’s not a cult. That’s just lazy flippant anti-rust sentiment.
Writing good software most often is not easy. The learning curve of a particular language usually is only a modest part of what it takes.
It definitely takes some getting used to, but there's absolutely times when you could want something to move ownership into a called function, and extending it would be wrong.
An example would be if it represents something you can only do once, e.g. deleting a file. Once you've done it, you don't want to be able to do it again.
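For instance (a hypothetical one-shot type, not from any particular library):

struct TempFile { path: std::path::PathBuf }

impl TempFile {
    // Taking `self` by value consumes the TempFile; the compiler
    // rejects any further use of it after this call.
    fn delete(self) -> std::io::Result<()> {
        std::fs::remove_file(&self.path)
    }
}

fn main() -> std::io::Result<()> {
    let f = TempFile { path: "scratch.txt".into() };
    std::fs::write(&f.path, b"data")?;
    f.delete()?;
    // f.delete()?; // error[E0382]: use of moved value: `f`
    Ok(())
}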
The owned memory may be on a stack frame or it may be heap memory. It could even be in the memory mapped binary.
> Why would a stack frame want to move ownership to its callee
Because it wants to hand full responsibility to some other part of the program. Let's say you have allocated some memory on the heap and handed a reference to a callee then the callee returned to you. Did they free the memory? Did they hand the reference to another thread? Did they hand the reference to a library where you have no access to the code? Because the answer to those questions will determine if you are safe to continue using the reference you have. Including, but not limited to, whether you are safe to free the memory.
If you hand ownership to the callee, you simply don't care about any of that because you can't use your reference to the object after the callee returns. And the compiler enforces that. Now the callee could, in theory give you back ownership of the same memory but, if it does, you know that it didn't destroy etc that data otherwise it couldn't give it you back. And, again, the compiler is enforcing all that.
> Why can mutable reference be only handed out once?
Let's say you have 2 references to arrays of some type T and you want to copy from one array to the other. Will it do what you expect? It probably will if they are distinct but what if they overlap? memcpy has this issue and "solves" it by making overlapped copies undefined. With a single mutable reference system, it's not possible to get that scenario because, if there were 2 overlapping references, you couldn't write to either of them. And if you could write to one, then the other has to be a reference (mutable or not) to some other object.
There are also optimisation opportunities if you know 2 objects are distinct. That's why C added the restrict keyword.
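As a minimal illustration, safe Rust exposes provably disjoint views instead of trusting the caller:

fn main() {
    let mut buf = vec![1, 2, 3, 4];
    // Two overlapping mutable slices of `buf` won't compile, but
    // split_at_mut hands back two provably disjoint halves:
    let (front, back) = buf.split_at_mut(2);
    front.copy_from_slice(&[9, 9]); // like memcpy, but overlap is impossible
    back.copy_from_slice(&[7, 7]);
    println!("{buf:?}"); // [9, 9, 7, 7]
}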
> If I'm only using a single thread
If you're just knocking up small scripts or whatever then a lot of this is overkill. But if you're writing libraries, large applications, multi-dev systems etc then you may be single threaded but who's confirming that for every piece of the system at all times? People are generally really rubbish at that sort of long range thinking. That's where these more automated approaches shine.
> hide information...Why, from whom?
The main reason is that you want to expose a specific contract to the rest of the system. It may be, for example, that you have to maintain invariants eg double entry book-keeping or that the sides of a square are the same length. Alternatively, you may want to specify a high level algorithm eg matrix inversion, but want it to work for lots of varieties of matrix implementation eg sparse, square. In these cases, you want your consumer to be able to use your objects, with a standard interface, without them knowing, or caring, about the detail. In other words you're hiding the implementation detail behind the interface.
It may be impractical for some tasks but the power:complexity rate is very impressive. Lua feels similar in that regard.
This is what's stumped me when learning Rust. It could be the resources I used, which introduced macros early on with no explanation.
But why would you think all else is equal? You might not agree with the tradeoffs Rust makes, and it's not as if there's a perfect language for all uses, but it absolutely makes hard software easier to write.
I've had the opportunity to debug weird crazy memory corruption, as well as "wow it's hard to figure out how to design this in Rust", and having come to terms with things much like this blog post I now get more work done, with less corruption _and_ design problems.
I know it throws people off, and the compiler error can be confusing, but actual explicit lifetimes as part of a signature are less common than you'd expect.
To me it's a code smell to see a lot of them.
FWIW in the case where you're not separating code via a dynamic library boundary, you give the compiler an opportunity to optimise across those unsafe usages, e.g. inlining opportunities for the unsafe code into callers.
I also imagine it’s much faster for the type-checking pass of the compiler to just look at the signatures.
Rust's system of ownership and borrowing effectively lets you hand out "permissions" for data access. The owner gets the maximum permissions, including the ability to hand out references, which grant lesser permissions.
In some cases these permissions are useful for performance, yes. The owner has the permission to eagerly destroy something to instantly free up memory. It also has the permission to "move out" data, which allows you to avoid making unnecessary copies.
But it's useful for other reasons too. For example, threads don't follow a stack discipline; a callee is not guaranteed to terminate before the caller returns, so passing ownership of data sent to another thread is important for correctness.
And naturally, the ability to pass ownership to higher stack frames (from callee to caller) is also necessary for correctness.
In practice, people write functions that need the least permissions necessary. It's overwhelmingly common for callees to take references rather than taking ownership, because what they're doing just doesn't require ownership.
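The thread case in (sketch) code:

use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // The spawned thread may outlive this stack frame, so it must
    // own the data: `move` transfers ownership into the closure.
    let handle = thread::spawn(move || {
        println!("sum = {}", data.iter().sum::<i32>());
    });
    // println!("{data:?}"); // error[E0382]: value moved into the closure
    handle.join().unwrap();
}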
That bullet, at its most charitable, defines the "idealized goal" of the borrow checker. The actual device is much less capable (as it must be, since the goal is formally undecidable!), and "learning Rust" requires understanding how.
I would say includes/standard library/compilers going crazy on UB is part of the infrastructure or ecosystem around the language and not the language itself. And I agree absolutely, while C the language is beautiful, the infra around it is atrocious.
If a "learning curve" is a simple X-Y graph with "time" and "knowledge" being on each axis respectively, then what sort of learning curve is preferable: a flatter one or a steep one?
Clearly, if you graph large increases of knowledge over shorter periods of time, a steeper learning curve is more preferable. "Flattening the learning curve" makes it worse!
But for some reason, people always reverse this meaning, and so the common idiom breaks down for people who try to reason it out.
I'd advise these people to personally figure out why they're so against compiler suggestions. Do you want to do things differently? What part stops you from doing that?
fn foo() -> String
fn bar() -> Result<String, Error>

I can't just treat `bar` the same as `foo` because it doesn't give me a String; it might have failed to give me a String. So I need to give it special handling to get a String.

async fn qux() -> String

This also doesn't give me a String. It gives me a thing that can give me a String (an `impl Future<Output=String>`, to be more specific), and I need to give it special handling to get a String.

All of these functions have different colours, and I don't really see why it's suddenly a big issue for `qux` when it wasn't for `bar`.
I want rust to be adopted and I believe companies should force it, but you will not get adoption from young developers and even less from senior C++ developers.
Not to mention rewriting existing C++ code in Rust, whose cost would be astronomical, although I do believe companies should invest in rewriting things in Rust because it's the right thing to do.
If you're writing purely safe code, I will say this is true in a practical sense, but you can almost always use unsafe to write whatever you think rust won't let you do.
I think here you expanded on the original point in a good way. I would then continue with adding additional set of points covering the issue in greater detail and a set of examples of where this commonly happens and how to solve it.
> The concurrency runtime is implemented by “Tokio” external library.
Scare quotes around Tokio?
You can't use Rails without Rails or Django without Django.
The reason Rust keeps this externally is because they didn't want to bake premature decisions into the language. Like PHP's eternally backwards string library functions or Python's bloated "batteries included" standard library chock full of four different XML libraries and other cruft.
> instead, it has a borrow checker that insists on exactly one owner of every object at all times, makes all memory allocations explicit, and exposes all these details to the programmer in the type system
Table stakes. Everyone knows this. It isn't hard or scary, it just takes a little bit of getting used to. Like a student learning programming for the first time. It's not even that hard. Anyone can learn it.
It's funny people complain about something so easy. After you learn to ride the bike, you don't complain about learning to ride the bike anymore.
> Rust is very different.
Oh no!
Seriously this is 2025. I can write async Rust without breaking a sweat. This is all being written by people who don't touch the language.
Rust is not hard. Stop this ridiculous meme. It's quite an easy language once you sit down and learn it.
> - another think coming -> another thing coming
Fascinating. I had never come across this before. I've only ever seen people use "another thing coming".
When you create a thing, you allocate it. The creator owns it and destroys it, unless it passes that ownership on to something else (which C++ RAII doesn't do as cleanly as Rust can).
Then it does some other nice things to reduce every sharp edge it can:
- No nulls, no exceptions. Really good Option<T> and Result<T,E> that make everything explicit and ensure it gets handled. Wonderful syntactic sugar to make it easy. If you ever wondered if your function should return an error code, set an error reference, throw an exception - that's never a design consideration anymore. Rust has the very best solution in the business. And it does it with rock solid safety.
- Checks how you pass memory between threads with a couple of traits (Send, Sync). If your types don't implement those (usually with atomics and locks), then your code won't pass the compiler checks. So multithreaded code becomes provably safe at compile time to a large degree. It won't stop you from deadlocking if you do something silly, but it'll solve 99% of the problems.
- Traits are nicer than classes. You can still accomplish everything you can with classic classes, but you can also do more composition-based inheritance that classes don't give you by bolting traits onto anything you want.
- Rust's standard library (which you don't have to use if you're doing embedded work) has some of the nicest data structures, algorithms, OS primitives, I/O, filesystem, etc. of any language. It's had 40 years of mistakes to learn from and has some really great stuff in it. It's all wonderfully cross-platform too. I frequently write code for Windows, Mac, and Linux and it all just works out of the box. Porting is never an issue.
- Rust's functional programming idioms are super concise and easy to read. The syntax isn't terse.
- Cargo is the best package manager on the planet right now. You can easily import a whole host of library functionality, and the management of those libraries and their features is a breeze. It takes all of sixty seconds to find something you want and bring it into your codebase.
- You almost never need to think about system libraries and linking. No Makefiles, no CMake, none of that build complexity or garbage. The compiler and cargo do all of the lifting for you. It's as easy as Python. You never have to think about it.
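For instance, a minimal sketch of that Option/Result point (the function is hypothetical):

use std::num::ParseIntError;

// The failure case is part of the signature; the caller can't forget it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.parse()?; // `?` propagates the error upward
    Ok(port)
}

fn main() {
    // No exceptions, no error codes, no out-parameters: just a value
    // that must be unpacked before use.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}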
I guess I'm trying to say that analogy is of limited use here.
The potential is definitely there; it looks like it might compete with C++ in the quant space.
But we already have OCaml from Jane Street. At least for me, if you're going to tell me it's time to learn an extremely difficult programming language, I need to see the money.
So far my highest-paid programming job was in Python.
I run two intermediate programming courses, one where we teach C++, and another where we teach Rust. In the Rust course, by the first week they are writing code and using 3rd-party libraries; whereas in the C++ course we spend a lot of time dealing with linker errors, include errors, segfaults, etc. The learning curve for C/C++ gets steep very fast. But with Rust it's actually quite flat until you have to get into borrowing, and you can even defer that understanding with clone().
By the end of the semester in the C++ course, students' final project is a file server; they can get there in 14 weeks.
In Rust the final project is a server that implements LSP, which also includes an interpreter for a language they design. The submissions for this project are usually much more robust than the submissions for the C++ course, and I would attribute this difference to the designs of the languages.
It’s both identical and very different, depending on the level of detail you want to get into. Conceptually, it’s identical. Strictly speaking, the implementations differ in a few key ways.
Says who? Programming languages come in all shapes and sizes, and each has their tradeoffs. Rust's tradeoff is that the compiler is very opinionated about what constitutes a valid program. But in turn it provides comparable performance to C/C++ without many of the same bugs/security vulnerabilities.
- the object is destroyed
- the program core dumps
- it is a compile time error
Assuming the best possible outcome in the case of missing information turns out to be a bad strategy in general. The answer is straightforward: bugs exist. Even in formally proven software, mistakes can be made. Nothing is perfect.
Additionally, when people talk about memory safety as a property, they mean safety by default. All languages contain some amount of non-proven unsafe code in their implementation, or via features like FFI. Issues can arise when these two worlds interact. Yet real-world usage shows that these cases are quite few compared to languages without these defaults. Those exceptions are also the source of the CVEs you're talking about.
In JavaScript you can declare a variable, set it to 5 (a number), and then set it to "hello" (a string), but that's not allowed in e.g. C. Is C constricting me too much because I have to do it in C's way?
Perhaps it will become prevalent enough that it will make sense in the future.
Placing restrictions on the programs a programmer can write is not abusive. The rules exist to ensure clarity, safety, performance, and design goals. In an abusive relationship, rules are created to control or punish behavior, often changing capriciously and without reason or consultation. By contrast, Rust is designed by a group of people who work together to advance the language according to a set of articulated goals. The rules are clear and do not change capriciously.
Abuse causes emotional trauma, isolation, and long-term harm. Rust may cause feelings of frustration and annoyance, it may make you a less efficient programmer, but using it does not cause psychological or physical harm found in abusive relationships.
1. println!() is a macro, so if you want to print anything out you need to grapple with what that ! means, and why println needs to be a macro in Rust.
2. Macros are important in Rust, they're not a small or ancillary feature. They put a lot of work into the macro system, and all Rust devs should aspire to use and understand metaprogramming. It's not a language feature reserved for the upper echelon of internal Rust devs, but a feature everyone should get used to and use.
> in-place mutability
I’m not sure what this means.
> why encourage stack allocation
This is the same as C++: things are stack-allocated by default and only put on the heap if you request it. Control is important.
> what problems with C++ does it solve and at what cost
The big one here is memory safety by default. You cannot have dangling pointers, iterator invalidation, and the like. The cost is that this is done via compile time checks, and you have to learn how to structure code in a way that demonstrates to the compiler that these properties are correct. That takes some time, and is the difficulty people talk about.
Rust also flips a lot of defaults that makes the language simpler. For example, in C++ terms, everything is trivially relocatable, which means Rust can move by default, and decided to eliminate move constructors. Technically Rust has no constructors at all, meaning there’s no rule of 3 or 5. The feeling of Rust code ends up being different than C++ code, as it’s sort of like “what if Modern C++ but with even more functional influence and barely any OOP.”
This is a strange one - I thought the rust compiler had famously helpful error messages, so why would I want to pore over my code looking for stupid typos when I can let the compiler find them for me? I am guaranteed to make stupid typos and want the computer to help me fix them.
> _who_ is the owner. Is it a stack frame?
I don’t think that it’s helpful to call a stack frame the owner in the sense of the borrow checker. If the owner was the stack frame, then why would it have to borrow objects to itself? The fact that the following code doesn’t compile seems to support that:
fn main() {
    let a: String = "Hello".to_owned();
    let b = a;
    println!("{}", a); // error[E0382]: borrow of moved value: `a`
}
User lucozade’s comment has pointed out that the memory where the object lives is actually the thing that is being owned. So that can’t be the owner either. So if neither a) the stack frame nor b) the memory where the object lives can be called the owner in the Rust sense, then what is?
Could the owner be the variable to which the owned chunk of memory is bound at a given point in time? In my mental model, yes. That would be consistent with all borrow checker semantics as I have understood them so far.
Feel free to correct me if I’m not making sense.
I have written C for decades and love the language, but Rust has convinced me that we need to evolve beyond the 70s. There's no excuse anymore.
If you have already gotten to the journeyman or mastery level with C or C++, Rust is going to be easy to learn (it was for me). The concepts are simply being made explicit rather than implicit (ownership, lifetimes, traits instead of vtables, etc.).
I had a better time writing a raycaster and later a path tracer, although by then I had learned to avoid dealing with the borrow checker…
Or many other locations, by many other authors, at many other times, or simultaneously.
'I'm not as good as learning things at you'
The "function coloring problem" people are harming entire ecosystems. In JS for example there are very popular frameworks thay choose to wrap async in sync execution by throwing when encountering async values and re-running parts of the program when the values resolve. The crazy part with these solutions trying to remove coloring, is they don't, they hide it (poorly). So instead of knowing what parts of a program are async you have no idea.
It's very different from a lot of the languages that people are typically using, but all the big features and syntax came from somewhere else. See:
>The type system and the borrowing mechanism are good examples. Unless you are a type system nerd a lot of that is just gobblygook to the average Python or Javascript user.
Well, yeah, but they generally don't like types at all. You won't have much knowledge to draw on if that's all you've ever done, unless you're learning another language in the same space with the same problems.
When you pass an argument to a function in Rust, or assign a value into a struct or variable, etc. you are moving it (unless it's Copy). That's extremely different from any other programming language people are used to, where things are broadly pass by value pass by reference and you can just do that as much as you want and the compiler doesn't care. It's as if in C++ you were doing std::move for every single argument or assignment.
And so as a programmer you have to shift to a mindset where you're thinking about that happening. This is profoundly unintuitive at first but becomes habit over time.
Then having that habit, it's actually a nice reasoning skill when you go back to working in other languages.
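A tiny illustration of that mindset shift (illustrative only):

fn consume(s: String) { // takes ownership "by value"
    println!("{s}");
} // `s` is dropped here

fn main() {
    let greeting = String::from("hello");
    consume(greeting); // moved, as if std::move were implied
    // println!("{greeting}"); // would not compile: value moved above
}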
Destructors should be as simple and side-effect free as possible, with the exception of things like locks or file handles where the expectation is very clear that the object going out of scope will trigger a release action.
For me, I almost never write "for loops" and "if statements" in Rust; instead I use "functional iterators" and "match expressions", which interface with the borrow checker more nicely.
For example, iterating over an array while modifying it is a common pattern in imperative languages that compiles fine but often causes hard to reason about logic errors during runtime. In Rust, such a thing would cause compile time errors. So instead you rewrite to be more functional, it compiles, and then the magic is it just works, which is a common property of functional languages like Haskell.
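Concretely, a small sketch of that rewrite, with made-up data:

fn main() {
    let scores = vec![3, 7, 10, 2];

    // Borrow-checker-friendly: read the old Vec and produce a new one,
    // instead of mutating `scores` while iterating over it.
    let boosted: Vec<i32> = scores
        .iter()
        .filter(|&&s| s >= 3) // drop entries below the threshold
        .map(|&s| s * 2)      // transform the survivors
        .collect();

    assert_eq!(boosted, vec![6, 14, 20]);
}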
IMO a lot of the consternation about the cost of the learning curve comes from developers not having realized that once you get past it, the benefit is that your code is more often just correct, and therefore you run into fewer runtime bugs. In other languages you get the code compiling faster, but then you spend a great deal of time debugging things at runtime which you didn't anticipate at compile time.
I can implement the non-IO parts of Brainfuck with safe Rust, so it is Turing Complete. That doesn't change the fact that there are useful programs not expressible in it.
With functions, for instance, sometimes people get carried away with very straightforward linear code and atomize it into a hundred little functions that all call one another, all in the name of reducing code duplication. Doing so is an abuse of functions, but one wouldn't say that functions should be used sparingly.
I think that macros are an area where many programmers don't have a lot of experience, and so they also throw out some best practices in order to wrap their heads around what they're doing.
But macros can be very helpful if properly applied, and Rust makes that more ergonomic, and safe, to do.
For example, if I had to write 100 functions that are all similar in structure but have slightly different signatures, I would reach for a macro (see the sketch below). I don't think it reduces clarity at all, and it increases maintainability because I don't have to change code in 100 functions if an adjustment needs to be made.
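A hedged sketch of that scenario, with entirely hypothetical names:

pub struct Config {
    retries: u32,
    timeout_ms: u64,
}

// One definition stamps out the whole family of near-identical functions;
// a structural change now happens in exactly one place.
macro_rules! getter {
    ($name:ident, $field:ident, $ty:ty) => {
        pub fn $name(cfg: &Config) -> $ty {
            cfg.$field
        }
    };
}

getter!(retries, retries, u32);
getter!(timeout_ms, timeout_ms, u64);
// ...and so on for the rest of the family.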
Another area where macros are very useful is in creating DSLs within Rust. This is usually the domain of LISPs, but it's an ergonomic way to write tests or little scripts.
I'm starting to wonder what I'm missing out on by doing this. Not addressed in the article: any tips for using the more abstract features, like Cow etc.? I hit a problem with this today, where a lib used Cow<&str> instead of String, and the lifetime errors bubbled up into my code.
edit: I found this amusing about the article: They demo `String` as a param, and `&str` as a return type for triggering errors; you can dodge these errors simply by doing the opposite!
I probably wouldn't have been able to do that with Rust if I hadn't been an Erlang person previously. Rust seems like Erlang minus the high-overhead Erlangy bits plus extreme type signatures and conscious memory-handling. Erlang where only "zero-cost abstractions" were provided by the language and the compiler always runs Dialyzer.
The trouble with calling .lock() is that there is a potential for deadlock. There are some people working on static analysis for deadlock prevention, which is a dual of the static analysis for double borrow protection problem. We're maybe a PhD thesis or two from a solution. Here's some current research, out of Shanghai.[1] Outlines the theory, but code does not yet seem to be available.
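For reference, a minimal sketch of the hazard (not taken from that paper): two threads taking the same pair of locks in opposite orders, which compiles cleanly.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t = thread::spawn(move || {
        let _ga = a2.lock().unwrap(); // this thread: lock a, then b
        let _gb = b2.lock().unwrap();
    });

    {
        let _gb = b.lock().unwrap(); // main: lock b, then a -- opposite order,
        let _ga = a.lock().unwrap(); // so this can deadlock under bad timing
    }
    t.join().unwrap();
}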
"Flattening the learning curve" is perhaps a wrong metaphor - you can't actually change what needs to be learned; you can only make it easier to learn.
Saying something that is usually right and can be corrected later is a standard pedagogical approach - see https://en.wikipedia.org/wiki/Wittgenstein%27s_ladder . To extend the metaphor, the ladder is there to help you climb the learning curve.
As someone who self-learned Rust around 1.0, after half a year of high school level Java 6, I’ve never had the problems people (even now) report with concepts like the ownership system. And that despite Rust 1.0 being far more restrictive than modern Rust, and learning with a supposedly harder to understand version of “The Book”.
I think it’s because I, and other early Rust learners I’ve talked to about this, had little preconceived notions of how a programming language should work. Thus the restrictions imposed by Rust were just as “arbitrary” as any other PL, and there was no perceived “better” way of accomplishing something.
Generally, the more popular languages like JS or Python allow you to mold the patterns you want to use sufficiently so that they fit. At least to me, with languages like Rust or Haskell, if you try to do this with concepts that are too different, the code gets pretty ugly. This can give the impression that the PL “does not do what you need” and “imposes restrictions”.
I also think that this goes the other way, and might just be a sort of developed taste.
Even pretending that they did, I don't know if "appreciat[ing]" Rust means that you're saying that you "understand" it. It seems like choosing a different word in the second sentence of a two-sentence argument may be a subtle way of hinting that you don't know Rust, although you've read articles about Rust and made judgements about it. If this is true, then it doesn't strongly support the first statement.
And people trip over this immediately when they start writing Rust, because that kind of code is pervasive in other environments. Thus statements like "Rust just doesn't like dangling pointers" are unhelpful, because while it's true it's not sufficient to write anything but the most trivial code.
[1] Or basically any graph-like data structure that can't be trivially proven to be acyclic; even lots of DAG-like graphs that "should" be checkable aren't.
Well, no; in my experience the difficulty overwhelmingly comes from thinking about the semantics. I.e.: these two clients currently share a mutable object; should they observe each others' mutations? Or: if I clone this object, will I regret not propagating the change to other clients?
Usually the advanced features come in when you’re looking for better performance. It helps performance a lot to use reference types (borrowed types) to eliminate deep copies (and allocations) with .clone() in a loop, for example.
Library authors usually don’t have the luxury of knowing how their code will be used downstream, so diligent authors try to make the code reasonably performant and use these advanced language features to do so. You never know if the consumer of your library will use your function in a hot loop.
Yeah, and that model is rather old: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule In practice, complex software systems have been written in multiple languages for decades. The requirements of performance-critical low-level components and high-level logic are too different and they are in conflict.
> you give the compiler an opportunity to optimise across those unsafe usages
One workaround is better design of the DLL API. Instead of implementing performance-critical outer layers in C#, do so on the C++ side of the interop, possibly injecting C# dependencies via function pointers or an abstract interface.
Another option is to re-implement these smaller functions in C#. Modern .NET runtime is not terribly slow; it even supports SIMD intrinsics. You are unlikely to match the performance of an optimised C++ release build with LTO, but it’s unlikely to fall significantly short.
So the hard part isn't getting code to work, it is ensuring it is only working in the intended ways, even when your co-worker (or your future self) acts like an unhinged, unrestricted idiot. And that means using enforced type systems, validation, strict rules.
If you are a beginner cobbling together hobby programs, an anything-goes approach to software may feel nice, like freedom, but beyond a certain level of complexity it will land you in a world of pain.
Any great C programmer whose code I ever had the pleasure of reading has a plethora of unwritten rules they enforce in their heads. And these rules exist for a reason. When you have a language that enforces those rules for you, that gives you the freedom to dare more, not less, as certain things would be very risky with manual checking.
It is like the foam pit in extreme sports. While it is certainly more manly to break your neck in ten consecutive triple-backflip tries, you are going to get there faster with a foam pit where you can try things out. And the foam pit transforms the whole scene, because people can now write code that before would crash and burn, without feeling restricted. Funny how that goes.
> I think it’s because I, and other early Rust learners I’ve talked to about this, had little preconceived notions of how a programming language should work. Thus the restrictions imposed by Rust were just as “arbitrary” as any other PL, and there was no perceived “better” way of accomplishing something.
It depends. In some cases, you aren't missing anything. In others, you may lose a bit of efficiency by doing some otherwise un-needed copying. Depending on what you're doing that may be irrelevant.
> Any tips for using the more abstract features, like Cow etc? I hit a problem with this today, where a lib used Cow<&str> instead of String, and the lifetime errors bubbled up into my code.
You can do the same thing as you do with &str: calling into_owned on the Cow gives you a String back.
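A small sketch, with a hypothetical library function that sometimes borrows and sometimes allocates:

use std::borrow::Cow;

fn strip_debug_prefix(s: &str) -> Cow<'_, str> {
    match s.strip_prefix("debug:") {
        Some(rest) => Cow::Owned(rest.to_uppercase()), // had to allocate
        None => Cow::Borrowed(s),                      // no allocation needed
    }
}

fn main() {
    // into_owned() yields a String either way, sidestepping the lifetime.
    let owned: String = strip_debug_prefix("debug:hello").into_owned();
    println!("{owned}");
}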
1) those who only know Java
2) those who know Java and were taught C++ by me. The way I teach that course, they are very familiar with pre-modern C++ because we also learn C.
3) those who know Java and C++, but learned them on their own.
It's the last group who has the most trouble. IME the exact issue they struggle with is the idea of shared mutable state. They are accustomed to handing out pointers to mutable state like candy, and they don't worry about race conditions, or all the different kinds of memory errors that can occur and lead to vulnerabilities. They don't write code that is easily refactored, or modular. They have a tendency to put everything into a header or one main.cpp file because they can't really get their head around the linking errors they get.
So when they try to write code this way in Rust, the very first thing they encounter is a borrow error related to their liberal sharing of state, and they can't understand why they can't just write code the way they want because it had been working so well for them before (in their very limited experience).
Pedagogically what I have to do is unteach them all these habits and then rebuild their knowledge from the ground up.
On some workloads (think calls not possible to inline within a hot loop), I found LTO to be a requirement for C code to match C# performance, not the other way around. We've come a long way!
(if you ask whether there are any caveats - yes, the JIT is able to win additional perf points by not being constrained to SSE2/4.2 and by shipping more heavily vectorized primitives OOB, which allow single-line changes that outpace what the average C library has access to)
Ah, well, a shame they didn't see the failing tests for the C++ code first ;)
(You could build your own custom data types that have type metadata in a shared header and an addition function that uses it, but then you're building your own custom language on top which isn't really the same thing.)
So yes C really does restrict you in some ways that Javascript doesn't.
Happens all the time in modern programming:
callee(foo_string + "abc")
Argument expression foo_string + "abc" constructs a new string. That is not captured in any variable here; it is passed to the callee. Only the callee knows about it.
This situation can expose bugs in a run-time's GC system. If callee is something written in a low-level language that is responsible for indicating "nailed" objects to the garbage collector, and it forgets to nail the argument object, GC can prematurely collect it, because nothing else in the image knows about that object: only the callee. The bug won't surface in situations like callee(foo_string) where the caller still has a reference to foo_string (at least if that variable is live: has a next use).
To have a safe reference to the cell of a vector, we need a "locative" object for that, which keeps track of v, and the offset 0 into v.
Yeah, I observed that too. As far as I remember, that code did many small memory allocations, and .NET GC was faster than malloc.
However, last time I tested (used .NET 6 back then), for code which crunches numbers with AVX, my C++ with SIMD intrinsics was faster than C# with SIMD intrinsics. Not by much but noticeably, like 20%. The code generator was just better in C++. I suspect the main reason is that the .NET JIT compiler doesn’t have time for expensive optimisations.
Yeah, there are heavy constraints on how many phases there are and how much work each phase can do. Besides inlining budget, there are many hidden "limits" within the compiler which reduce the risk of throughput loss.
For example - JIT will only be able to track so many assertions about local variables at the same time, and if the method has too many blocks, it may not perfectly track them across the full span of them.
GCC and LLVM are able to leisurely repeat optimization phases, whereas RyuJIT avoids it (even if some phases replicate some optimizations that happened earlier). This will change once the "Opt Repeat" feature gets productized[0]; we will most likely see it in NativeAOT first, as you'd expect.
On matching the codegen quality GCC produces for vectorized code: I'm usually able to replicate it by iteratively refactoring the implementation and quickly checking its disasm with the Disasmo extension. The main catch with this type of code is that GCC, LLVM, and ILC/RyuJIT each have their own quirks around SIMD (e.g. does the compiler mistakenly rematerialize vector-constant construction inside the loop body, undoing your hoisting of its load?). Previously I thought this was a weakness unique to .NET, but then I learned that GCC and LLVM tend to be vulnerable to it too, and even regress across updates, as sometimes happens in SIMD edge cases in .NET; it is certainly not as common there, though. What GCC/LLVM are better at is abstracted SIMD code. Once you start exhausting available registers, less-than-optimal register allocation gives you spills. You can also run into technically correct behavior around vector shuffles, where the JIT needs to replicate portable behavior but fails to see that your constant does not need it, so you have to reach for platform-specific intrinsics to work around it.
Rust flaming is just so terribly exhausting. No matter how reasonable and obvious a point is, there's always someone willing to go to the mattresses in a forty-comment digression about how Rust is infallible.
In contrast, there was hardly ever a computer engineering class where I could ignore raw memory addresses. Whether it was about optimizing a memory structure for cache layout or implementing some algorithm efficiently on a resource-anemic (mmu-less) microcontroller, memory usage was never automatic.
For me, programming with C++ was like building castles out of sand. I could never make them tall enough before they would collapse under their own weight.
But with Rust, I leveled up my abilities and built a program larger than I ever thought possible. And for that I'm thankful to Rust for being a language that actually makes sense to me.
And I suspect the people who are familiar with seeing something like `dict[str, int]` can map that onto something like `HashMap<String, i32>` without actually straining their brains, and grow from there.
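For instance, a toy rendering of that mapping:

use std::collections::HashMap;

fn main() {
    // Rust's spelling of Python's `dict[str, int]`.
    let mut counts: HashMap<String, i32> = HashMap::new();
    counts.insert("apples".to_string(), 3);
    *counts.entry("bananas".to_string()).or_insert(0) += 1;
    println!("{:?}", counts); // e.g. {"apples": 3, "bananas": 1} (order unspecified)
}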
And some people love that! It just ain't for everyone.
I'm seeking to draw a distinction between disliking Rust for the real (or perceived) difficulty of learning/using it, and disliking it on principle, because you don't like its trade-offs, approach to achieving its aims, syntax, type system, etc. This dichotomy is meaningful irrespective of the level of experience one has with Rust, beyond a certain level (and for the record I believe I have the requisite level of knowledge of Rust to have an informed opinion on it).
For example, I don't know much Haskell. It seems to me (and to many other I read online) like it would be difficult to learn (and maybe use), although I'm familiar with functional languages in general. However, based on the little I've learned about it so far, it is a language I'd absolutely love to dig much deeper into as time permits, because almost everything about it makes so much sense to me.
Here's something amazing: I started to design my ideal language before I started learning Haskell, and almost every language construct in Haskell I learn about seems to match exactly how I'd designed my language, by coincidence (even down to keywords like "where", "do" blocks, etc.)
I suspect if you have C++ experience it's simpler to grok, but most of the stuff I wrote was C, and a bunch of the stuff Rust did was not familiar to me.
> // so it can't be grown in-place
> let v2 = v.clone();
I doubt Rust guarantees that “put something after v on the heap” behavior.
The whole idea of a heap is that you give up control over where allocations happen in exchange for an easy way to allocate, free and reuse memory.
I'm not making anything like the argument you seem to think I am. I'm only making a pragmatic observation about what real-world coding is like, based on my own experience.
My experience learning Rust has been like death by 1000 cuts: there are so many small, simple problems that you just have to run into in the wild in order to understand. There's no simple set of rules that can prepare you for all of these situations.
This is the opposite of what I was suggesting though; those function pointers or abstract interfaces inhibit the kind of optimisations I was suggesting (e.g. inlining causing dead code removal of bounds checks, or inlining comparison functions into sort implementations, classics).
EDIT: that said, it's definitely still possible to not let it impact performance, it just takes being somewhat careful when making the interface, which you don't have to think about if it's all the same compiler/link step
Yes. There are lots of ways an object might be owned:
- a local variable on the stack
- a field of a struct or a tuple (which might itself be owned on the stack, or nested in yet another struct, or one of the other options below)
- a heap-allocating container, most commonly basic data structures like Vec or HashMap, but also including things like Box (std::unique_ptr in C++), Arc (std::shared_ptr), and channels
- a static variable -- note that in Rust these are always const-initialized and never destroyed
I'm sure there are others I'm not thinking of.
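A compact sketch touching several of these (all names invented for illustration):

struct Inner {
    data: String, // owned by a struct field
}

static GREETING: &str = "hi"; // const-initialized, never destroyed

fn main() {
    let local = String::from("stack"); // owned by a local variable
    let boxed = Box::new(local);       // ownership moved into a heap Box
    let mut items = Vec::new();        // a container that owns its elements
    items.push(Inner { data: *boxed }); // moved again, into the Vec
    println!("{} {}", GREETING, items[0].data);
} // `items` is dropped here, which drops each Inner and its String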
> Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first
Here are some example situations where you'd "pass by value" in Rust:
- You might be dealing with "Copy" types like integers and bools, where (just like in C or C++ or Go) values are easier to work with in a lot of common cases.
- You might be inserting something into a container that will own it. Maybe the callee gets a reference to that longer-lived container in one of its other arguments, or maybe the callee is a method on a struct type that includes a container.
- You might pass ownership to another thread. For example, the main() loop in my program could listen on a socket, and for each of the connections it gets, it might spawn a worker thread to own the connection and handle it. (Using async and "tasks" is pretty much the same from an ownership perspective.)
- You might be dealing with a type that uses ownership to represent something besides just memory. For example, owning a MutexGuard gives you the ability to unlock the Mutex by dropping the guard. Passing a MutexGuard by value tells the callee "I have taken this lock, but now you're responsible for releasing it." Sometimes people also use non-Copy enums to represent fancy state machines that you have to pass around by value, to guarantee whatever property they care about about the state transitions.
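The thread case in miniature (illustrative only):

use std::thread;

fn main() {
    let conn = String::from("pretend this is a connection");
    let worker = thread::spawn(move || {
        // `conn` was moved into the closure: this thread owns it now
        // and drops it when the closure returns.
        println!("handling {conn}");
    });
    worker.join().unwrap();
    // `conn` is not usable here anymore; ownership left with the worker.
}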
For what it's worth, it appears this was considered for Rust at some point but the devs decided against it. As described by Steve Klabnik in 2018 [0]:
> This was called “early drop”, and we didn’t implement it because of worries about unsafe code. Yes, the compiler could tell for safe code, and it would be fine, but unsafe code cannot, by definition, be checked.
[0]: https://users.rust-lang.org/t/drop-values-as-soon-as-possibl...
Each element is: key, value, linked-list node for the hash-table bucket, linked-list node for the LRU. Hash table to look up elements. Each element is a member of both the hash table and the linked list. The linked list is used as an LRU for freeing memory when needed.
LRU never traversed but often needs removal and reinsertion.
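One common safe-Rust workaround, sketched here only as a shape (operations elided, names invented): keep the nodes in a Vec arena and link them by index instead of by pointer, so removal and reinsertion become index bookkeeping rather than pointer surgery.

use std::collections::HashMap;

struct Node {
    key: u64,
    value: String,
    prev: Option<usize>, // neighbour slots in `nodes`
    next: Option<usize>,
}

struct LruCache {
    nodes: Vec<Node>,           // arena owning every node
    index: HashMap<u64, usize>, // key -> slot in `nodes`
    head: Option<usize>,        // most recently used
    tail: Option<usize>,        // least recently used, evicted first
}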
I would say you're correct that ownership is something that only exists on the language level. Going back to the documentation: https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...
The first part that gives a hint is this
>Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks.
This clearly means ownership is a concept in the Rust language. Defined by a set of rules checked by the compiler.
Later:
>First, let’s take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:
>
>*Each value in Rust has an owner*.
>There can only be one owner at a time.
>*When the owner goes out of scope*, the value will be dropped.
So the owner can go out of scope and that leads to the value being dropped. At the same time each value has an owner.
So from this we gather. An owner can go out of scope, so an owner would be something that lives within a scope. A variable declaration perhaps? Further on in the text this seems to be confirmed. A variable can be an owner.
>Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.
Ok, so variables can own values. And borrowed variables (references) are owned by the variables they borrow from; this much seems clear. We can recurse all the way down. What about up? Who owns the variables? I'm guessing the program or the scope, which in turn is owned by the program.
So I think variables own values directly, references are owned by the variables they borrow from. All variables are owned by the program and live as long as they're in scope (again something that only exists at program level).
Eschew flamebait. Avoid generic tangents. Omit internet tropes.
And then every time the underlying data moves, the program's runtime either needs to do a dynamic lookup of all pointers to that data and then iterate over all of them to point to the new location, or otherwise you need to introduce yet another layer of indirection (or even worse, you could use linked lists). Many languages exist in domains where they don't mind paying such a runtime cost, but Rust is trying to be as fast as possible while being as memory-safe as possible.
In other words, pick your poison:
1. Allow mutable data, but do not support direct interior references.
2. Allow interior references, but do not allow mutable data.
3. Allow mutable data, but only allow indirect/dynamically adjusted references.
4. Allow both mutable data and direct interior references, force the author to manually enforce memory-safety.
5. Allow both mutable data and direct interior references, use static analysis to ensure safety by only allowing references to be held when mutation cannot invalidate them.
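Option 5 is Rust's choice. A two-line illustration of what the static analysis rejects:

fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // direct interior reference into v's buffer
    // v.push(4);      // rejected: push may reallocate and invalidate `first`
    println!("{first}");
}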
Coming from that background, these rules sound fantastic. There's been a lot of work put into C++ the past few years to try and make these things easier to enforce, but it's still difficult to do right even with smart pointers.
Right. That's the key here. "Move semantics" can let you move something from the stack to the heap, or the heap to the stack, provided that a lot of fussy rules are enforced. It's quite common to do this. You might create a struct on the stack, then push it onto a vector, to be appended at the end. Works fine. The data had to be copied, and the language took care of that. It also took care of preventing you from doing that if the struct isn't safely move copyable.
C++ now has "move semantics", but for legacy reasons, enforcement is not strict enough to prevent moves which should not be allowed.
Curiously enough, this is also true of Python - just less obvious because it doesn't have any variables that aren't pointers, and most operators perform an implicit dereference.
What used to be called "string" in Python 2 is no longer called that, precisely so as to avoid unnecessary confusion. It's called "bytes", which is why the question of "why do I have to convert it to string?" doesn't arise.
Python 3.11.12 (main, Apr 8 2025, 14:15:29) [Clang 16.0.0 (clang-1600.0.26.6)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> "1" + 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str
For example, C++ differentiates between #1 and #2 (although it has woefully inadequate out-of-box support for #3).
Python (> 3) calls #1 bytes / bytearray, and calls #3 string. #2 is only really supported for FFI with C (i.e. ctypes.c_char_p and friends)
I suspect they'll eventually get a fast and live indicator to the user of where all the references are "going" as they type.
I have a book. I own it. I can read it, and write into the margin. Tear the pages off if I want. I can destroy it when I am done with it. It is mine.
I can lend this book in read-only to you and many others at the same time. No modifications possible. Nobody can write to it, not even me. But we can all read it. And a borrower can lend it recursively, in read-only, to anybody else.
Or I can lend this book exclusively to you in read/write. Nobody but you can write in it. Nobody can read it, not even me, while you borrow it. You could shred the pages, but you cannot destroy the book. You can lend it exclusively in read/write to anybody else, recursively. When they are done, and when you are done, it is back in my hands.
I can give you this book. In this case it is yours to do as you please and you can destroy it.
If you think low level enough, even the shared reference analogy describes what happens in a computer. Nothing is truly parallel when accessing a shared resource. We need to take turns reading the pages. The hardware does this quickly by means of cached copies. And if you don't want people tearing off pages, give then a read only book except for the margins.
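The analogy translates almost line for line into code (a toy rendering):

fn read(book: &String) {         // shared borrow: many readers, no writers
    println!("reading: {book}");
}

fn annotate(book: &mut String) { // exclusive borrow: one writer, no readers
    book.push_str(" (margin note)");
}

fn give_away(book: String) {     // ownership transferred; may be destroyed
    drop(book);
}

fn main() {
    let mut book = String::from("my book");
    read(&book);
    annotate(&mut book);
    give_away(book);
    // `book` is gone; using it here would not compile.
}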
Unions may be the better analog here.
> Curiously enough, this is also true of Python - just less obvious because it doesn't have any variables that aren't pointers, and most operators perform an implicit dereference.
That's more like an implementation detail than a quality of the language itself. A JavaScript implementation might do the same. For example, values in V8 are all pointers to the heap except in the case of small (31-bit) integers, and a less optimized implementation might not even make that distinction and allocate everything on the heap. Similarly, a Python implementation might store SMIs directly where the pointer would be, like V8. PyPy uses tagged pointers/SMIs and may even allocate registers for values.
Tagged unions would be, except they aren't first class in C.
The best analogy here would probably be OCaml polymorphic variants (https://dev.realworldocaml.org/variants.html)
> That's more like an implementation detail than a quality of the language itself.
It is not, though, because - unlike JavaScript - the fact that everything is a reference to an object, and each object has a unique identity, is explicitly a part of Python semantics, and this is very visible in many cases. It's easy to observe it for "primitive" types as well simply by inheriting from them.
(OTOH the fact that a given Python implementation might still implement this by using tagged pointers etc is an implementation detail, because it is still required to behave as-if everything was an object).