Perhaps the safety is the tradeoff with the comparative ease of using the language compared to Rust, but I’d love the best of both worlds if it were possible
I am just going to quote what pcwalton said the other day, which perhaps answers your question.
>> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
> That exists; it's called garbage collection.
> If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
I ask because I am obviously blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and when things mutate.
The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")
When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.
Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.
People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.
The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.
How do you know you haven't been writing unsafe code for years, when C's list of undefined behaviors alone has around 200 entries[1]?
[1]https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definit... (Annex J.2 page 490)
You could do that using Cell or RefCell. I agree that it makes it more cumbersome.
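A minimal sketch of what that looks like with RefCell in a single-threaded program (names made up): the aliasing check moves from compile time to run time, which is exactly where the extra ceremony comes from.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // One value, readable and writable through multiple handles.
        let config = Rc::new(RefCell::new(String::from("default")));
        let reader = Rc::clone(&config);

        // Some other part of the program mutates it...
        config.borrow_mut().push_str("-updated");

        // ...while this part reads it; panics only if a borrow_mut is still live.
        println!("current config: {}", reader.borrow());
    }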
This is similar to creating a broadly-used data structure and realizing that some field has to be optional. Option<T> will require you to change everything touching it, and virally spread through all the code that wanted to use that field unconditionally. However, that's not the fault of the Option syntax, it's the fault of semantics of optionality. In languages that don't make this "miserable" at compile time, this problem manifests with a whack-a-mole of NullPointerExceptions at run time.
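A toy example of that viral spread (field and function names invented): once the field becomes Option<T>, every caller that used it unconditionally has to decide what "absent" means, at compile time instead of via a runtime NPE.

    struct User {
        name: String,
        email: Option<String>, // was `String` before the field became optional
    }

    fn contact_line(user: &User) -> String {
        // Every use site like this one is now forced to handle the None case.
        match &user.email {
            Some(email) => format!("{} <{}>", user.name, email),
            None => format!("{} (no email on file)", user.name),
        }
    }

    fn main() {
        let u = User { name: "Ada".into(), email: None };
        println!("{}", contact_line(&u));
    }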
With experience, I don't get this "oh no, now there's a lifetime popping up everywhere" surprise in Rust any more. Whether something is going to be a temporary view or permanent storage can be known ahead of time, and if it can be both, it can be designed with Cow-like types.
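Roughly what I mean by a Cow-like type, as a sketch (using std::borrow::Cow; the struct is invented): the same field can hold either a temporary view or owned storage, so the "view or storage?" decision doesn't have to be baked into the type up front.

    use std::borrow::Cow;

    struct Label<'a> {
        // Borrowed when the caller already owns the string,
        // owned when it had to be built on the fly.
        text: Cow<'a, str>,
    }

    fn main() {
        let borrowed = Label { text: Cow::Borrowed("static name") };
        let owned = Label { text: Cow::Owned(format!("generated-{}", 42)) };
        println!("{} / {}", borrowed.text, owned.text);
    }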
I also got a sense for when using a temporary loan is a premature optimization. All data has to be stored somewhere (you can't have a reference to data that hasn't been stored). Designs that try to be ultra-efficient by allowing only temporary references often force data to be stored in a temporary location first, and then borrowed, which doesn't avoid any allocations, only adds dependencies on external storage. Instead, the design can support moving or collecting data into owned (non-temporary) storage directly. It can then keep it for an arbitrary lifetime without lifetime annotations, and hand out temporary references to it whenever needed. The run-time cost can be the same, but the semantics are much easier to work with.
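A sketch of that "own the data, lend it out" design (types invented for illustration): the container collects owned values, so it carries no lifetime parameter, and callers only borrow from it for as long as they actually need.

    struct Registry {
        names: Vec<String>, // owned storage, lives as long as the Registry does
    }

    impl Registry {
        fn add(&mut self, name: String) {
            self.names.push(name); // data is moved in, not re-borrowed from elsewhere
        }

        fn lookup(&self, prefix: &str) -> Option<&str> {
            // Short-lived borrow handed out on demand; the data stays owned here.
            self.names.iter().map(String::as_str).find(|n| n.starts_with(prefix))
        }
    }

    fn main() {
        let mut reg = Registry { names: Vec::new() };
        reg.add("alpha".to_string());
        assert_eq!(reg.lookup("al"), Some("alpha"));
    }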
My issue is with ergonomics and performance. In my experience with a range of languages, the most performant way of writing the code is not the way you would idiomatically write it. They make good performance more complicated than it should be.
This holds true for me in my work with Java, Python, C#, and JavaScript.
What I suppose I'm looking for is a better compromise between having some form of managed runtime and not having one.
And yes, I've also tried Go, and its DX is its own type of pain for me. I should try it again now that it has generics.
This separation is also why it is basically impossible to make apples-to-apples comparisons between languages.
Messy things I've hit (from ~5KLoC of Rust; I'm a Rust beginner, I primarily do C): cyclical references; a large structure that at first needs efficient single-threaded mutation while referenced from multiple places (i.e. it must use some form of cell), but has to become shareable across threads once all the mutating is done; self-referential structures being roughly impossible to move around (namely, an object holding &-s to objects allocated by a bump allocator, movable together as a pair, which isn't a thing, at least not without libraries I couldn't figure out); and refactoring mutability/lifetimes, which is also rather messy.
What kind of idiomatic or unidiomatic C# do you have in mind?
I'd say if you are okay with GC side effects, achieving good performance targets is way easier than if you care about P99/P99.9 latencies.
Although the more usual pattern here is to ditch pointers and instead have a giant array of objects referring to each other via indices into said array. But this is effectively working around the borrow checker - those indices are semantically unchecked references, and although out-of-bounds checks will prevent memory corruption, it is possible to store index to some object only for that object to be replaced with something else entirely later.
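For example (toy types, just to show the hazard): the out-of-bounds check still fires, but a stale index that now points at a replacement object is accepted without complaint.

    struct Enemy { hp: u32 }

    fn main() {
        let mut enemies: Vec<Enemy> = vec![Enemy { hp: 10 }, Enemy { hp: 20 }];
        let target: usize = 1; // semantically a reference, but unchecked

        // The slot is reused for a completely different enemy...
        enemies[1] = Enemy { hp: 999 };

        // ...and the old index still "works", now pointing at the wrong object.
        println!("target hp: {}", enemies[target].hp);
    }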
That's what generational arenas are for, at the cost of having to check for index validity on every access. But that cost is only in comparison to "keep a pointer in a field" with no additional logic, which is bug-prone.
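A hand-rolled sketch of the idea (crates like slotmap or generational-arena do this properly): each slot carries a generation, so a stale handle is detected at the cost of one extra comparison per access.

    #[derive(Clone, Copy)]
    struct Handle { index: usize, generation: u64 }

    struct Arena<T> {
        slots: Vec<(u64, Option<T>)>, // (generation, value)
    }

    impl<T> Arena<T> {
        fn new() -> Self { Arena { slots: Vec::new() } }

        fn insert(&mut self, value: T) -> Handle {
            // For brevity this always appends; a real arena reuses freed slots
            // and bumps their generation on reuse.
            self.slots.push((0, Some(value)));
            Handle { index: self.slots.len() - 1, generation: 0 }
        }

        fn remove(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index) {
                if slot.0 == h.generation {
                    slot.0 += 1; // invalidates all existing handles to this slot
                    slot.1 = None;
                }
            }
        }

        fn get(&self, h: Handle) -> Option<&T> {
            // The generation check is the per-access cost mentioned above.
            self.slots.get(h.index)
                .filter(|(g, _)| *g == h.generation)
                .and_then(|(_, v)| v.as_ref())
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let h = arena.insert("first");
        arena.remove(h);
        assert!(arena.get(h).is_none()); // stale handle detected, not misread
    }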
Writing anything provable is hard because it forces you to think carefully about it. You can no longer reason by going into flow mode, letting the fast and incorrect part of the brain take over.
This means it rejects, by definition alone, bug-free code because that bug-free code uses a pattern that is not acceptable.
IOW, while Rust rejects code with bugs, it also rejects code without bugs.
It's part of the deal when choosing Rust, and people who choose Rust know this upfront and are okay with it.
That is not true by definition alone. It is only true if you add the corollary that the patterns which rustc prevents are sometimes bug-free code.
That corollary could only fail to hold if a pattern were unable to ever produce bug-free code.
In practice, there isn't a pattern that reliably, 100% of the time and deterministically produces a bug.