All have different costs and capabilities across implementation, performance and developer experience.
Then we have what everyone else besides Rust is actually going for: the productivity of automatic resource management (however it's achieved), coupled with one of the type systems above only for performance-critical code paths.
Doesn't matter if it's purely "syntactic" because the language is garbage collected; just the fact of specifying what owns what and being explicit about multiple references is great imo.
Some sort of effect system can already be simulated with Kotlin features too.
Programming language theory is so interesting!
Maybe a dumb question but couldn't you just run multiple implementations in parallel threads and whichever finishes first with a positive result wins?
It has migrated from a scope-based borrow checker to a non-lexical borrow checker, and now has an experimental Polonius implementation as an option. However, once a new implementation becomes production-ready, the old one gets discarded, because there's no reason to choose it: borrow checking is fast, and the newer implementations accept strictly more (correct) programs.
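A minimal example (illustrative, not from the thread) of the kind of program the old scope-based checker rejected but NLL accepts, because borrows now end at their last use rather than at the end of the enclosing scope:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // shared borrow of `v`
        println!("{first}"); // last use of `first`: the borrow ends here
        v.push(4);           // OK under NLL; the old lexical checker kept
                             // the borrow alive to end of scope and errored
    }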
You also have the Rc and RefCell types, which give you greater flexibility at the cost of some runtime checks.
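A small sketch of what that buys you (illustrative names):

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Rc gives shared ownership; RefCell defers borrow checking to runtime.
        let shared = Rc::new(RefCell::new(vec![1, 2]));
        let alias = Rc::clone(&shared);
        alias.borrow_mut().push(3); // panics at runtime if already borrowed
        assert_eq!(shared.borrow().len(), 3);
    }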
I'd just like to interject for a moment. What you're referring to as "affine types" is, in fact, Uniqueness Types. The difference has to do with how they interact with unrestricted types. In Rust, these "unrestricted types" are references (which can be used multiple times due to implementing Copy).
Uniqueness types allow functions to place a constraint on the caller ("this argument cannot be aliased when you pass it to me"), but place no restriction on the callee. This is useful for Rust because (among other reasons) if a value is not aliased, you can free it and be sure that you're not leaving behind references to freed data.
Affine types are the opposite: they allow the caller to place a restriction on the callee ("I'm passing you this value, but you may use it at most once"), which is not something you can express in Rust's type system, because the callee is always free to create a reference from its argument and pass that reference to multiple functions.
This is often explained via the "do not use more than once" rule, but that's not the actual definition, and as your example shows, following that simplified explanation to the letter can cause confusion.
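To make the callee side concrete, a minimal Rust sketch (function names are made up):

    fn observe(_: &String) {}

    // The callee receives the value by move, yet nothing stops it from
    // creating a reference to its argument and handing that reference
    // out any number of times -- the point made above.
    fn callee(s: String) {
        observe(&s);
        observe(&s);
    }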
> because the callee is always free to create a reference from its argument and pass that reference to multiple functions.
Passing a reference is not the same thing as passing the actual value, so this does not contradict affinity.
"Rust", in this context, is "merely" "the usual invariants that people want" and "a suite of optimizations that assume those usual invariants, but not more or less".
I agree that passing a reference is not the same thing as passing the actual value. If it were, there would really be no point to references. However, it does contradict affinity. Specifically, the fact that multiple references can be created from the same value, combined with the properties of references, contradicts affinity.
> At its core, "affine" means that the type system has exchange and weakening but not contraction, and that exactly characterizes Rust's type system.
Well, the Rust type system certainly does support contraction, as I can use a reference multiple times. So what is that if not contraction? It seems like Rust at least does support contraction for references.
But in practice, having absolutely no contraction is not a very useful definition of affine, because no practical programming language would ever satisfy it. It prohibits too much, and the language would not even be Turing-complete. Instead, there is usually an "affine world" and an "exponential world" (exponential meaning "unrestricted" values that you can do whatever you want with). The convention is that values can go from the exponential world to the affine world, but not back. So a function taking an affine value can be passed any value but must use it in an affine way, while a function taking an exponential (unrestricted) value can only be passed exponential values, not affine ones.
If you don't believe me, you can try using Linear Haskell and notice that a function taking a linear argument can be passed a non-linear argument, but not the other way around.
If you interpret Rust's type system this way, it's natural to interpret references as exponentials. But references have the opposite convention: you can go from owned values to references, but not the other way around, which is precisely the opposite of the convention around linear/affine type systems. Because these systems feel very different to use and enforce very different properties, I do think it's important that we have separate names for them rather than referring to both as "affine". And the usual name for the Rust-like system is "uniqueness types"; see https://docs.idris-lang.org/en/latest/reference/uniqueness-t... or https://en.wikipedia.org/wiki/Uniqueness_type .
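A tiny Rust sketch of that one-way direction (illustrative only):

    fn main() {
        let owned = String::from("hi");
        let r: &String = &owned;       // owned -> reference: always allowed
        let _back: String = r.clone(); // reference -> owned only by cloning
    }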
Good question! For shared references, the answer is that they are `Copy`, so they indeed have contraction. Affinity just means that contraction is not a universal property; some types/propositions may still have contraction. For mutable references, you can't actually use them multiple times. However, there is a desugaring phase going on before affinity gets checked, so uses of a mutable reference `r` get replaced by `&mut *r` everywhere. That's not using contraction: it's not literally passing `r` somewhere, it is calling a particular (and interesting) operation on `r` ("reborrowing", sketched below).
Rust is not just an affine system, it is an affine system extended with borrowing. But I think it is still entirely fair to call it an affine system, for the simple fact that the language will prevent you from "using" a variable twice. "reborrowing" is just not a case of "using", it is its own special case with its own rules.
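A minimal sketch of that desugaring (with a made-up helper function):

    fn touch(_: &mut i32) {}

    fn main() {
        let mut x = 0;
        let r = &mut x;
        touch(&mut *r); // explicit reborrow
        touch(r);       // implicit reborrow: desugars to touch(&mut *r),
                        // so the affine variable `r` is never used twice
        *r += 1;        // also a use of `*r`, not of `r` itself
    }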
> But in practice, having absolutely no contraction is not a very useful definition of affine,
Obviously Rust has a class of "duplicable" types, called `Copy`. That's beside the point though.
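For concreteness (a minimal sketch):

    fn main() {
        let n: i32 = 5;             // i32 is Copy: duplicating is fine
        let (a, b) = (n, n);
        let s = String::from("hi"); // String is not Copy: affine
        let t = s;
        // let u = s;               // error[E0382]: use of moved value: `s`
        let _ = (a, b, t);
    }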
> If you interpret Rust's type system this way, it's natural to interpret references as exponentials.
Why would that be natural? Mutable references are not even duplicable, so what you say makes little sense for references in general. Maybe you mean shared references -- those are just an example of a duplicable type.
Rust doesn't have a modality in its type system that would make every type duplicable, so there is no equivalent to exponentials. (In particular, `&T` isn't a modality around `T`. It's a different type, with a different representation. And as you noted, even if it were a modality, it wouldn't correspond to exponentials.)
But a type system can be affine/linear without having exponentials so I don't understand the point of this remark.
Uniqueness types seem to be all about how many references there are to a value. You can use linear/affine types to enforce such a uniqueness property (and that is indeed what Rust does), but that doesn't take away from the fact that you have a linear/affine type system.
> Because these systems feel very different to use and enforce very different properties,
I can't talk about the "feel" as I never programmed in an affine language (other than Rust ;), but in terms of the properties, what Rust does is extremely closely related to affine logics: the core property being enforced is that things do not get duplicated. My model of Rust, RustBelt, uses an affine separation logic to encode the properties of the Rust type system, and there's a lot of overlap between separation logic and linear logic. So we have further strong evidence here that it makes perfect sense to call Rust an affine language.
Also, the claims about the Curry-Howard correspondence are wrong. It does not prove that Rust is an affine language: https://liamoc.net/forest/loc-000S/index.xml
But Swift DOES have affine types, with the "noncopyable" types that don't allow contraction.
And some people like to claim that the Curry-Howard correspondence proves something about their type system, but this is only true for dependently typed languages.
And the proofs aren't about program behavior.
Swift doesn't have references like Rust's, and you can't even have unsafe raw pointers to variables without producing a dangling pointer, which makes Swift more restrictive and less powerful than Rust.
Linearity and uniqueness have a dual nature, which only arises when there are expressions that are not linear/not unique. At the same time, they have a lot in common, so we do not have a situation that warrants separate names. It is even possible to combine both in the same type system, as the authors demonstrate.
Taken from the paper:
"Linearity and uniqueness behave dually with respect to composition, but identically with respect to structural rules, i.e., their internal plumbing."
The point of Affine logic is that it doesn't allow universal, unconstrained contraction, not that you can never do an operation that has the same properties that contraction would have in some circumstances. The same is true of Rust's type system.
https://users.rust-lang.org/t/polonius-is-more-ergonomic-tha...
> I recommend watching the video @nerditation linked. I believe Amanda mentioned somewhere that Polonius is 5000x slower than the existing borrow-checker; IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.
fn split_at(&self, mid: usize) -> (&[T], &[T])
fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T])
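For reference, a sketch of how split_at_mut is typically implemented (specialized to i32 for brevity): safe code can't prove to the borrow checker that the two halves are disjoint, so the standard library drops to raw pointers internally.

    fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = v.len();
        assert!(mid <= len);
        let ptr = v.as_mut_ptr();
        // SAFETY: the ranges [0, mid) and [mid, len) do not overlap.
        unsafe {
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }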
What might their triples look like in separation logic?

The borrow checker is supposed to be a sound static analysis, yes. I think Ralf Jung's comment at https://news.ycombinator.com/item?id=44511416 says soundness hasn't been proved relative to tree borrows yet.
> Maybe a dumb question but couldn't you just run multiple implementations in parallel threads and whichever finishes first with a positive result wins?
IIUC when you're compiling reasonably-sized programs you're already using all the cores, so parallelizing here doesn't seem like it's going to net you much gain, especially if it means you're doing a lot of extra work.
No, they are not. You're not using a value more than once, you are borrowing it, which is an extension of affine logic that stays true to the core principles of affinity. I have modeled multiple shared references in an affine logic (look up RustBelt), i.e. in a logic that doesn't have contraction, so we have very hard evidence for this claim.
(The other part you said about contraction and affine logics has already been successfully rebutted in some other replies so I won't repeat their points.)
That's just wrong. Affine logic totally can have contraction for some propositions.
Also, CH totally exists for non-dependently-typed languages -- for instance, there is a beautiful correspondence between the simply-typed lambda calculus and propositional logic. Please stop repeating claims that you apparently do not understand.
> when you're compiling reasonably-sized programs you're already using all the cores
Only on full rebuilds. I would assume most build jobs with a human in the loop only compile a handful of crates at once.
In fact, as CPUs get more and more parallel, we'll cross the threshold where thread count surpasses work items more often, and then will come the time to get creative ;)
These functions do no memory accesses; from an operational perspective they are both essentially:
p, n -> (p, p + n)
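In Rust terms, a sketch of that purely arithmetic view (made-up name; all safety obligations are pushed onto the caller):

    // No memory is touched: the result is just pointer arithmetic.
    fn split_at_raw(p: *mut u8, n: usize) -> (*mut u8, *mut u8) {
        (p, p.wrapping_add(n))
    }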
The separation logics I've seen all have what we might call a strong extensional-calculus / shallow-DSL-embedding flavor. What that means, roughly, is that there is a strong distinction between the "internal" program under scrutiny and fairly arbitrary external reasoning about it.
I bring this up in order to say that we're very far from "what is the principal type of this expression?" type questions. There are many, many, ways one might type split_at/_mut depending on what the context requires. The interesting thing about these in Rust is really not the functions themselves, but & and &mut. Those types are stand-ins for some family of those myriad potential contexts, in the way that interfaces are always reifications of the possibilities of what the component on the other side of the interface might be.
In the "menagerie of separation logics" I was originally thinking of, there may not be singular & and &mut that work for all purposes. The most reusable form split_at indeed may be tantamount to the simple arithmetic pure function I wrote above, leaving to the caller the proof of showing whatever properties it needs are carried over from the original pointer to the new ones. Given the arithmetic relations, and the knowledge that nothing else is happening (related to the famous frame rule), the caller should be able to do this.