
Four Years of Jai (2024)

(smarimccarthy.is)
166 points by xixixao | 1 comment
pcwalton ◴[] No.43726315[source]
> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

That exists; it's called garbage collection.

If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.

replies(4): >>43726355 #>>43726431 #>>43727184 #>>43731326 #
mjburgess ◴[] No.43726431[source]
Well, 1) the temporary allocator strategy; and 2) `defer` kinda go against the spirit of this observation.

With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. Of those it doesn't, `defer` is that "other single line".
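The temporary-allocator idea can be sketched in a few lines. This is a toy Rust version for illustration only, not Jai's actual Temporary_Storage API: allocations are bump-allocated and never freed individually, and a single `reset()` reclaims everything at once.

```rust
// A toy bump/temporary allocator: individual allocations are never freed
// one by one; a single `reset()` call reclaims everything at once.
struct TempArena {
    buf: Vec<u8>,
}

impl TempArena {
    fn new() -> Self {
        TempArena { buf: Vec::new() }
    }

    // Copy `bytes` into the arena and return the range it occupies.
    fn alloc(&mut self, bytes: &[u8]) -> std::ops::Range<usize> {
        let start = self.buf.len();
        self.buf.extend_from_slice(bytes);
        start..self.buf.len()
    }

    fn get(&self, r: std::ops::Range<usize>) -> &[u8] {
        &self.buf[r]
    }

    // The "single line of code": everything allocated this frame is gone.
    fn reset(&mut self) {
        self.buf.clear();
    }
}

fn main() {
    let mut arena = TempArena::new();
    let a = arena.alloc(b"hello");
    let b = arena.alloc(b"world");
    assert_eq!(arena.get(a), b"hello");
    assert_eq!(arena.get(b), b"world");
    arena.reset(); // frees both allocations at once
    assert!(arena.buf.is_empty());
}
```

The payoff is exactly the one described above: the lifetime of every temporary allocation is tied to one visible `reset()` call, so there is nothing implicit to reason about.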

I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.

There feels something deeply wrong with RAII-style languages: you carry the burden of reasoning about implicit behaviour, and all the while that behaviour saves you nothing. It's the worst of both worlds: hiddenness and burdensomeness.

replies(2): >>43726458 #>>43729593 #
hmry ◴[] No.43726458[source]
Neither of those gives memory safety, which is what the parent comment is about. If you release the temporary allocator while a pointer to some data is live, you get use after free. If you defer freeing a resource, and a pointer to the resource lives on after the scope exit, you get use after free.
replies(2): >>43726531 #>>43726581 #
francasso ◴[] No.43726531[source]
While technically true, it still simplifies memory management a lot. In fact the tradeoff is good enough that I would pick it over a borrow checker.
replies(1): >>43726637 #
junon ◴[] No.43726637[source]
I don't understand this take at all. The borrow checker is automatic and works across all variables. Defer et al. require you to remember to use them, and to use them correctly. It takes more effort to use defer correctly, whereas Rust's borrow checker works for you without your needing to do much extra at all! What am I missing?
replies(3): >>43726754 #>>43726792 #>>43729431 #
vouwfietsman ◴[] No.43726754[source]
> The borrow checker is automatic and works across all variables.

Not that I'm such a Rust hater, but this is also a simplification of the reality. "Fighting the borrow checker" has become a pretty common saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".

That's what you're missing.

replies(2): >>43726882 #>>43727057 #
tialaramex ◴[] No.43727057[source]
What's hilarious about "fighting the borrow checker" is that it's about the lexical lifetime borrow checking, which went away many years ago - fixing that is what "Non-lexical lifetimes" (NLL) was about. If you picked up Rust in the last 4-5 years you won't even know it was a thing. In that era you actually did need to "fight" to get obviously correct code to compile, because the checker only looked at the lexical structure.
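A concrete example of the difference: the following compiles fine today, because under NLL a borrow ends at its last use rather than at the end of its lexical scope.

```rust
fn main() {
    let mut scores = vec![1, 2, 3];
    let first = &scores[0];      // shared borrow starts here...
    println!("first = {first}"); // ...and, under NLL, ends at this last use,
    scores.push(4);              // so this mutable borrow is accepted today.
    // Under the old lexical checker, `first` was considered borrowed until
    // the end of the enclosing scope, and `scores.push(4)` was rejected.
    assert_eq!(scores.len(), 4);
}
```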

Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.

Yes, of course, when you make lifetime mistakes the borrowck means you have to fix them. It's true that in a sense in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it - and that in a language like Jai you can just endure the weird crashes. (But remember this article: the weird crashes aren't "Undefined Behaviour", apparently, even though that's exactly what they are.)

As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".

replies(2): >>43727824 #>>43731131 #
leecommamichael ◴[] No.43731131[source]
I appreciate what you're saying, though isn't undefined behavior having to do with the semantics of execution as specified by the language? Most languages outright decline to specify multiple threads of execution, and instead provide it as a library. I think C started that trend. I'm not sure if Jai even has a spec, but the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.

This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.

replies(1): >>43731361 #
tialaramex ◴[] No.43731361[source]
> the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.

Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.

Jai uses LLVM, so in many cases the UB is exactly the same as you'd see in Clang, since that also uses LLVM. For example, Jai can explicitly choose not to initialize a variable (unlike in C++23 and earlier, this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this makes the uninitialized variable poison. Exactly the same awful surprises result.
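For comparison, Rust exposes the same "explicitly uninitialized" escape hatch, but makes the danger visible in the type system via `MaybeUninit`. Reading the slot before writing it would be undefined behaviour (uninit/poison at the LLVM level); the sketch below writes first, so it is well-defined.

```rust
use std::mem::MaybeUninit;

fn main() {
    // Explicitly skip initialization. Reading this before writing it
    // would be UB - the value is uninitialized (poison at the LLVM level).
    let mut slot: MaybeUninit<u32> = MaybeUninit::uninit();
    slot.write(42); // initialize before any read
    // SAFETY: `slot` was written on the line above.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 42);
}
```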

replies(1): >>43732471 #
leecommamichael ◴[] No.43732471[source]
Your reasoning appears to be:

1. because it is the kind of optimizing compiler you say it is

2. because it uses LLVM

… there will be undefined behavior.

Unless you worked on Jai, you can’t support point 1. I’m not even sure if you’re right under that presumption, either.

replies(1): >>43735566 #
tialaramex ◴[] No.43735566{3}[source]
> because it is the kind of optimizing compiler you say it is

What other kind of optimisations are you imagining? I'm not talking about a particular "kind" of optimisation but the entire category. Let's look at two real-world optimisations from opposite ends of the scale to see:

1. Peephole removal of null sequences. This is a very easy optimisation, if we're going to do X and then do opposite-of-X we can do neither and have the same outcome which is typically smaller and faster. For example on a simple stack machine pushing register R10 and then popping R10 achieves nothing, so we can remove both of these steps from the resulting program.

BUT if we've defined everything this can't work because it means we're no longer touching the stack here, so a language will often not define such things at all (e.g. not even mentioning the existence of a "stack") and thus permit this optimisation.
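A minimal version of this pass fits in a few lines. The instruction set and names below are invented for illustration; the point is only that adjacent Push(r)/Pop(r) pairs are a no-op and can be deleted.

```rust
// Toy peephole pass: delete adjacent Push(r)/Pop(r) pairs, which together
// do nothing on a stack machine. Instruction set is invented for this sketch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Insn {
    Push(u8),
    Pop(u8),
    Add,
}

fn peephole(prog: &[Insn]) -> Vec<Insn> {
    let mut out: Vec<Insn> = Vec::new();
    for &insn in prog {
        match (out.last(), insn) {
            // Push(r) immediately followed by Pop(r) cancels out.
            (Some(&Insn::Push(r)), Insn::Pop(s)) if r == s => {
                out.pop();
            }
            _ => out.push(insn),
        }
    }
    out
}

fn main() {
    // Pushing R10 and then popping R10 achieves nothing, so both go away.
    let prog = [Insn::Push(10), Insn::Pop(10), Insn::Add];
    assert_eq!(peephole(&prog), vec![Insn::Add]);
}
```

Note that this transformation is only legal because the language did not promise anything about the stack's intermediate contents; defining that behaviour would forbid the pass.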

2. Idiom recognition of population count. The compiler can analyse some function you've written and conclude that it's actually trying to count all the set bits in a value, but many modern CPUs have a dedicated instruction for that, so, the compiler can simply emit that CPU instruction where you call your function.

BUT you wrote this whole complicated function; if we've defined everything, then all the fine details of your function must be reproduced: there must be a function call, maybe you make some temporary accumulator, you test and increment in a loop -- all defined, so such an optimisation would be impossible.
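Here is the shape of code this idiom recognition targets: a hand-written bit-counting loop next to the intrinsic the optimiser would substitute (Rust's `u32::count_ones`, which lowers to a dedicated instruction such as POPCNT where the CPU has one). The two agree on every input.

```rust
// The "hand-written" population count an optimiser might recognize:
// test the low bit, accumulate, shift right, repeat.
fn popcount_loop(mut x: u32) -> u32 {
    let mut n = 0;
    while x != 0 {
        n += x & 1;
        x >>= 1;
    }
    n
}

fn main() {
    for &v in &[0u32, 1, 0b1011, u32::MAX] {
        // `count_ones` is the single-instruction version the compiler
        // can swap in for the loop above.
        assert_eq!(popcount_loop(v), v.count_ones());
    }
}
```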