I am perfectly fine for it to remain a closed alpha while Jonathan irons out the design and enacts his vision, but I hope its source gets released or forked as free software eventually.
What I am curious about, which is how I evaluate any systems programming language, is how easy it is to write a kernel with Jai. Do I have access to an asm keyword, or can I easily link assembly files? Do I have access to the linker phase to customize the layout of the ELF file? Does it need a runtime to work? Can I disable the standard library?
That exists; it's called garbage collection.
If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
Tbh, I think a lot of open source projects should consider following a similar strategy --- as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
There is a prominent contributor to HN whose profile says they dream of a world where all languages offer automatic memory management and I think about that a lot, as a low-level backend engineer. Unless I find myself writing an HFT bot or a kernel, I have zero need to care about memory allocation, cycles, and who owns what.
Productivity >> worrying about memory.
This is a common misconception. You can release the source code of your software without accepting contributions.
When you're a somewhat famous programmer releasing a long anticipated project, there's going to be a lot of eyes on that project. That's just going to come with hassle.
With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. Of those it doesn't, `defer` is that "other single line".
I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.
There feels something deeply wrong with RAII-style languages: you carry the burden of reasoning about implicit behaviour, and all the while this behaviour saves you nothing. It's the worst of both worlds: hiddenness and burdensomeness.
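For anyone who hasn't used a language with `defer`: the pattern being contrasted here is roughly the following. Rust has no `defer` keyword, so this is a hand-rolled sketch of the idea using a drop guard (the `scopeguard` crate offers a polished version); the names are made up for illustration.

```rust
// A deliberately tiny "defer": run the stored closure when the guard goes
// out of scope, so the cleanup is written right next to the acquisition.
struct Defer<F: FnMut()>(F);

impl<F: FnMut()> Drop for Defer<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let resource = String::from("some resource");
    let _cleanup = Defer(|| println!("releasing {resource}"));

    println!("using {resource}");
    // `_cleanup` drops here, so "releasing ..." prints after "using ...".
}
```

In Jai/Odin/Go the guard is a single `defer` statement; either way the point the comments above are making is that the cleanup is explicit and sits next to the code that needs it.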
And how would that compiler work? Magic? Maybe clairvoyance?
it's not even contributions; it's that other people might start asking for features or discussing direction independently (which is fine, but jblow has been on record saying he doesn't want even the distraction of that).
The current idea of keeping jai closed source is to control the type of people who are able to alpha test it - people who are capable of overlooking the jank but have feedback on fundamental issues that aren't related to polish. They would also be capable of accepting the alpha-level completeness of the libraries, and of distinguishing a compiler bug from their own bug or misuse of a feature, etc.
You can't get that level of control if the source is opened.
pcw's comment was about tradeoffs programmers are willing to make -- and paints the picture more black-and-white than the reality; and more black and white than OP.
In games you have 16ms to draw billion+ triangles (etc.).
In web, you have 100ms to round-trip a request under arbitrarily high load (etc.)
Cases where you cannot "stop the world" at random and just "clean up garbage" are quite common in programming. And when they happen in GC'd languages, you're much worse off.
(As with any low-pause collector, the rest of your code is uniformly slower by some percentage because it has to make sure not to step on the toes of the concurrently-running collector.)
Open source is not a philosophy, it is a license.
Well, it is the public internet, people are free to discuss whatever they come across. Just like you're free to ignore all of them, and release your software Bellard-style (just dump the release at your website, see https://bellard.org/) without any bug tracker or place for people to send patches to.
Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".
That's what you're missing.
For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.
I can have graphs with pointers all over the place during the phase, I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.
Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.
There are other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.
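A rough Rust-flavoured analog of that phase pattern, for readers who want it concrete: everything for the phase lives in one arena, nodes reference each other freely (indices here instead of the raw pointers you'd use in Jai or C), and the whole thing is discarded at the phase boundary. All names are made up for illustration.

```rust
// One arena per phase: allocate freely during the phase, throw it all away after.
struct Node {
    value: u32,
    edges: Vec<usize>, // indices into the same arena; cycles are fine
}

struct PhaseArena {
    nodes: Vec<Node>,
}

impl PhaseArena {
    fn new() -> Self {
        PhaseArena { nodes: Vec::new() }
    }

    fn add(&mut self, value: u32) -> usize {
        self.nodes.push(Node { value, edges: Vec::new() });
        self.nodes.len() - 1
    }
}

fn main() {
    let mut arena = PhaseArena::new();

    // During the phase: build an arbitrarily tangled graph.
    let a = arena.add(1);
    let b = arena.add(2);
    arena.nodes[a].edges.push(b);
    arena.nodes[b].edges.push(a); // cycle, no ownership questions asked

    let total: u32 = arena.nodes.iter().map(|n| n.value).sum();
    println!("phase result: {total}");

    // Phase boundary: nothing survives, so drop (or clear and reuse) the arena.
    drop(arena);
}
```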
Also, a 19,000-line C++ program (this is tiny) does not take 45 minutes unless something is seriously broken; it should be a few seconds at most for a full rebuild, even with a decent amount of template usage. This makes me suspect this author doesn't have much C++ experience, as this should have been obvious to them.
I do like the build script being in the same language, CMake can just die.
The metaprogramming looks more confusing than C++'s; why is "sin"/"cos" a string?
Based on this article I'm not sure what Jai's strength is, I would have assumed metaprogramming and SIMD prior, but these are hardly discussed, and the bit on metaprogramming didn't make much sense to me.
Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.
Yes, of course, when you make lifetime mistakes the borrowck means you have to fix them. It's true that in a sense in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it, and that in a language like Jai you can just endure the weird crashes (but remember this article: the weird crashes aren't "Undefined Behaviour", apparently, even though that's exactly what they are).
As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".
1. The borrow checker is indeed a free lunch.
2. Your domain lends itself well to Rust; other domains don't.
3. Your code is more complicated than it would be in other languages to please the borrow checker, but you are unaware because it's just the natural process of writing code in Rust.
There are probably more things that could be going on, but I think this is clear.
I certainly doubt it's #1, given the high volume of very intelligent people who have negative experiences with the borrow checker.
You say this now, but between 2013 and around 2023 the prevailing definition of open source was that if you don't engage with the community and don't accept PRs, it isn't open source. And people would start bad-mouthing the project around the internet.
Working on a project is hard enough as it is.
That being said, I do see an issue with globally scoped imports. It would be nice to know if imports can be locally scoped into a namespace or struct.
In all, whether it competes or coexists (I don't believe the compiler for Jon's language can handle other languages, so you might use Zig to compile any C, C++, or Zig), it will be nice to see another programming language garner some attention and hopefully quell the hype around others.
Keeping things closed source is one way of indicating that. Another is to use a license that contains "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED [...]" and then let people make their own choices. Just because something is open source doesn't mean it's ready for widespread adoption.
"So, put simply, yes, you can shoot yourself in the foot, and the caliber is enormous. But you’re being treated like an adult the whole time"
That is, those of us who've noticed we make mistakes aren't adults, we're children, and this is a proper grown-up language -- pretty much the definition of condescending.
As an industry we need to worry about this more. I get that in business, if you can be less efficient in order to put out more features faster, your dps[0] is higher. But as both a programmer and an end user, I care deeply about efficiency. Bad enough when just one application is sucking up resources unnecessarily, but now it's nearly every application, up to and including the OS itself if you are lucky enough to be a Microsoft customer.
The hardware I have sitting on my desk is vastly more powerful than what I was rocking 10-20 years ago, but the user experience seems about the same. No new features have really revolutionized how I use the computer, so from my perspective all we have done is make everything slower in lockstep with hardware advances.
[0] dollars per second
Agreed, 45 minutes is insane. In my experience, and this does depend on a lot of variables, 1 million lines of C++ ends up taking about 20 minutes. If we assume this scales linearly (I don't think it does, but let's imagine), 19k lines should take about 20 seconds. Maybe a little more with overhead, or a little less because of less burden on the linker.
There's a lot of assumptions in that back-of-the-envelope math, but if they're in the right ballpark it does mean that Jai has an order of magnitude faster builds.
I'm sure the big win is having a legit module system instead of plaintext header #include
NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL was meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.
What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
Just like any programming paradigm, it takes time to get used to, and that time varies between people. And just like any programming paradigm, some people end up not liking it.
That doesn't mean it's a "free lunch."
For some people. For example, I personally have never had a partial borrowing error.
> it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
This is not for sure. That is, while it's code that could work, it's not obviously clear that it's correct. Rust cares a lot about the contract of function signatures, and partial borrows violate the signature, that's why they're not allowed. Some people want to relax that restriction. I personally think it's a bad idea.
People want to be able to specify partial borrowing in the signatures. There have been several proposals for this. But so far nothing has made it into the language.
Just to give an example of where I've run into countless partial borrowing problems: writing a Vulkan program. The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state. (Of course, this is not safe, because you could have accidental mutable aliasing.)
But in Rust, that just doesn't work. You get countless errors like "Can't call self.resize_framebuffer() because you've already borrowed self.grass_texture" (even though resize_framebuffer would never touch the grass texture), "Can't call self.upload_geometry() because you've already borrowed self.window.width", and so on.
So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments.
It would be so much nicer if you could instead annotate that resize_framebuffer only borrows self.framebuffer, and no other part of self.
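A condensed sketch of the kind of error being described (struct and method names are illustrative, not from any real codebase):

```rust
struct GraphicsState {
    framebuffer: Vec<u8>,
    grass_texture: Vec<u8>,
}

impl GraphicsState {
    // Only touches `self.framebuffer`, but its signature borrows all of `self`.
    fn resize_framebuffer(&mut self, len: usize) {
        self.framebuffer.resize(len, 0);
    }
}

fn main() {
    let mut state = GraphicsState {
        framebuffer: vec![0; 16],
        grass_texture: vec![0; 16],
    };

    let grass = &mut state.grass_texture;
    // state.resize_framebuffer(32); // error[E0499]: cannot borrow `state` as
    //                               // mutable more than once at a time
    grass[0] = 1;

    // Once nothing else from `state` is borrowed, the call is fine:
    state.resize_framebuffer(32);
}
```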
Which games are these? Are you referring to games written in Unity where the game logic is scripted in C#? Or are you referring to Minecraft Java Edition?
I seriously doubt you would get close to the same performance in a modern AAA title running in a Java/C# based engine.
Releasing it when you're not ready to collect any upside from that decision ("simply ignore them") but will incur all the downside from a confused and muddled understanding of what the project is at any given time sounds like a really bad idea.
Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't.
That rule prevents memory corruption, but it outlaws many programs that break the rule yet actually are otherwise memory safe, and it also outlaws programs that follow the rule but wherein the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
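A tiny illustration of the second half of that annoyance: code that never actually aliases anything, but which the checker cannot prove safe without help from a library escape hatch:

```rust
fn main() {
    let mut v = [1, 2, 3, 4];

    // let a = &mut v[0];
    // let b = &mut v[3];   // error[E0499]: cannot borrow `v[_]` as mutable
    // *a += *b;            // more than once at a time

    // The indices are obviously disjoint, but the checker doesn't reason
    // about index values, so you reach for an audited escape hatch instead:
    let (left, right) = v.split_at_mut(2);
    left[0] += right[1];
    println!("{v:?}"); // [5, 2, 3, 4]
}
```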
Feels like there is a beneficial property in there.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
You'd be really hard pressed to find somebody who doesn't consider SQLite to be open source.
That's correct. That's why I said "Some people want to relax that restriction. I personally think it's a bad idea."
> The usual pattern in C++ etc is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state.
Yes, I think that this style of programming is not good, because it creates giant balls of aliasing state. I understand that if the library you use requires you to do this, you're sorta SOL, but in the programs I write, I've never been required to do this.
> So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
I am not sure that it's a good idea to change the language to make using poorly designed APIs easier. I also understand that reasonable people differ on this issue.
How does calling an anonymous function in JS cause memory allocations?
https://dev.epicgames.com/documentation/en-us/unreal-engine/...
You're right that there is a difference between "engine written largely in C++ and some parts are GC'd" vs "game written in Java/C#", but it's certainly not unheard of to use a GC in games, pervasively in simpler ones (Heck, Balatro is written in Lua!) and sparingly in even more advanced titles.
You have the legal right to use, share, modify, and compile SQLite's source. If it were Source Available, you'd have the right to look at it, but do none of those things.
"... Much like how object oriented programs carry around a this pointer all over the place when working with objects, in Jai, each thread carries around a context stack, which keeps track of some cross-functional stuff, like which is the default memory allocator to ..."
It reminds me of Go's context, and something like it should exist in any language dealing with multi-threading, as a way of carrying info about the parent thread/process (and tokens) for trace propagation, etc.
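This isn't Jai's actual mechanism, but the general "implicit context" idea can be sketched in Rust with a thread-local stack (all names here are made up for illustration):

```rust
use std::cell::RefCell;

#[derive(Clone, Copy, Debug)]
struct Context {
    use_temp_allocator: bool, // illustrative field, not a real Jai field
}

thread_local! {
    // Each thread carries its own stack of contexts.
    static CONTEXT: RefCell<Vec<Context>> =
        RefCell::new(vec![Context { use_temp_allocator: false }]);
}

fn with_context<R>(ctx: Context, f: impl FnOnce() -> R) -> R {
    CONTEXT.with(|c| c.borrow_mut().push(ctx));
    let result = f();
    CONTEXT.with(|c| { c.borrow_mut().pop(); });
    result
}

fn current_context() -> Context {
    CONTEXT.with(|c| *c.borrow().last().unwrap())
}

fn deep_in_the_callstack() {
    // No context parameter threaded through every signature;
    // the thread-local stack carries it instead.
    println!("temp allocator? {}", current_context().use_temp_allocator);
}

fn main() {
    deep_in_the_callstack(); // default context
    with_context(Context { use_temp_allocator: true }, deep_in_the_callstack);
}
```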
In the JS world, async/await was never about performance; it was always about having more readable code than Promise-chain spaghetti.
Edit: reading wavemode comment above "Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't." that I think was at least one of the problems I had.
I doubt we're going to come to an agreement here, though, so I'll leave it at that.
> reading wavemode comment above
This is true for `&mut T` but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of internal mutability and use `&T`, for example. What is needed depends on the circumstances.
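A minimal sketch of that: several shared `&T` aliases to the same value, with mutation going through `Cell` instead of `&mut T`:

```rust
use std::cell::Cell;

fn main() {
    let counter = Cell::new(0u32);

    // Two aliases to the same value, both usable for mutation.
    let a: &Cell<u32> = &counter;
    let b: &Cell<u32> = &counter;

    a.set(a.get() + 1);
    b.set(b.get() + 1);

    println!("{}", counter.get()); // 2
}
```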
Not even.
It used to be that when you clicked a button, things happened immediately, instead of a few seconds later as everything freezes up. Text could be entered into fields without inputs getting dropped or playing catch-up. A mysterious unkillable service wouldn't randomly decide to peg your core several times a day. This was all the case even as late as Windows 7.
> Do I have access to an asm keyword,
Yes, D has a builtin assembler
> or can I easily link assembly files?
Yes
> Do I have access to the linker phase to customize the layout of the ELF file?
D uses standard linkers.
> Does it need a runtime to work?
With the -betterC switch, it only relies on the C runtime
> Can I disable the standard library?
You don't need the C runtime if you don't call any of the functions in it.
To me this raises the question of whether this is a growing trend, or whether it's simply that languages staying closed source tends to be a death sentence for them in the long term.
Cite?
This problem statement is also such a weird introduction to specifically this new programming language. Yes, compiled languages with no GC are faster than the alternatives. But the problem is not, and was not, the alternatives. Those alternatives fill the vast majority of computing uses and work well enough.
The problem is compiled languages with no GC, before Rust, were bug prone, and difficult to use safely.
So -- why are we talking about this? Because jblow won't stop catastrophizing. He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
What chii is suggesting is open sourcing Jai now may cause nothing but distractions for the creator with 0 upside. People will write articles about its current state, ask why it's not like their favorite language or doesn't have such-and-such library. They will even suggest the creator is trying to "monopolize" some domain space because that's what programmers do to small open source projects.
That's a completely different situation from Sqlite and Linux, two massively-funded projects so mature and battle-tested that low-effort suggestions for the projects are not taken seriously. If I write an article asking Sqlite to be completely event-source focused in 5 years, I would be rightfully dunked on. Yet look at all the articles asking Zig to be "Rust but better."
I think you can look at any budding language over the past 20 years and see that people are not kind to a single maintainer with an open inbox.
This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.
Yeah, that's what I figured. I don't know JS internals all too well, so I thought he might be hinting at some unexpected JS runtime quirk.
This is true but there is a middle ground. You use a reasonably fast (i.e. compiled) GC lang, and write your own allocator(s) inside of it for performance-critical stuff.
Ironically, this is usually the right pattern even in non-GC langs: you typically want to minimize unnecessary allocations during runtime, and leverage techniques like object pooling to do that.
IOW I don't think raw performance is a good argument for not using GC (e.g. gamedev or scientific computing).
Not being able to afford the GC runtime overhead is a good argument (e.g. embedded programs, HFT).
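To make the pooling point above concrete, here is a bare-bones sketch of an object pool (illustrative, not a production design; the same shape works just as well in GC'd languages):

```rust
// Buffers are recycled instead of being reallocated every frame/request.
struct BufferPool {
    free: Vec<Vec<u8>>,
}

impl BufferPool {
    fn new() -> Self {
        BufferPool { free: Vec::new() }
    }

    fn acquire(&mut self, len: usize) -> Vec<u8> {
        let mut buf = self.free.pop().unwrap_or_default();
        buf.clear();
        buf.resize(len, 0);
        buf
    }

    fn release(&mut self, buf: Vec<u8>) {
        self.free.push(buf); // keep the capacity around for reuse
    }
}

fn main() {
    let mut pool = BufferPool::new();
    for _ in 0..3 {
        let buf = pool.acquire(1024); // after the first iteration, no new allocation
        // ... use buf ...
        pool.release(buf);
    }
}
```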
jblow's words are not the Gospel on high.
Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.
Jai uses LLVM so in many cases the UB is exactly the same as you'd see in Clang since that's also using LLVM. For example Jai can explicitly choose not to initialize a variable (unlike C++ 23 and earlier this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this means the uninitialized variable is poison. Exactly the same awful surprises result.
Have you actually used modern software?
There's a great rant about Visual Studio debugger which in recent versions cannot even update debugged values as you step through the program unlike its predecessors: https://youtu.be/GC-0tCy4P1U?si=t6BsHkHhoRF46mYM
And this is professional software. The state of personal software is worse. Most programs cannot show a page of text with a few images without consuming gigabytes of RAM and not-insignificant percentages of CPU.
In a recent interview he mentioned they are aiming for a release later this year: https://youtu.be/jamU6SQBtxk?si=nMTKbJjZ20YFwmaC
I'm sure jblow is having the same fears, and I hope to be wrong.
Still, it's fun to remember the first few videos about "hey, I have these ideas for a language". Great that he could afford to work on it.
Sometimes, mandalas are what we need.
The least you can say is that he is _opinionated_. Even his friend Casey Muratori is "friendly" in comparison, at least trying to publish courses to elevate us masses of unworthy typescript coders to the higher planes of programming.
Jblow just wants you to feel dumb for not programming right. He's unforgiving, Socrates-style.
The worst thing is : he might be right, most of the time.
We would not know, cause we find him infuriating, and, to be honest, we're just too dumb.
What they're describing is the downstream effect of not designing APIs that way. If you could have a single giant GraphicsState and define everything as a method on it, you would have to pass around barely any arguments at all: everything would be reachable from the &mut self reference. And either with some annotations or with just a tiny bit of non-local analysis, the compiler would still be able to ensure non-aliasing usage.
"functions that each take 20 parameters and return 5 values" is what you're forced to write in alternative to that, to avoid partial borrowing errors: for example, instead of a self.resize_framebuffer() method, a free function resize_framebuffer(&mut self.framebuffer, &mut self.size, &mut self.several_other_pieces_of_self, &mut self.borrowed_one_by_one).
I agree that the severity of this issue is highly dependent on what you're building, but sometimes you really do have a big ball of mutable state and there's not much you can do about it.
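Concretely, the "free function with the borrows split field by field" shape looks something like this (a sketch with made-up names):

```rust
struct Framebuffer { pixels: Vec<u8> }

struct GraphicsState {
    framebuffer: Framebuffer,
    width: u32,
    height: u32,
    grass_texture: Vec<u8>,
}

// Instead of a `self.resize_framebuffer()` method, a free function that
// borrows only the pieces it actually needs.
fn resize_framebuffer(fb: &mut Framebuffer, width: u32, height: u32) {
    fb.pixels.resize((width * height) as usize, 0);
}

fn main() {
    let mut state = GraphicsState {
        framebuffer: Framebuffer { pixels: Vec::new() },
        width: 640,
        height: 480,
        grass_texture: vec![0; 16],
    };

    let grass = &mut state.grass_texture; // still borrowed...
    resize_framebuffer(&mut state.framebuffer, state.width, state.height); // ...yet this compiles,
    grass[0] = 1;                         // because the borrows are taken field by field.
}
```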
Emphasis on even. It can have such rights, or not, the term may still apply regardless.
>Text could be entered into fields without inputs getting dropped or playing catch-up
This has been a complaint since the DOS days, in my experience. I'm pretty sure it's been industry standard since its inception that most large software providers make the software just fast enough that users don't give up, and that's it.
Take something like Notepad opening files. Large files take forever. Yet I can pop open Notepad++, from some random small team, and it opens the same file quickly.
They get a few "true believer" followers, give them special privileges like beta access (this case), special arcane knowledge (see Urbit), or even special standing within the community (also Urbit, although many other languages where the true believers are given authority over community spaces like discord/mailing list/irc etc.).
I don't associate in these spaces because I find the people especially toxic. Usually they are high drama because the focus isn't around technical matters but instead around the cult leader and the drama that surrounds him, defending/attacking his decisions, rationalizing his whims, and toeing the line.
Like this thread, where a large proportion is discussion about Blow as a personality rather than the technical merit of his work. He wants it that way; not to say that his work doesn't have technical merit, but that he'd rather we be talking about him.
One can be put off by whatever one is put off by. I've gotten to the point where I realized that I don't need to listen to everyone's opinion. Everyone's got some. If one opinion is important, it will likely be shared by more than one person. From that it follows that there's no need to subject oneself to specific people one is put off by. Or put another way: if there's an actionable critique, and two people are stating it, and one is a dick and the other isn't, I'll pay attention to the one who isn't a dick. Life's too short to waste it with abrasive people, regardless of whether that is "what is in their heart" or a constructed persona. The worst effect of the "asshole genius" trope is that it makes a lot of assholes think they are geniuses.
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
But it used to read
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from unknown persons.
(I randomly picked a date and found https://web.archive.org/web/20200111071813/https://sqlite.or... )
But, no, the hubris of the language creator, whose arrogance is probably close to a few nano-Dijkstras, makes it entirely possible that he prefers _not_ releasing a superior language, out of spite for the untermenschen that would "desecrate" it by writing web servers inside it.
So I'm pretty convinced now that he will just never release it except to a happy few, and then he will die of cardiovascular disease because he spent too much time sitting in a chair streaming nonsense, and the world will have missed an opportunity.
Then again, I'm just sad.
As John Stewart said: "on the bright side, I'm told that at some point the sun will turn supernova, and we'll all die."
What if you want to put a resource object (which needs a cleanup on destruction) into a vector then give up ownership of the vector to someone?
I write code in Go now after moving from C++ and God do I miss destructors. Saying that defer eliminates the need for RAII triggers me so much.
Sadly, there exists a breed of developer that is manipulative, obnoxious, and loves to waste time and denigrate someone building something. Relatively few people are genuinely interested (like the OP) in helping to develop the thing, test builds, etc. Most just want to make contributions for their GitHub profile (assuming OSS) or exorcise their inner demons by projecting their insecurities onto someone else.
From all of the JB content I've seen/read, this is a rough approximation of his position. It's far less stressful to just work on the idea in relative isolation until it's ready (by whatever standard) than to deal with the random chaos of letting anyone and everyone in.
This [1] is worth listening to (suspending cynicism) to get at the "why" (my editorialization, not JB).
Personally, I wish more people working on stuff were like this. It makes me far more likely to adopt it when it is ready because I can trust that the appropriate time was put in to building it.
1. because it is the kind of optimizing compiler you say it is
2. because it uses LLVM
… there will be undefined behavior.
Unless you worked on Jai, you can’t support point 1. I’m not even sure if you’re right under that presumption, either.
If you consider a small team working on this, developing the language seriously, earnestly, but as a means to an end on the side, I can totally see why they think it may be the best approach to develop the language fully internally. It's an iterative develop-as-you-go approach, you're writing a highly specific opinionated tool for your niche.
So maybe it's best to simply wait until engine + game are done, and they can (depending on the game's success) really devote focus and time on polishing language and compiler up, stabilizing a version 1.0 if you will, and "package" it in an appropriate manner.
Plus: they don't seem to be in the "promote a language for the language's sake" game. It doesn't seem to be about finding the perfect release date, with shiny mascot + discord server + full-fledged stdlib + full documentation from day one, to then "hire" redditors and youtubers to spread the word and have an armada of newbie programmers use it to write games... They seem to see it much more as creating a professional tool aimed at professional programmers, particularly in the domain of high-performance compiled languages, particularly for games. The people they are targeting will evaluate the language thoroughly when it's out, whether that's in 2019, 2025 or 2028. And whether they are top 10 in some popularity contest or not, I just don't think they're playing by such metrics. The right people will check it out once it's out, I'm sure. And whether such a language will be used or not will probably, hopefully even, not depend on finding the most hyped point in time to release it.
I do not subscribe to that idea, because with RAII you can still have batched drops; the only difference between the two defaults is that with defer the failure mode is leaks, while with RAII the failure mode is more code than you otherwise would have.
Sometimes nobody else shares the opinion and the “abrasive person” is both good-hearted and right in their belief: https://en.m.wikipedia.org/wiki/Ignaz_Semmelweis
Personally, I’d rather be the kind of person who could have evaluated Semmelweis’s claims dispassionately rather than one who reflexively wrote him off because he was strident in his opinions. Doctors of the second type tragically shortened the lives of those under their care!
Uh, yes. When was software better (like when was America great)? Do you remember what Windows and Linux and MacOS were like in 90s? What exactly is the software we are comparing?
> There's a great rant about Visual Studio debugger
Yeah, I'm not sure these are "great rants" as you say. Most are about how software with different constraints than video games isn't made with the same constraints as video games. Can you believe it?
And in case you somehow think I am against you: I am merely pointing out what happened between 2013 and 2023. I believe you were also one of the only few on HN who fought against it.
Modern software is indeed slow especially when you consider how fast modern hardware is.
If you want to feel the difference, try highly optimised software against a popular one. E.g.: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
Not exactly a surprise? Microsoft made a choice to move to C# and the code was slower? Says precious little about software in general and much more about the constraints of modern development.
> If you want to feel the difference, try highly optimised software against a popular one. E.g.: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
This reasoning is bonkers. Compare vastly different software with a vastly different design center to something only in the same vague class of systems?
If the question is "Is software getting worse or better?", doesn't it make more sense to compare newer software to the same old software? Again -- do you remember what Windows and Linux and MacOS were like in 90s? Do you not believe they have improved?
A lot of things being open sourced are using open source as a marketing ploy. I'm somewhat glad that jai is being developed this way - it's as opinionated as it can be, and with the promise to open source it after completion, i feel it is sufficient.
There are positives and negatives to it, I'm not naive to the way the world works. People have free speech and the right to criticise the language, with or without access to the compiler and toolchain itself, you will never stop the tide of crazy.
I personally believe that you can do opensource with strong stewardship even in the face of lunacy, the sqlite contributions policy is a very good example of handling this.
Closed or open, Blow will do what he wants. Waiting for a time when jai is in an "good enough state" will not change any of the insanity that you've mentioned above.
Maybe my definition is bad though.
Yes, yes I do.
Since then computers have become several orders of magnitude more powerful. You cannot even begin to imagine how fast and powerful our machines are.
And yet nearly everything is barely capable of minimally functioning. Everything is riddled with loading screens, lost inputs, freeze frames and janky scrolling etc. etc. Even OS-level and professional software.
I now have a AMD Ryzen 9 9950X3D CPU, GeForce RTX 5090 GPU, DDR5 6000MHz RAM and M.2 NVME disks. I should not even see any loading screen, or any operation taking longer than a second. And yet even Explorer manages to spend seconds before showing contents of some directories.
All the "public advertisement" he's done was a few early presentations of some ideas and then ... just live streaming his work
What other kind of optimisations are you imagining? I'm not talking about a particular "kind" of optimisation but the entire category. Let's look at two real-world optimisations from opposite ends of the scale to see:
1. Peephole removal of null sequences. This is a very easy optimisation, if we're going to do X and then do opposite-of-X we can do neither and have the same outcome which is typically smaller and faster. For example on a simple stack machine pushing register R10 and then popping R10 achieves nothing, so we can remove both of these steps from the resulting program.
BUT if we've defined everything this can't work because it means we're no longer touching the stack here, so a language will often not define such things at all (e.g. not even mentioning the existence of a "stack") and thus permit this optimisation.
2. Idiom recognition of population count. The compiler can analyse some function you've written and conclude that it's actually trying to count all the set bits in a value, but many modern CPUs have a dedicated instruction for that, so, the compiler can simply emit that CPU instruction where you call your function.
BUT You wrote this whole complicated function, if we've defined everything then all the fine details of your function must be reproduced, there must be a function call, maybe you make some temporary accumulator, you test and increment in a loop -- all defined, so such an optimisation would be impossible.
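A concrete version of example 2 (whether a given compiler recognizes this exact loop depends on the optimizer and target, so treat it as illustrative):

```rust
// A hand-written population count. Optimizing backends such as LLVM (which
// rustc uses) can often recognize this idiom and emit a single popcount-style
// instruction when the target supports it, instead of preserving the loop.
fn popcount(mut x: u64) -> u32 {
    let mut n = 0;
    while x != 0 {
        x &= x - 1; // clear the lowest set bit
        n += 1;
    }
    n
}

fn main() {
    assert_eq!(popcount(0b1011), 3);
    assert_eq!(popcount(0b1011), u64::count_ones(0b1011)); // the dedicated intrinsic
    println!("ok");
}
```

The optimisation is only legal because the language defines the result of the function, not every intermediate step of the loop you happened to write.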
You'd almost end up with two languages in one. It would be interesting to see a language fully embrace that, with fast/slow language dialects which have very good interoperability. The complexity cost would be high, but if the alternative is learning two languages rather than one...
If it's a persona, then he's at best a performer and entertainer pandering to an audience that enjoys or relates to immature, insufferable people. If it isn't a persona, then he's just an immature, insufferable person.
No, thank you. Either way, the result is psychologically, socially, and politically corrosive and typically attracts a horrendous, overall obnoxious audience.
But hey, that could be nostalgia, right? We can't run Win XP in today's world. Nor is it recommended, with lots of software no longer being supported on Win XP.
The same is the case for Android. Android 4 had decent performance; then Android 5 came and single-handedly reduced performance and battery life. And again, you can't go back, due to newer apps no longer supporting old Android versions.
This is also seen with Apple, where newer OS versions are painful on older devices.
So, on what basis do you fairly say that "modern apps are slow"? That's why I say to use faster software as a reference. I have Linux and Windows dual-boot on the same machine, and the difference in performance is night and day.
I'm sure having to remember to free resources manually has caused so much grief that they decided to come up with RAII, so an object going out of scope (either on the stack, or its owning object getting destroyed) would clean up its resources.
Compared to a lot of low-level people, I don't hate garbage collection either, with a lot of implementations reducing to pointer bumping for allocation, which is an equivalent behavior to these super-fast temporary arenas, with the caveat that once you run out of memory, the GC cleans up and defragments your heap.
If for some reason, you manage to throw away the memory you allocated before the GC comes along, all that memory becomes junk at zero cost, with the mark-and-sweep algorithm not even having to look at it.
I'm not claiming either GC or RAII are faultless, but throwing up your hands in the air and going back to 1970s methods is not a good solution imo.
That being said, I happen to find a lot that's good about Jai as well, which I'm not going to go into detail about.
We're debating made up stuff here. The reality is all in our collective heads.
Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
Less with GraalVM native-image:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Yes, I referred to benchmarks with large memory consumption, where Java still uses from 2 to 10 times (as in the binary-trees task) more memory, which is a large overhead.
All of these examples are Microsoft is not building X as well as it used to, which is entirely possible. However, Microsoft choosing to move languages says something entirely different to me than simply -- software somehow got worse. It says to me that devs weren't using C++ effectively. It says to me that a tradeoff was made re: raw performance for more flexibility and features. No one sets out to make slow software. Microsoft made a choice. At least think about why that might be.
Then you're not comparing old and new software. You're comparing apples and oranges. Neovim is comparable to VS Code in only the most superficial terms.
> Waiting for a time when jai is in an "good enough state" will not change any of the insanity that you've mentioned above.
I outlined some reasons why I think it would, and I think there's good precedent for that.
> the choice is not ours to make
I never said it was.
> People have free speech
I don't think I argued people don't have free speech? This is an easily defensible red herring to throw out, but it's irrelevant. People can say whatever they want on any forum, regardless of the projects openness. I am merely suggesting people are less inclined to shit on a battle-tested language than a young, mold-able one.
The average machine a person directly interacts with is a phone or TV at this point, both of which have major BoM restrictions and high pixel density displays. Memory is the primary determination of performance in such environments.
On desktops and servers, CPU performance is bottlenecked on memory - garbage collection isn't necessarily a problem there, but the nature of separate allocations and pointer chasing is.
On battery, garbage collection costs significant power and so it gets deferred (at least for full collections) until it's unavoidable. In practice this means that a large amount of heap space is "dead", which costs memory.
Your language sounds interesting - I've always thought that it would be cool to have a language where generational GC was exposed to the programmer. If you have a server, you can have one new generation arena per request with a write barrier for incoming references from the old generation to the new. Then you could perform young GC after every request, only paying for traversal+move of objects that survived.
they show an adoration for C, and they both hate C++, yet they chose C++ for their compiler, go figure
> C4 differentiates itself from other generational garbage collectors by supporting simultaneous-generational con- currency: the different generations are collected using concurrent (non stop-the-world) mechanisms
Well, goto also eliminates the "need" but language features are about making life easier, and life is much easier with RAII compared to having only defer.
Jai similarly is hard for IDEs, but has much more depth and power.
While Zig has momentum, it will need to solidify it to become mainstream, or Jai has a good chance of disrupting Zig’s popularity. Basically Zig is Jai but minus A LOT of features, while being more verbose and annoyingly strict about things.
Odin on the other hand has no compile time and in general has different solutions compared to Zig & Jai with its rich set of builtin types and IDE friendliness.
And finally C3 which is for people who want the familiarity of C with improvement but still IDE friendliness with limited metaprogramming. This language is also less of an overlap with Jai than Zig is.
However, C++ lambdas don't keep the parent environment alive, so if you capture a local variable by reference and call the lambda outside the original function environment, you have a dangling reference and get a crash.
We have far more isolation between software, we have cryptography that would have been impractical to compute decades ago, and it’s used at rest and on the wire. All that comes at significant cost. It might only be a few percent of performance on modern systems, and therefore easy to justify, but it would have been a higher percentage a few decades ago.
Another thing that’s not considered is the scale of data. Yes software is slower, but it’s processing more data. A video file now might be 4K, where decades ago it may have been 240p. It’s probably also far more compressed today to ensure that the file size growth wasn’t entirely linear. The simple act of replaying a video takes far more processing than it did before.
Lastly, the focus on dynamic languages is often either misinformed or purposefully misleading. LLM training is often done in Python and it’s some of the most performance sensitive work being done at the moment. Of course that’s because the actual training isn’t executing in a Python VM. The same is true for so much of “dynamic languages” though, the heavy lifting is done elsewhere and the actual performance benefits of rewriting the Python bit to C++ or something would often be minimal. This does vary of course, but it’s not something I see acknowledged in these overly simplified arguments.
Requirements have changed, software has to do far more, and we’re kidding ourselves if we think it’s comparable. That’s not to say we shouldn’t reduce wastage, we should! But to dismiss modern software engineering because of dynamic languages etc is naive.
It says that "our computers are thousands of times faster and more powerful than computers from the 90s and early 2000s" and yet somehow "flexibility and features" destroy all of those advancements.
And no, it's not just Microsoft.
Oh no. It can be compared in more than superficial terms. E.g. their team struggled to create a performant terminal in VS Code. Because the tech they chose (and the tech a lot of the world is using) is incapable of outputting text to the screen fast enough. Where "fast enough" is "with minimal acceptable speed which is still hundreds of times slower than a modern machine is capable of": https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...
So you can allocate resource in one function, then move the object across function boundaries, module boundaries, into another object etc. and in the end the resource will be released exactly once when the final object is destroyed. No need to remember in each of these places along the path to release the resource explicitly if there's an error (through defer or otherwise).
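A small sketch of that: a resource type with a destructor, pushed into a Vec, with the Vec then moved across function boundaries; the cleanup runs exactly once, wherever the last owner ends up.

```rust
struct Resource(u32);

impl Drop for Resource {
    fn drop(&mut self) {
        println!("releasing resource {}", self.0); // e.g. close a handle
    }
}

fn make_resources() -> Vec<Resource> {
    let mut v = Vec::new();
    v.push(Resource(1));
    v.push(Resource(2));
    v // ownership of the vector (and its resources) moves to the caller
}

fn consume(resources: Vec<Resource>) {
    println!("got {} resources", resources.len());
} // `resources` dropped here; each Resource is released exactly once

fn main() {
    let resources = make_resources();
    consume(resources); // give up ownership entirely; no explicit cleanup anywhere
}
```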
WTF are you talking about? Neovim doesn't implement a terminal?
In Jai/Odin, every scope has default global and temp allocators, there's nothing stopping you from transferring ownership and/or passing pointers down the callstack. Then you either free in the last scope where the pointer lives or you pick a natural lifetime near the top of the callstack, defer clear temp there, and forget about it.
Regardless of comptime, Odin and C3's public accessibility, and being close enough to Jai for folks to contemplate switching over, will eat at its public mind share. In both cases (be it Zig or Odin/C3), the longer that Jai keeps making the mistake of avoiding a full public release, the more it appears to be hurting itself. In fact, many would argue, that a bit of Jai's "shine" has already worn off. There are now many alternative languages out here, that have already been heavily influenced by it.
Jon is not going to stop public reaction nor will Jai be perfect, regardless of when he releases. At least releasing sooner, allows it to keep momentum. Not just generated by him, but by third parties, such as books and videos on it. Maybe that's where Jon is making a mistake. Not allowing others to help generate momentum.
Or you leak the resource.
Again, this person has no trouble understanding the BC; they have trouble with the outcome of satisfying the BC. Also, this person is writing Vulkan code, so intelligence is not the problem.
> is quite common and widely understood
This is an opinion expressed in a bubble, which does not in any-way disprove that the reverse is also expressed in another bubble.
This is the brain worm.
I agree with Blow and Muratori, et al., re: design patterns which are slow (aggressive OOP), and both are right about the solution (DOD). I agree that GCed languages are slower than minimal runtime non-GCed languages. Blow and Muratori are entirely right about the particulars of these tradeoffs, re: video games, their own software domain. But that's also why it's so frustrating that they don't understand that other software has different tradeoffs.
"Why is this new terminal not so fast?" is a fine question, but the answer is usually something like: Programming is an economic activity with tradeoffs, and, because that's true, perhaps the terminal didn't need to be that fast (by spending the marginal number of hours to make it faster)?
In the past, John Carmack and Michael Abrash did amazing things with the Pentium chip and x86 assembly, because those were the constraints on their problem. But don't forget that they didn't have the kind of constraints software has now. Memory safety and security being one important constraint they didn't really worry about unless that memory unsafety crashed the program. They didn't have distributed systems problems. They didn't have to ship their product to the cloud or the web (although they later did with Quake Live). Games are super interesting and complex, but so are distributed databases.
So -- I'd argue -- it is no less amazing to build the foundational software of the web. Don't kid yourself, these are incredibly hard problems, and it's perfectly okay that such software is not written in C/C++ for all the well known reasons C/C++ has problems in these domains. The reason this argument is such a muddle is because it may not have been reasonable to build some of things we've built without a GC, and that's okay.
From my own experience, I am working on a fuzzy finder in Rust, and let me tell you `fzf`, written in golang, is really hairy competition, because it has loads of features people really want and it doesn't have to worry about half the things I still need to worry about. Is junegunn a "bad" programmer and fzf a "worse" program, because my program is faster in a few arbitrary benchmarks? No, not at all. `fzf` has dozens of features we don't have and is fast enough for its domain, even where the domain expectation is non-GCed tools.