People used to pitch it as being simpler than Rust. I don't agree that it's simple at all anymore.
None of this is meant to be badmouthing or insulting. I'm a polyglot but love simple languages and syntaxes, so I tend to overly notice such things.
They’re not rushing, that’s for sure. But I’ve never worried that 1.0 will never happen because of an unending pursuit of impossible ideals.
So where is Zig's OS, browser, Docker, engine, security tool, whatever XYZ, that would make having Zig in the toolbox a requirement?
I don't see Bun or TigerBeetle being that app.
I think the main big thing that’s left for 1.0 is to resurrect async/await... and that’s a huge thing, because arguably very few languages, if any, have gotten that truly right.
As the PR description mentions: “This is part of a series of changes leading up to "I/O as an Interface" and Async/Await Resurrection.”
So this work is partially related to getting async/await right. And getting IO right is a very important part of that.
I think it’s a good idea for Zig to try to avoid a Python 3 situation after they reach 1.0. The project seems fairly focused to me, but they’re trying to solve some difficult problems. And they spend more time working on the compiler and compiler infrastructure than other languages, which is also good. Working on their own backend is actually critical for the language itself, because part of what’s holding Zig back from doing async right is the limitations and flaws in LLVM.
Programming languages which do get used are always in flux, for good reason - python is still undergoing major changes (free-threading, immutability, and others), and I'm grateful for it.
I tend to fall into the former camp. Something like BF would be the ultimate simple language, even if not particularly useful.
Also, I found that these interfaces only cause problems for performance and flexibility in Rust, so I didn’t even look at them in Zig.
https://www.reddit.com/r/Zig/comments/1d66gtp/comment/l6umbt...
Rust didn’t even have async/await at that time.
The fact that another breaking change has been introduced confirms my suspicion that Zig is not ready for primetime.
My conclusion is to just use C. For low-level programming it's very hard to improve on C. No contender is likely to have a killer feature that lets you write the same code in a fifth of the lines, or that makes the code any more understandable.
Yes, C may have its quirky behaviour that people gnash their teeth over. But ultimately, it's not that bad.
If you want to use a better C, use C++. C++ is perfectly fine for using with microcontrollers, for example. Now get back to work!
When you break things regularly, you're forcing a choice on every individual package in the ecosystem: move forward, and leave the old users behind, or stay behind, and risk that the rest of the ecosystem moves forward without you. Now you've got a whole ecosystem in a prisoner's dilemma. For an individual, maybe you can make a choice and dig in and make your way along without too much trouble. But the ecosystem as a whole can't, the ecosystem fractures, and if it doesn't converge on the latest version, it slowly withers and dies.
I still think what drives languages to continuously make changes is the focus on developer UX, or at least the intent to make it better. So, PLs with more developers will always keep evolving.
JangaFX stuff is written in Odin and has some pretty big users.
Andrew’s design decisions in the language have always been impeccable. I’ve never seen him put a foot wrong and would have made the same change myself.
This is also not new to us; Andrew spoke about it at Systems Distributed ‘25.
Also, TigerBeetle owns its own IO stack in any event, and we’ve always been careful to use stable language features.
But regardless, it’s in our nature to “do the right thing”, even if that means a bit of change. We call this “Edge” and explicitly hire for people who have the same characteristic, the craftspeople who know how to spot great technical quality, regardless of how young (or old!) a project may be.
Finally, I’ve been in Zig since 2018. I wouldn’t exactly call it “shiny new”. Zig already has the highest quality toolchain and std lib of anything I would use.
Interesting. I like Zig. I dabble periodically. I’m hoping that maturity and our next generation ag tech device in a few years might intersect.
Throwing another colored-function debacle into a language, replete with yet another round of familiar-but-slightly-differently-defined keywords, would be a big turn-off for me. I don’t even know if Grand Central Dispatch counts, but it (and of course Elixir/Erlang) are the only two “on beyond closures/callbacks” async systems I’ve found that worked well.
Huh, it was the 0.14 version number for me.
I also have to disagree with C++ for microcontroller / bare-metal programming. You don't get the standard library, so you're missing out on most features that make C++ worthwhile over C. Sure, you get namespaces, constexpr, and templates, but without any standard types you'll have to build a lot on your own just to get started.
I recently switched to Rust for a bare-metal project, and while it's not perfect, I get a lot more "high level" features than with C or C++.
Interesting, who designed the old Zig IO stack which alas Andrew needed to replace?
Why is that? Sure, allocating containers and other exception-throwing facilities are a no-go, but the stdlib still contains a lot of useful and usable stuff like <type_traits>, <utility>, <source_location>, <bit>, <optional>, <coroutine> [1] and so on
[1] yes, they allocate, but operator new can easily be overridden for the promise class and can get the coro function arguments forwarded to it. For example, if the coro function takes a "Foo &foo", you can have operator new return foo.m_Buffer (and -fno-exceptions gets rid of unwinding code gen)
Vendors at this point seem to provide their own implementations of some of the std library components, but the ones I've seen were lacking in terms of features.
"Software is just like lasagna. It has many layers, and it tastes best after you let it sit for a while".
I still follow this principle years down the line and avoid introducing shiny new things on my projects.
Every day, more and more people started using that bridge.
In 2025, I've rebuilt the bridge twice as big to accommodate the demand of a growing community.
It's great and the people love it!
let him cook
This distinction makes it really comfortable to use.
Though one caveat about no_std is that you'll need some support library like https://docs.rs/cortex-m-rt/latest/cortex_m_rt/
My couple of days' experience with Zig was very lackluster with the std lib; not that it is bad, but it feels like it's lacking a lot of bare essentials. To be expected for a new pre-1.0 language, of course.
And in the end, things do improve significantly.
In this case, I think the new IO stuff is incredible.
Wait till the SD25 talk on this comes out, to first understand the rationale a bit better!
I think you'll enjoy Andrew's talk on this too when it comes out in the next few weeks.
The velocity of Zig has been valuable for us. Being able to put things like io_uring or @prefetch in the std lib or language, and having them merged quickly. Zig has been so solid, even with all the fuzzing we do. It's really held up, and upgrades across versions have not been much work, only a pleasure.
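(Aside: @prefetch really is a language builtin, not a TigerBeetle extension. A minimal illustrative sketch of my own, not TigerBeetle code:)

    // Hint the CPU to pull a cache line ahead of use; purely advisory.
    fn sum(xs: []const u64) u64 {
        var total: u64 = 0;
        for (xs, 0..) |x, i| {
            if (i + 8 < xs.len)
                @prefetch(&xs[i + 8], .{ .rw = .read, .locality = 3, .cache = .data });
            total += x;
        }
        return total;
    }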
The point was that if he did the old design, which needed improving enough to justify breaking the language backwards compatibility, then why say his decisions are impeccable? Pobody's nerfect.
If you want stability, stick to stuff that has stability guarantees, but at the very least let them make breaking changes during development.
Again, we use Zig, and this change is welcome for us.
We also like that Zig is able to break backwards compatibility, and are fully signed up for that.
The crucial thing for TigerBeetle is that Zig as a language will make the right calls looking to the next few decades, rather than ossify for fear of people who don't use it.
In C the freestanding environment doesn't provide any concrete features: you don't get any functions at all. You get a bunch of useful constants, such as the value of pi or the maximum value that will fit in an unsigned integer, some typedefs, and that's about it. Concrete stuff from the "C standard library" is not available; for example, it does not provide any sort of in-place sort algorithm, or a way to compare whether two things are the same (if they fit in a primitive you can use the equality operator).
In C++ there are concrete functions provided by the language standard in freestanding mode. These, together with definitions for types etc., form the freestanding version of the "standard library" in C++. There was a long period where this was basically untended; it wasn't removed, but it also wasn't tracking new features or feedback. In the last few C++ versions that improved, but even if you have a new enough compiler and it's fully compliant (most are not), there's still not always a rhyme or reason to what is or is not available.
In Rust it's really easy. You always have core, if you've got a heap allocator of some sort you can have alloc, and if there's a whole operating system it provides std.
In most cases a whole type lives entirely in one of those modules; Duration, for example, lives in core. Maybe your $5 device has no idea which year this is, let alone the day, but it definitely knows 60 seconds is a minute.
But in some cases modules extend a type. For example, arrays exist in core, of course: an array of sixty Doodads, where Doodads claim to be Totally Ordered, can just be unstably sorted; that works. But what if we want a stable sort, so that if two equal Doodads were arranged A, B they are not reversed to B, A? Rust's core module doesn't provide one: the stable sort on offer uses an allocation, so the entire function you need just doesn't exist unless you've got allocators.
Also, "Zig the language" is currently better designed than "Zig the stdlib", so breaking changes will actually be needed in the future at least in the stdlib because getting it right the first time is very unlikely, and I don't like to be stuck with bad initial design decisions which then can't be fixed for decades (again, as a perfect example of how not to do it, see C++)
If your microcontroller project is, say, <5000 lines, maybe... but an OS, or a Mellanox verbs or DPDK API, won't fall so easily to such surface-level thinking.
Maybe Zig could help itself by providing, via LLVM, what Google sometimes does for large API-breaking changes: an LLVM-based tool that searches out old API invocations and updates them to the new ones, so upgrading is faster and more operationally effective.
Google's tools do this and give the dev a source-code PR candidate. That's how they can change zillions of calls with confidence.
At some point people just want their code to work so they go back to something that just works and won't break in a few years.
I hope that the Zig team invests more into helping with migration than they have in the past. My experience for past breaking changes is that downstream developers got left in the cold without clear guidance about how to fix breaking changes.
In Zig 0.12.0 (released just a year ago), there were a lot of breaking changes to the build system that the release notes didn't explain at all. To see what I mean, look at the changes I had to make[0] in a Zig 0.11.0 project and then search the release notes[1] for guidance on those changes. Most of the breaking changes aren't even mentioned, much less explained how to migrate from 0.11.0 to 0.12.0.
>Some of you may die, but that is a sacrifice I am willing to make.
>-Lord Farquaad
[0] https://github.com/mtlynch/zenith/pull/90/files#diff-f87bb35...
> This is roughly analogous to Rust's nostd.
"freestanding" is actually worse that this. It means that the compiler can't even assume things about memcpy and optimize it out (as on gcc it implies -fno-builtin), which pessimizes a lot of idiomatic code (eg. serialization).
The "-nostdlib" option is usually what one wants in many cases (don't link against libc but still provide standard C and C++ headers), such as when compiling privileged code with -mgeneral-regs only and such. This way you can benefit from <chrono>, etc.
If you are writing userland code you should be using a toolchain for this, instead of relying on freestanding/-nostdlib, which are geared towards kernel code and towards working around defective toolchains.
    const pick_a_global_io = ...;

    fn needs_io(io: IO) void { ... }

    fn doesnt_take_io() void {
        needs_io(pick_a_global_io);
    }
easy peasy, you've resolved the coloring boundary. Now, if you want to be a library writer, yeah, you have to color your functions if you don't want to be an asshole, but for the 95% use case this is not function coloring.
There is also devkitPPC, shipping with the same toolchain (and which additionally has some Obj-C support iirc).
Custom patches to newlib and friends (https://github.com/devkitPro/buildscripts/) introduce hooks and WEAK functions that allow standard library functions to be implemented on almost any platform, on a per-platform-library basis or even on a per-program basis (with some restrictions on lock sizes).
a few things have been removed, too. and async/suspend/nosuspend/await and usingnamespace are headed for the woodchipper.
Building our own types was a rite of passage for C++ programming back in the early 1990s, and for university C++ curricula as well.
In my specific case I was trying to send some DNS messages. I went the route of linking libc and using the POSIX data structures for DNS messages, and struggled quite a bit with how to map the C data structures to my program.
This kind of thing is a big barrier to adoption unfortunately.
A language, especially, should be able to do it. Extreme compatibility is the way to make the mistake that is C.
A breaking change that fixes something is an investment that extends infinitely into the future.
Fear of doing it, as in C, is how you accumulate errors, mistakes, and millions of dollars wasted, because it is also compounding debt.
P.S.: I think languages should be fast to break pre-1.0, and maybe have room to do it every 5-7 years after. Despite the Python debacle (which in fact reflects more on Python than on breaking changes), it should be possible to make sunsetting relatively painless with good care.
I find the `Reader.stream(writer, limit)` and `Reader.streamRemaining(writer)` functions especially elegant for building a push-based data transformation pipeline (like grep or compression/encryption). You just implement a Writer interface for your state machine and dump the output into another Writer, and you don't have to care about how the bytes arrive or how they leave (be it a socket or shared memory or a file) -- you just set the buffer sizes (which you can even set to zero, as I gather!)
`Writer.sendFile()` is also nice, I don't know of any other stream abstraction that provides this primitive in the "generic interface", you usually have to downcast the stream to a "FileStream" and work on the file descriptor directly.
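To make the usage side concrete, here's a minimal sketch of piping one stream into another (names follow my reading of the new std.Io Reader/Writer; treat exact signatures and namespacing as approximate, since the API is still settling):

    const std = @import("std");

    // Copy everything from `in` to `out`, then flush the sink.
    // The same code works whether the ends wrap a file, socket, or memory.
    fn pipe(in: *std.Io.Reader, out: *std.Io.Writer) !void {
        _ = try in.streamRemaining(out); // returns the byte count; ignored here
        try out.flush();
    }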
What bothers me with C/C++ is how difficult it is to cross compile a simple Windows + SDL app from inside WSL without MSVC installed.
I've spent weeks on this.
If Zig saves me from that nightmare, and still lets me use C++ libraries, I will gladly switch over to it.
...you don't even need to port anything in your C/C++ project to Zig, just integrate `zig cc` as C/C++ compiler into your existing build system, or port your build system files to build.zig.
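If you go the build.zig route, the skeleton is small. A sketch against the roughly 0.12-0.14-era build API (which itself shifts between releases; file names and flags here are placeholders):

    const std = @import("std");

    pub fn build(b: *std.Build) void {
        // Cross-compile by passing e.g. -Dtarget=x86_64-windows-gnu
        const target = b.standardTargetOptions(.{});
        const optimize = b.standardOptimizeOption(.{});
        const exe = b.addExecutable(.{
            .name = "app",
            .target = target,
            .optimize = optimize,
        });
        // Compile your existing C/C++ sources with Zig's bundled clang.
        exe.addCSourceFiles(.{
            .files = &.{"src/main.c"},
            .flags = &.{"-std=c11"},
        });
        exe.linkLibC();
        b.installArtifact(exe);
    }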
I like Zig, but I'm waiting for it to become somewhat stable, because the amount of breaking changes feels pretty significant. I suppose that's the price of progress.
I think the only way to follow a new (unstable) language is to join whatever community where the conversation happens; otherwise, what you think you know about the language will become outdated pretty quickly.
I maintain the zigler library, and one thing that was useful about the old async "colored-but-not-really" functions was that they implicitly tolerate having internal suspend points (detail: https://www.youtube.com/watch?v=lDfjdGva3NE&t=1819s) -- I'm not sure if having IO be a passed parameter will still let me do that? Can users build their own functions with yield points? And will you be able to jump out of the frame of a function and give control back to the executor, to let it resume later?
As you're aware, that feature of the language ("stackless coroutines", "generators", "rewriting function logic into a state machine") was regressed. At first, this new IO interface won't have that capability.
However, as a followup issue, I'd like to reintroduce it, potentially in a more low-level manner, for use inside IO implementations. Combined with restricted function pointers, this will allow functions that can suspend to pass through runtime-known function pointer boundaries - something that was terribly clunky before, to the point that it compromised the entire design. This means that, again, the same IO interface usage code will be reusable, including when the implementation uses suspend points, and the automatic calling convention rewriting will be able to propagate through the interface into the usage code.
The issue to track is: https://github.com/ziglang/zig/issues/23446
I'll add that I'm still keen on the previous suspend/resume keywords and semantics as a solution to this issue.
Here is the commit where Reader/Writer was introduced: https://github.com/ziglang/zig/commit/5e212db29cf9e2c06aba36...
This is a few months after `git init`. You can see I was really just working on the parser, with a toy example to get things started.
Over time, I merged contributions that made minor changes and shuffled things around, and these APIs evolved to kind of work okay. But nobody really considered "the Zig IO stack" as a whole and put in design effort. That is happening for the first time right now.
This is how programming languages are constructed. Things evolve slowly over time, and periodically you have to reevaluate things and do major reworkings.
As an aside, do you think in the near future there will be a "guide to building a compiler backend" either in-project or by the community?
Does the new change make it easier to store reader/writer in a struct?
Fast forward a few decades to today and the best solution to cross-compile C/C++ projects is the Zig toolchain (and isn't that kinda weird? A "foreign" toolchain coming along fixing one of the biggest problems in the C/C++ ecosystem just like that as a "side quest"?)
Point being, I feel like a lot of the gripes about Zig changing here and there are from folks who aren't daily users, just people who see a !!breaking change!! announcement and pile on.
Though, I do sympathize with newcomers because the memory of the internet is pinned to various older versions with lots of demo code that 'just doesn't work' and of course that means LLMs too, in the long run. Hopefully zig doesn't get stuck past the global knowledge/popularity LLM-cycle cutoff. I don't think it will.
Let's take this change as an example. If I already wrote a program that used the old apis and meets my needs what is the benefit of this change for me? Now I have to go back and rewrite my old code and I might introduce a new bug in the migration, especially if I don't understand all of the nuance in the difference between the apis. Even if I agree that the new apis are better, the cost of migration might outweigh the benefits, but I am forced in to the migration or forking the compiler, which both might be bad choices for me.
It is not necessary to do this. They could, for example, have a versioned stdlib, and then maybe all I need to do is pin my stdlib version. One complaint is that having multiple standard libraries causes more maintenance burden, but why should that be the primary concern? What is the cost to the maintainer vs. the cost to the community? If you make 1000 users spend an hour migrating their code, are you really going to save 1000 hours of maintenance burden?
Moreover, if the zig team wrote code with the assumption that they can never get rid of it, perhaps they wouldn't make so many design mistakes that all of these breaking changes become inevitable.
If I wrote a program in zig, I would feel obligated to also learn how to bootstrap it so that I wouldn't be subject to unwanted breaking changes. But then I see that bootstrapping involves some bizarre wasm hack instead of the tried and true bootstrapping approach of writing a simple first pass compiler in c (or some other language) and I am reminded again why I would never choose this language.
are you sure?
...I think the only thing that's aggressively marketed is that the Zig team isn't afraid of big and controversial breaking changes ;) If you can't handle that, then Zig currently isn't right for you, it's as simple as that.
> They could, for example, have versioned stdlib and then maybe all I need to is pin my stdlib version.
That really only makes sense for after 1.0, and even after that only for stdlib APIs that are out of the experimental phase.
But post 1.x some sort of migration support for breaking changes would indeed be much more useful than trying to prevent breaking changes at all cost.
> and I am reminded again why I would never choose this language.
...then why even write such a lengthy comment? Just ignore Zig and move on... there are plenty of other languages which might better fit your taste.
Looking over the changes, they seem wise and well justified. Fixing my old codebases will be annoying, but I don't mind the annoyance if a better language comes out the other end.
The hack allows the compiler to be maintained in Zig, compiled to WASM (a supported backend), and then bootstrapped with only a wasm interpreter… and one is provided in a single C file (I believe… but haven’t looked in a while).
This is a much nicer situation than most other bootstrap scenarios. All the SMLs, for instance, require you to have a whole other SML! MoscowML bootstraps with an included Caml interpreter… but it’s not sufficient to compile MLton.
Even better if it can be done with a deterministic codemod, but a prompt is easier to write.
When someone creates something others want, it'll inevitably become popular.
If your app is stable, could you not keep using the version you're happy with?
We build for Linux and Windows on Linux using gcc/mingw and don't have any fundamental issues doing so. On macOS we need the headers & libraries for macOS, we have to do those inside a VM.
I'd be extremely surprised if you can cross-compile Zig for macOS on a non-macOS platform, unless it doesn't use any macOS native frameworks at any level.
From all that I've experienced in the past few weeks dealing with C projects and various build systems and operating systems, I suspect that using Zig would work perfectly as an easy cross-platform alternative to CMake. Until I open up my code in VS Code and the C/C++ plugin just doesn't work, no auto-completion, no go-to-definition, syntax highlighting is broken, etc., and all because it can't find the files in places it ordinarily expects them to be. And maybe there will be some hacky way to fix it with a setting for the VS Code plugin, but likely not.
I'm not saying this is the case, but literally none of the setups I tried feels non-hacky so far, and every one of them has at least one noticable problem during development. I truly miss the days of writing apps for a single platform using its own native build tools. Maybe that's what I'll do: write this as a native Windows app using Visual Studio (ugh, such an awful editor though) and then if I get sales, port it to Mac OS X 10 using Xcode.app, and compile it for Linux inside WSL with GCC 15.
</rant>
In contrast, I tried to learn CMake after. Despite my gripes about the CMake language itself, I found it relatively straightforward to do everything I wanted. Docs, backwards-compatibility, and LLMs made it all easy to set up. I have a hybrid C++/Rust project that compiles to desktop/WASM with debug/release builds.
When the build system for Zig stabilizes I'm sure things will be better, but the breaking changes are rough based on my recent experience.
Zig.day looks like a wonderful way to do laid back hackathons. Looking forward to one coming up.
I think this should be a warning on debug builds and an error on release builds, but it's a relatively minor thing and not a deal breaker by any means.
If this is the worst thing that people would like to see revisited, Zig must be doing amazingly well.
In fact, it exists in a world which contains "an ecosystem", tooling, opinionated build systems, various incompatible compilers, and mountains of legacy baggage, and all of these influence the daily experience of programmers using the language.
(That link seems to show the "unused local variable" error line twice for me; that's some kind of bug with this zigbin service and does not reproduce when running the Zig compiler normally.)
It totally breaks my normal workflow. I don’t use zig at all because of this misfeature. Warn in debug and error on release builds would be strange but fine.
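For anyone who hasn't hit it, the behavior under discussion in miniature:

    fn demo() void {
        const x: u32 = 42; // by itself: "error: unused local constant"
        _ = x; // the explicit discard you sprinkle around while iterating
    }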
- ability to define anonymous functions without having to put them inside an anonymous struct (see the sketch after this list for the current workaround). I get the argument against closures (even if I don't fully agree with it), but not having first-class support for anonymous functions feels pretty regressive for a modern language
- have a way to include payload data with errors. Or at the very least, define an idiomatic pattern for handling cases where you have additional data for an error
- allow struct fields to be private
- bring back async support in some form (this one I do have some hope for)
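On the first point, the workaround in today's Zig is the anonymous-struct dance, which works but reads as ceremony:

    // An "anonymous function": a named fn smuggled out of an anonymous struct.
    const add_one = struct {
        fn call(x: i32) i32 {
            return x + 1;
        }
    }.call;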
I agree that this looks extremely breaking, and I feel like providing no compatibility layer fragments the community, especially when you might have some unmaintained dependency somewhere that will never be updated.
There were similar situations in Rust, when some core libraries decided to alter their interfaces.
But I believe Rust's approach with editions is brilliant and could be adapted to other languages like Zig.