Most active commenters
  • pron(35)
  • littlestymaar(17)
  • logicchains(8)
  • dnautics(6)
  • cycloptic(6)
  • cmrdporcupine(6)
  • (6)
  • notacoward(5)
  • int_19h(5)
  • mwkaufma(5)

200 points jorangreef | 209 comments
1. pron ◴[] No.24292760[source]
I think that Zig's simplicity hides how revolutionary it is, both in design and in potential. It reminded me of my impression of Scheme when I first learned it over twenty years ago. You can learn the language in a day, but it takes a while to realize how exceptionally powerful it is. But it's not just its radical design that's interesting from an academic perspective; I also think that its practical goals align with mine. My primary programming language these days is C++, and Zig is the first low-level language that attempts to address all of the three main problems I see with it: language complexity, compilation speed, and safety.

In particular, it has two truly remarkable features that no other well-known low-level language -- C, C++, Ada, or Rust -- has or can ever have: lack of macros and lack of generics (and the associated concepts/typeclasses) [1]. These are very important features because they have a big impact on language complexity. Despite these features, Zig can do virtually everything those languages do with macros [2] and/or generics (including concepts/typeclasses), and with the same level of compile-time type safety and performance: their uses become natural applications of Zig's "superfeature" -- comptime.

Other languages -- like Nim, D, C++, and Rust -- also have a feature similar to Zig's comptime or are gradually getting there, but what Zig noticed was that this simple feature makes several other complex and/or potentially harmful features redundant. Antoine de Saint-Exupéry said that "perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." I think that Zig, like Scheme -- and yes, there are others -- is close to that minimalist vision of perfection.

What a truly inspiring language. Rather than asking how we could make C++'s general philosophy work better, as another increasingly famous language does, IMO it asks how we could reshape low-level programming in a way that's a more radical break with the past. I think that's a better question to ask. Now all that's left to hope for is that Zig gets to 1.0 and gains some traction. I, for one, would love to find a suitable alternative to C++, and I believe Zig is the first language that could achieve that in a way that suits my particular taste.

[1]: I guess C has the second feature, but it loses both expressivity and performance because of it.

[2]: Without the less desirable things people can do with macros.

replies(5): >>24293479 #>>24293660 #>>24294000 #>>24294005 #>>24312605 #
2. Kednicma ◴[] No.24292785[source]
I hope that we're not stuck writing piles of low-level code for all eternity. We don't need more than a few pages of each low-level language, and while I do really like Zig's qualities compared to C, I'd still like to minimize the amount of Zig or C total that has to be written.

I think that our community's equivalent of "where's my flying car?" is "where's my higher-level language?"

replies(6): >>24292855 #>>24293009 #>>24293085 #>>24293278 #>>24293542 #>>24297763 #
3. rvz ◴[] No.24292832[source]
At least its C FFI bindings story is much more pleasant than Rust's complicated swiss army knife of toggles and flags in bindgen.

The Rustaceans should be taking notes on this.

replies(1): >>24293030 #
4. enriquto ◴[] No.24292855[source]
> I think that our community's equivalent of "where's my flying car?" is "where's my higher-level language?"

Helicopters are flying cars and they are everywhere for you to use. But some people prefer to use a bicycle to commute to work rather than a helicopter. I'd even say that most people would rather take a bicycle every day than a helicopter.

The same thing with lower level languages. Sometimes you do not want to be burdened by the limitations of a "high-level" language.

replies(1): >>24293021 #
5. notacoward ◴[] No.24293009[source]
I think that's a valid point that needs to be made in this conversation, and it's super-sad that somebody downvoted you for it.

I've spent much of my career writing low-level code in low-level languages because I had to, usually in C and usually because I was in resource-constrained environments where tight control over CPU and memory footprints was necessary. There's absolutely room for languages that improve programmers' lives in that kind of environment while remaining suitable to that purpose. I'd put Zig in that category, and I find much to admire in it.

However, outside that domain, once you have even a little freedom from those constraints, it makes no sense to use a language designed around them. When even something as simple as manipulating a few strings or updating objects in a map/hash/dictionary requires careful attention to avoid memory leaks or excessive copying, and your code is doing those things a lot, you're using the wrong language. A language that "protects" you by guiding you toward adding the right boilerplate in the right places honestly isn't much of a help. Most code should be written in a truly higher level language, where things like circular references don't require much discussion except by the language implementors. The problem of how to do that without going full-GC and having to deal with pauses is where people should focus their attention, not more languages that just change which ceremony you must adhere to.

replies(1): >>24293198 #
6. notacoward ◴[] No.24293021{3}[source]
> Sometimes you do not want to be burdened by the limitations of a "high-level" language.

I see very few people suffering from such burdens, but a great many suffering from its exact opposite: using a low- or mid-level language to write hundreds of lines where ten lines in a higher-level language would suffice and be more easily verified as correct.

replies(1): >>24294036 #
7. kreco ◴[] No.24293030[source]
Yeah, that's actually how any C FFI bindings should be.

And also, that's the strength of C and its simple ABI.

8. logicchains ◴[] No.24293046[source]
I work in HFT, and one of the key concerns when writing low-latency code is "is this code allocating memory, and if so, how can I stop it?" Zig is the perfect language for this use case, as none of the standard library implicitly allocates; rather, for anything that allocates, the caller must pass in an allocator. The stdlib also provides a handy arena allocator, which is often the best choice.

This is a huge advantage over C++ and Rust, because it makes it much harder for e.g. the intern to write code that repeatedly creates a vector or dynamically allocated string in a loop. Or to use something like std::unordered_map or std::deque that allocates wantonly.

replies(8): >>24293328 #>>24293382 #>>24293469 #>>24293919 #>>24293952 #>>24294403 #>>24294507 #>>24298257 #
9. pron ◴[] No.24293085[source]
Who's the "we" who are "stuck"? The vast majority of programmers don't use low-level languages for writing applications even today, but there is a big niche of domains where close to perfect control is needed and that's the domain low level languages like C, C++, Ada, Rust and Zig try to address. I wouldn't (and don't) write "ordinary" applications in those languages, but I don't think the domains they target will ever go away or become less important.
replies(2): >>24293221 #>>24293304 #
10. logicchains ◴[] No.24293198{3}[source]
I'd argue that the constraints of writing lower-level code can actually lead to producing better code. For instance, the idiomatic approach in C, Rust, and Zig is to pass a buffer into a function, which then fills it, and the caller handles the output. This leads to having more pure, side-effect-free functions, compared to the approach some higher-level languages take of having the function allocate some stuff in its body, process it, maybe do some IO, and pass it to another function. They encourage "imperative shell, functional core", because of how difficult it is to manage memory/ownership if every random function is allocating promiscuously.

Low-level languages encourage the use of clean ownership patterns, which ultimately leads to cleaner design.

replies(1): >>24293295 #
11. notacoward ◴[] No.24293221{3}[source]
> The vast majority of programmers don't use low-level languages for writing applications

I work in a multi-million-line codebase, a significant majority of which is very far from that "need perfect control" domain but is written in what I'd call a mid-level language - a high-abstraction dialect of C++. So I'd say GP is correct, that too many people are stuck writing code in the wrong language for the task at hand. The need for languages like Zig to improve the lower-level experience (which, as you say, is not going away) and the need for higher-level languages for more common types of programs are not in conflict. They're complementary. It's the mid-level languages that need to DIAF, because they're not really suited for either, and pretending to be more general than they are only encourages people to make choices that hurt them.

replies(1): >>24293276 #
12. andi999 ◴[] No.24293266[source]
What is the best way to start with Zig if you know C by heart? I had a look a while ago, but I feel I'm missing how the concepts are meant to be used in practice.
replies(2): >>24293497 #>>24294268 #
13. pron ◴[] No.24293276{4}[source]
I totally agree, and I think that C++ is exactly this "wrong kind of language," and Rust follows in its footsteps, but I'm sure others disagree. For example, for decades Microsoft has shown an attraction to this kind of language (they love C++, C# is going down that path, and they're showing interest in Rust), so it might ultimately be a matter of taste -- a personal aesthetic preference -- unless somebody is ever able to make some empirical observations that show an objective benefit to one approach over another.
replies(1): >>24359314 #
14. flohofwoe ◴[] No.24293278[source]
Counterpoint: As long as CPU architectures don't radically change, high-level languages which are too far removed from how CPUs and memory currently work don't make a lot of sense as long as performance (or rather energy efficiency) is a concern. And this should be a much bigger concern than the current status quo. The "free lunch" is long over, we cannot rely on the hardware designers anymore to fix our inefficient high-level code.
15. tobz1000 ◴[] No.24293284[source]
Some of Zig's ideas fascinate me, both the great low-level concepts (e.g. arbitrary-sized ints) and, much more than that, the high-level concepts.

Particularly great is Zig's handling of both macros and generic types, the answer to both of which seems to be: just evaluate them at compile-time with regular functions, no special DSL or extra syntax. Andrew mentions in the video a big drawback of this system - implications for IDE complexity and performance. I imagine the performance side of this could be (maybe is?) mitigated by limiting recursion depth/loop counts for compile-time work.

I'm not particularly interested in taking on a language with manual memory management and the responsibilities it entails, but I would love to have access to Zig's compile-time capabilities, if it were available with some more memory safety.

replies(2): >>24293329 #>>24294235 #
16. notacoward ◴[] No.24293295{4}[source]
I have a lot of sympathy for that position. The thing that drives me nuts about the codebase I work in now is that it has idioms that involve "promiscuous" memory allocation and then more idioms to fix/cover it up. I'm fine with languages that require completely manual memory allocation (spent most of my career in C). I'm also fine with languages that totally take that burden away. There are tradeoffs either way, but I've seen great code in both of those styles. What I'm not fine with is languages that solve half of the problem and force programmers to learn the brain-dead rules of that half-solution to do their part. STL-heavy C++17 and beyond is, of course, the salient example. I've seen "clever" code in that style, but never great code.
replies(1): >>24293326 #
17. pjmlp ◴[] No.24293304{3}[source]
For me the way to go is languages that go all the way: C++, Ada, Object Pascal; even .NET and Java could fit in, if the low-level story ever gets straightened out. Currently, AOT compilation like .NET Native seems on the right path to achieve it.

Burroughs, Mesa, Modula, Oberon, Interlisp-D, ... were on the right path, but in technology the best ideas don't always win.

18. logicchains ◴[] No.24293326{5}[source]
>STL-heavy C++17 and beyond is, of course, the salient example. I've seen "clever" code in that style, but never great code.

Maybe the authors of that code just had different priorities, such as maximising compilation time.

replies(1): >>24293357 #
19. AsyncAwait ◴[] No.24293328[source]
> This is a huge advantage over C++ and Rust, because it makes it much harder for e.g. the intern to write code that repeatedly creates a vector or dynamically allocated string in a loop. Or to use something like std::unordered_map or std::deque that allocates wantonly.

True. On the other hand, Zig makes a deliberate decision not to bother itself with memory safety too much, so it's a win-some, lose-some sort of situation.

replies(1): >>24293466 #
20. pron ◴[] No.24293329[source]
Zig gives you memory safety (or, rather, will ultimately do that), but it does so in a way that's different from both languages with garbage collection (whether tracing or reference-counting) and languages with sound type-system guarantees a-la Rust. It does so with runtime checks that are turned on in development and testing and turned off -- either globally or per code unit -- in production. You lose soundness, but we don't have sound guarantees for functional correctness, anyway, and given that Zig makes testing very easy, it's unclear whether either approach dominates the other in terms of correctness.
replies(6): >>24293512 #>>24293563 #>>24293661 #>>24296835 #>>24298380 #>>24299940 #
21. notacoward ◴[] No.24293357{6}[source]
Hah! Yeah, that does seem sometimes like it must have been a deliberate goal, doesn't it? Got a good laugh out of that. The number of cycles wasted by C++ compilers fumbling toward solutions that would have been obvious in a better language (thousands of server-years per day where I work) is a legitimate ecological concern.
replies(1): >>24296893 #
22. petr_tik ◴[] No.24293382[source]
How often does it happen that your interns work on the hot path of your trading systems, which is where I assume you care the most about avoiding syscalls like malloc?
replies(3): >>24293588 #>>24293861 #>>24293926 #
23. lokl ◴[] No.24293396[source]
Zig is appealing to me, but I wonder whether time spent mastering Zig would be better spent mastering C.
replies(4): >>24293635 #>>24293667 #>>24294737 #>>24296869 #
24. pron ◴[] No.24293466{3}[source]
> On the other hand, Zig makes a deliberate decision not to bother itself with memory safety too much

This is not true. Zig places a strong emphasis on memory safety, it just does so in a way that's very different from either Java's or Rust's. I wrote more about this here: https://news.ycombinator.com/item?id=24293329

replies(2): >>24293966 #>>24295336 #
25. voldacar ◴[] No.24293469[source]
Yeah when I heard about this I instantly thought of game engines, but it makes total sense for HFT too. "Modern C++", with all its constant little mallocs and frees is so awful for anything that requires ultra low latency
replies(3): >>24293652 #>>24299417 #>>24300117 #
26. dom96 ◴[] No.24293479[source]
> Other languages -- like Nim, D, C++, and Rust -- also have a feature similar to Zig's comptime or are gradually getting there, but what Zig noticed was that this simple feature makes several other complex and/or potentially harmful features redundant.

I'm curious where this impression of Zig comes from, as this is precisely what Nim has set out to do: a small core extensible via metaprogramming. Are there features that Nim implements which go against this premise? if so, what are they? :)

replies(2): >>24293531 #>>24293611 #
27. kristoff_it ◴[] No.24293497[source]
If you want to be eased into the language, start by checking out https://ziglearn.org. Otherwise just take a look at the overview on the homepage of https://ziglang.org, then the docs.

After that you should already be in great shape and you can read the standard library for examples of useful patterns.

replies(2): >>24295225 #>>24302354 #
28. renaicirc ◴[] No.24293512{3}[source]
> You lose soundness, but we don't have sound guarantees for functional correctness, anyway

This sounds like "we can't guarantee the most important thing, so it's unclear whether it's useful to guarantee this other thing," but that's a bizarre statement, so am I misinterpreting?

replies(1): >>24293642 #
29. ◴[] No.24293531{3}[source]
30. kristoff_it ◴[] No.24293542[source]
There are situations where precise control over what the machine is doing is very important and a few of the problems that we experience in high-level contexts are caused by the inadequacy of our lower level tools.

Give it a few years and everybody is going to benefit from better implemented "lower-level" applications, thanks to Zig and other languages attempting to do the same.

31. TinkersW ◴[] No.24293563{3}[source]
That is the same approach used in many C++ projects
replies(2): >>24293585 #>>24293655 #
32. renaicirc ◴[] No.24293585{4}[source]
Then I guess the obvious question is: has it worked well for those projects?
replies(1): >>24293840 #
33. cycloptic ◴[] No.24293588{3}[source]
Nitpick: on modern systems malloc isn't a syscall; it's implemented in userspace. (Sorry, I couldn't help it)

That's not to say you're safe to call other syscalls, many of them either require memory allocations in-kernel (see ENOMEM) or can block indefinitely.

replies(1): >>24293828 #
34. pron ◴[] No.24293611{3}[source]
> Are there features that Nim implements which go against this premise? if so, what are they? :)

Nim has generics (plus concepts), templates, and macros. Zig has just comptime, through which it achieves all those goals (minus some macro capabilities that it deems harmful anyway) with just one, very simple, cohesive construct. You could argue about whether you like this or not, but you can't argue that Zig's approach isn't fundamentally more minimal. Zig is a language you can reasonably fully learn in one day; I don't think you could say the same about Nim.

---

Also, note another remarkable feature. One could define a language called Zig' with the following properties:

1. Every well-formed Zig program is a well-formed Zig' program (i.e. Zig' accepts all Zig programs, potentially more).

2. Every Zig program has the same semantics as the identical program when interpreted in Zig'.

This means that to analyse the semantics of a Zig program you can pretend it's a Zig' program; in fact, you don't need to create an interpreter for Zig', you can just pretend it exists. Why would you want to do that? Because Zig' is simpler. How? Here's the kicker: Zig' ignores comptime completely. It is a very simple, optionally-typed, dynamic language with reflection.

In other words, to analyse the semantics of a Zig program you can forget about comptime and pretend it can do everything at runtime (and treat comptime as a pure semantics-preserving optimisation).

This is not true for languages with macros, as they are not "erasable".

replies(1): >>24294929 #
35. flohofwoe ◴[] No.24293635[source]
Why not both? Zig and C are both very simple languages, and there's not much to "master" TBH (at least not many language-specific things, so what you learn mostly transfers to other programming languages as well).
replies(1): >>24293690 #
36. pron ◴[] No.24293642{4}[source]
It means that there's a complex tradeoff between making sound guarantees and providing correctness in other ways, a tradeoff that all languages make anyway, each finding its own preferred sweet spot, and that we don't know if, say, Rust's sweet spot yields better correctness than Zig's.
37. cycloptic ◴[] No.24293652{3}[source]
Can you explain how this is a problem in modern C++? I was under the impression that all the STL containers (string, vector, list, map, etc.) worked the same and have an allocator parameter. Are there other areas where these are missing? Or is the issue that STL implementations almost always default to an allocator that uses malloc? I'm not trying to dog on Zig here (it's a nice little language) but this just doesn't seem to be something that only Zig can do.
replies(3): >>24294240 #>>24294749 #>>24299554 #
38. pron ◴[] No.24293655{4}[source]
But to do that you can only use a subset of C++ (e.g. you can't use arrays or pointer arithmetic). This works for all of Zig, except for some very specific, clearly marked, "unsafe" operations.
replies(1): >>24293757 #
39. kristoff_it ◴[] No.24293657[source]
Just as a reminder, the Zig Software Foundation (501c3 non-profit) is looking for donations in order to be able to pay developers working on the compiler.

https://github.com/sponsors/ziglang

40. renaicirc ◴[] No.24293660[source]
Do you have any example code? It's plain to see that Zig's comptime is powerful enough for typeclasses, but it's not at all obvious that it'd be as ergonomic as Haskell's typeclasses.
replies(1): >>24293734 #
41. azakai ◴[] No.24293661{3}[source]
My understanding is that Zig goes further than that. In particular, it just added a safe allocator suitable for production:

https://github.com/ziglang/zig/pull/5998

edit: For more details, see

https://ziglang.org/#Performance-and-Safety-Choose-Two

42. cmrdporcupine ◴[] No.24293667[source]
Realistically a $$ career doing embedded or systems-level work will require excellent C and C++, and Zig (or Rust) would just be icing on top if you could find an employer willing to pay you to work in it.

The good thing is that mastering one of these languages gives you conceptual tools which help with becoming at least competent in the others, if not mastering them as well.

43. cmrdporcupine ◴[] No.24293690{3}[source]
To 'master' C is to realize that C itself is not as simple as its syntax makes it look. It's an old language and the implementations are by no means straightforward. I'm by no means a C master, but I have worked with people who are, and they know nuances of the language and of the way it compiles down to various platforms in ways that shame me.

But in general I have gone for generalist, not specialist, in my career.

replies(1): >>24293949 #
44. pron ◴[] No.24293734{3}[source]
I don't have any particular examples at hand, but the question you're asking is tough to answer because the languages that might be as ergonomic as Haskell in that regard and are also low-level are significantly more complex than either Haskell or Zig, so I don't think we have a good point of comparison (I think Zig is revolutionary). There is definitely a price to pay for being a high-control/low-level language, and it certainly requires that you spend some of your "complexity budget" on things that high-level languages like Haskell or Java don't have to. But I think Zig shows that you can be both low-level and reasonably "expressive" without being so much more complex than most high-level languages.
45. TinkersW ◴[] No.24293757{5}[source]
It works with arrays if you stick with std::array & std::span like constructs.

It also works with iterators-generally by sticking some extra data in the iterator in dev builds, so it can check for out of bounds access.

If I do have some code that uses C pointers + size, I'll insert some dev build assertions.

replies(1): >>24293820 #
46. pron ◴[] No.24293820{6}[source]
Sure, and then when you enforce that you address the third most bothersome thing for me in C++, leaving you only with the top two (for me): a complex language and slow compilation.
47. fanf2 ◴[] No.24293828{4}[source]
Never mind modern systems, malloc() was never a syscall :-) One of the great things about K&R is that it shows you how to implement parts of the C library, including a simple malloc(), demonstrating that the library does not need to be magical.
replies(1): >>24294936 #
48. TinkersW ◴[] No.24293840{5}[source]
It obviously isn't as safe as Rust, but I think it works well enough for something like gamedev (where absolute safety isn't required).

For memory related issues I find it sufficient.

One aspect where it is probably not as good as Rust is for threading related issues, as it relies on inserting runtime checks which may or may not trigger depending on the number of threads attempting access.

49. logicchains ◴[] No.24293861{3}[source]
Literal interns are not very likely to work on it, but juniors might, and the junior's probably not going to know much more than an intern.
50. littlestymaar ◴[] No.24293919[source]
[deleted]
replies(2): >>24294076 #>>24294162 #
51. dcolkitt ◴[] No.24293926{3}[source]
To be honest, I'd be a lot more worried about physics PhDs than I would interns. I've seen plenty of 20-year-old engineering students write solid low-latency code. I can't say the same thing about string theorists.

It'd be pretty unusual for junior or non-technical people to write code in "core" components of the system. Things like datafeed parsers, order handlers, inventory management, safety checks, networking libraries, exchange connections, etc.

But even with all these layers in place, you still need an actual strategy to run at the end of the day. Everything in the quoter can be optimized to hell, but if the strategy module is spinning for 1000+ microseconds because it's running some bloated ML model, then none of that really matters.

Usually the systems engineers and the strategists are different people. Not always, especially in the case of more straightforward mechanical strategies. But anything reasonably complex usually requires dedicated quants with skillsets different from profiling C code.

replies(2): >>24294998 #>>24300368 #
52. flohofwoe ◴[] No.24293949{4}[source]
Well yes, but in the end all languages have this sort of fractal nature.

But there is diminishing value in how deep down the rabbit hole you want to go. Of course there's always more to learn, but with C you fairly quickly leave the language and move into layers of compiler and hardware trivia (good to know nonetheless, but often not really relevant for being productive in C), whereas in other higher-level languages you're still working your way through the standard library ;)

replies(1): >>24294596 #
53. vmchale ◴[] No.24293952[source]
Linear types would be even better though. Still safe like Rust.
replies(2): >>24296793 #>>24297019 #
54. littlestymaar ◴[] No.24293966{4}[source]
> It does so with runtime checks that are turned on in development and testing and turned off -- either globally or per code unit -- in production.

This isn't “memory safety”. By this reasoning you could say “C is memory safe if you use ASAN during debug”: it is exactly equivalent, except that Zig's checks are less powerful than the full suite of existing sanitizers for C, though they're enabled by default in debug mode, which is nice.

replies(1): >>24294039 #
55. littlestymaar ◴[] No.24294000[source]
> What a truly inspiring language

It's indeed an inspiring language, and rust is taking inspiration from it already: https://github.com/jswrenn/project-safe-transmute/blob/rfc/r...

> lack of generics

I can't wait for Zig 2 to come along and eventually add generics…

replies(5): >>24294122 #>>24294488 #>>24294589 #>>24294667 #>>24296780 #
56. bigbizisverywyz ◴[] No.24294005[source]
> ...lack of macros and lack of generics ...

To be fair, you can do absolutely everything you want to do in C++ without using either of these features at all, it just takes a bit of discipline.

And if you see the flak that Go gets for not including generics, then I'm not so sure that that is a great way to get people to adopt your language.

replies(2): >>24294138 #>>24294467 #
57. HourglassFR ◴[] No.24294015[source]
I've stumbled upon the Zig language a while back and have been checking in regularly to follow its progress. Recently I took the time to write a very small program to get a feeling for it. My thoughts :

- It's a very low-level language. Having written mostly Python for the past few years, it is quite the contrast. I had to force myself to think in C to get the train going.

- Getting my head around the error handling took more time than I'm willing to admit. In the end, it's like having exceptions but being more explicit about it. It feels nice when you get the hang of it.

- The documentation of the standard library is severely lacking; to be fair, the language is still very young. More worrisome, it feels very clunky.

- No proper string support. It is sad that a modern language still goes down that route after Python has shown that correcting this is both definitely worthwhile and a world of pain.

- I have the feeling that optional and error union types are a bit redundant, but I have not written enough Zig to have a real intuition on that. Maybe it is just that I understand monads now.

replies(5): >>24294053 #>>24294562 #>>24295081 #>>24298209 #>>24300406 #
58. ikskuh ◴[] No.24294036{4}[source]
> I see very few people suffering from such burdens, but a great many suffering from its exact opposite: using a low- or mid-level language to write hundreds of lines where ten lines in a higher-level language would suffice and be more easily verified as correct.

I see a huge load of people suffering from those burdens. Higher-level languages tend to be less efficient and less optimal. Yes, they take a burden from the programmer, but they move that burden onto the end user.

So the programmer has an easy life, while every user now waits a second longer for the program startup, a second longer for opening the file dialog, and so on. It doesn't sound like much, but think about it: worst case, one programmer shaves two weeks of work off an app that is used by every person on the planet, and 7 billion users each lose 1 second. In total, humanity loses roughly 200 years of productive time if every person starts the app exactly once.

Modern computers are incredibly fast, and we as programmers use that brutal power to be lazier than before instead of passing it on to all the users of our tools. We could have systems that go from clicking the power button to being ready for use in less than a second. Think about this when you choose a high-level language that exchanges programmer convenience for runtime cost, and think about whether other people's time is worth your laziness.

</rant>

disclaimer: don't take it personally, I'm just frustrated about poorly performing software

replies(1): >>24303341 #
59. pron ◴[] No.24294039{5}[source]
No, this is full memory safety enforced through runtime checks. ASAN does not give you that. Zig has only arrays and slices with known sizes and no pointer arithmetic (unless you explicitly appeal to unsafe operations).
replies(2): >>24294204 #>>24294285 #
60. gameswithgo ◴[] No.24294053[source]
Regarding strings, nobody doubts that having good string support built into a high-level language is a good idea, but in a low-level language it causes a lot of problems, especially in a language whose design principle is to avoid surprising allocations. That is necessarily going to make things like concatenating strings more complex, and so on.
61. ◴[] No.24294076{3}[source]
62. dnautics ◴[] No.24294122{3}[source]
Zig already has generics, gp was imprecise. They are not a part of the language, but an emergent feature that is simple enough to implement using comptime.

Edit: clearly gp was not mistaken, just imprecise.

63. dnautics ◴[] No.24294138{3}[source]
Zig has generics, they are just not part of the language (very easy to implement using what zig gives you).
64. logicchains ◴[] No.24294162{3}[source]
I didn't say I actually use it, I said it's the perfect language for that use case. In practice there are many other factors that determine whether a language is adopted, e.g. library availability, compiler maturity, and the fact that it's not simple to integrate a new language into an existing codebase of very many lines of C++.
replies(1): >>24294327 #
65. littlestymaar ◴[] No.24294204{6}[source]
> full memory safety enforced through runtime checks [disabled in production]

Which means you cannot guarantee your program doesn't exhibit memory unsafety unless you stumble upon it during testing. Yes, there are fewer footguns in Zig[1] than in C (which is the opposite of C++), but dangling pointer dereferences, double frees and race conditions will still be lurking in any reasonably sized codebase. Calling it “memory-safe” is dishonest. And I actually don't think it serves Zig: it's a clever language with tons of good ideas, no need to oversell it with misleading claims about memory safety.

[1]: but still plenty of them https://ziglang.org/documentation/master/#Undefined-Behavior

replies(3): >>24294443 #>>24294505 #>>24303987 #
66. dnautics ◴[] No.24294235[source]
It's pretty easy to use Zig as the low level for languages which can take a C FFI. Self-promotion: I wrote zigler which integrates zig as inline code in elixir.
replies(1): >>24300373 #
67. voldacar ◴[] No.24294240{4}[source]
It isn't really a matter of can do/cannot do. It's more about the default patterns promoted by the idiomatic way of writing code. Yeah you could write C++ code that constantly passes around allocators while also using STL heavily, but it will be verbose, unnatural, and ugly.

As well as having nicer syntax in general and real metaprogramming instead of the brain damage that is templates, zig promotes this kind of allocator-aware programming style in a way that's clean and idiomatic.

replies(1): >>24294275 #
68. dnautics ◴[] No.24294268[source]
I was very rusty with c and found learning zig to be a breeze. If you know c by heart you'll do great!
69. cycloptic ◴[] No.24294275{5}[source]
I'm not sure what you mean that it's verbose, unnatural and ugly. To me it looks the same.

In C++, you have to pass around an allocator to your templates. You can typedef this away if you want.

In Zig, you have to pass around an allocator as a function argument or a struct member. You can comptime this away if you want.

Is there some fundamental way that I missed that Zig changes this? If your actual complaint is that C++ templates are bad and you're saying Zig comptime is better, that's different than having woes about allocators.

replies(2): >>24298526 #>>24299764 #
70. rfoo ◴[] No.24294285{6}[source]
I thought that after Intel MPX we could all agree that "memory safety" in modern languages is more about temporal safety (i.e. use-after-free, etc.) than bounds checks, but maybe I'm wrong.

How do those runtime checks kill UAF?

replies(1): >>24294637 #
71. littlestymaar ◴[] No.24294327{4}[source]
Thanks, I was very afraid for a moment. In fact, I recently worked for a company which started using Rust in a mission critical setting back in… 2013 (yes, long before the language was stable and its future secured). Fortunately it worked, but still, it was far from a safe move.
replies(1): >>24296566 #
72. skocznymroczny ◴[] No.24294403[source]
D uses a garbage collector by default, but it has a @nogc annotation to mark blocks of code that get statically verified not to allocate through the garbage collector.
replies(1): >>24294636 #
73. pron ◴[] No.24294443{7}[source]
> Which means you cannot guaranty your program doesn't exhibit memory unsafely unless you stumble upon it during testing.

Right, if you disable those checks.

> Calling it “memory-safe”, is dishonest.

The language is (or will be) memory safe, at least its safe parts -- C/C++ aren't. True, that safety can be disabled selectively, and there can then be undefined behaviour, but the same is true of Rust when you selectively disable safety. The design is just different. Just remember that using a language with sound guarantees is no one's goal. The goal is to produce programs without bugs. Sound guarantees in the language are one approach toward that goal; there are others.

Generally, guaranteeing that some set of bugs cannot happen does not necessarily guarantee fewer bugs overall; in fact, the opposite might be true. Nobody knows whether Zig's strong safety story yields more correct programs than Rust's, but nobody knows the opposite, either. There are good arguments either way but little data. In any event, Zig is much safer than C, even C with ASAN.

replies(1): >>24294595 #
74. ◴[] No.24294467{3}[source]
75. pron ◴[] No.24294488{3}[source]
> and rust is taking inspiration from it already

But C++/Rust can never have Zig's primary feature -- simplicity. Zig's power is not that it has comptime, but that it has little else.

> I can't wait before Zig2 comes and eventually adds generics…

No need. Zig gives you the same capabilities as generics do, only through a separate feature that other languages also have in addition to generics. In other words, it has generics, but without having generics as a special construct. Zig recognises that once you have that other feature (compile-time introspection) you don't need generics as a separate construct, but they can be just an instance of that single construct.

replies(1): >>24294721 #
76. judofyr ◴[] No.24294505{7}[source]
> Calling it “memory-safe”, is dishonest.

I'm not sure why we desperately need to classify languages into "memory-safe" or "not memory-safe". The fact is that all languages have various levels of memory safety (sun.misc.Unsafe, anyone?) and I do think that Zig deserves recognition for addressing it head-on:

- There's a separate ReleaseSafe optimization level which has all the optimizations, but remains safe. If you care about memory-safety (which most people do!) then this should be your default production build.

- The documentation is very clear about what can cause undefined behavior in ReleaseFast.

- You can override the safety-level (both ways!) for a single scope. If you have something performance critical that can't be expressed in safe Zig you can opt-in to potential undefined behavior. If you have something safety-critical you can opt-in for correctness even though the overall program is compiled with ReleaseFast.

77. egnehots ◴[] No.24294507[source]
Rust supports a global custom allocator.

Per container allocators are on the roadmap : https://github.com/rust-lang/rfcs/blob/master/text/1398-kind...

78. jeltz ◴[] No.24294562[source]
I would hardly use Python as an example of good string support. Of all languages I have worked with it has some of the worst. Look at Rust instead for a modern language with good string support.
replies(1): >>24297569 #
79. DennisP ◴[] No.24294589{3}[source]
Looks like comptime is what they already have instead:

https://ziglang.org/documentation/master/#Introducing-the-Co...

80. littlestymaar ◴[] No.24294595{8}[source]
> The language is (or will be) memory safe.

We've been going full circle here, so I'm not interested in spending more time on this conversation.

81. cmrdporcupine ◴[] No.24294596{5}[source]
C exposes a lot of things, and also hides a lot of things about the underlying system that can get confusing. What's an "int"? Or a "long"? You need to know what the bit width is on your platform, because it's not explicit in the name, and the language is willing to do a bunch of implicit stuff behind the scenes with only a warning or two. Should you really be using 'char'? Is yours a legit use of it or did you mean uint8_t? Other high level languages generally tend to have more sensible default patterns for these things; C ... it gives you all kinds of ammo to shoot yourself with.

It's not as big of a problem these days with things becoming less heterogeneous; almost everything is little endian now, much of it 64-bit but at least 32-bit, and we can kind of rely on POSIX being there most of the time. Most new code uses stdint.h and is explicit about word lengths by using int32_t, etc. and follows good conventions there.

But venture off the beaten path into odd microcontrollers or into retro machines or port older code or whatever ... and there's glass hidden in the grass all over.

C also exposes a model of the machine that looks low level but behind the scenes a modern processor does all sorts of branch prediction and pipelining and so on that can blow up your assumptions.

What looks like optimized clever C code can actually end up running really slow on a modern machine, and vice versa.

replies(1): >>24294917 #
82. logicchains ◴[] No.24294636{3}[source]
Is there any way to mark "this code does not call malloc"? Or maybe more generally, "this code does not use anything from libc"?
83. pron ◴[] No.24294637{7}[source]
> https://github.com/ziglang/zig/pull/5998

TBD :)

But here's one way that's currently being tried: https://github.com/ziglang/zig/pull/5998

replies(1): >>24298056 #
84. irq-1 ◴[] No.24294667{3}[source]
In Zig, types are values (the same as any other value). This lets you do generics without the need for any special syntax; you can simply pass types as parameters and return types from functions.
85. littlestymaar ◴[] No.24294721{4}[source]
> But C++/Rust can never have Zig's primary feature -- simplicity

Sounds like a Go pitch, except Zig ain't Go. And while comptime is a cool feature, it's also a really complex one!

> In other words, it has generics, but without having generics as a special construct. Zig recognises that once you have that other feature (compile-time introspection) you don't need generics as a separate construct, but they can be just an instance of that single construct.

This has advantages (only one feature to know), but it also has a big drawback: the lack of orthogonality. C is also simple, for instance it has no concept of errors (only return values) or arrays (only pointers), but most people won't consider this a good idea (and Zig didn't follow C on either of those two design points)

Zig is cool, but I hoped the “generics are too complex of a feature” meme would die now that Go is getting generics, and I'd be really sad to see it come back…

replies(2): >>24295037 #>>24298998 #
86. jorangreef ◴[] No.24294737[source]
The first rule of C is that no one masters C, but you could try anyway and still have time to master Zig in a matter of weeks, which is a rounding error. Given that both offer a C compatible ABI, what would serve your projects better?
replies(2): >>24294781 #>>24294866 #
87. logicchains ◴[] No.24294749{4}[source]
A concrete example is std::stable_sort. As far as I'm aware there's no way to pass it a custom allocator/buffer to avoid it allocating memory.
replies(1): >>24300103 #
88. cmrdporcupine ◴[] No.24294781{3}[source]
<rant-time>

I can't help but feel like in our industry C is successful (vs its 80s competition of Pascal/Modula-2, or Ada etc.) partially because of some of the same reasons that Git is successful now. Yes, it is powerful and flexible; but also in some ways unnecessarily arcane and 'dangerous' and _this gives the user a feeling of cleverness_ that is seductive to software engineers.

Put another way: Most of us enjoy the mental stimulation of programming, and we enjoy the mental challenges (in general). C makes us feel clever. Witness the "obfuscated C programming contest" etc.

Same thing that has led to nonsense 'brain teaser' whiteboard-algorithm tests at job interviews. IMHO it's in many cases for the benefit of the interviewer's ego, not the company or the interviewee ("gotcha! no job for you!").

</>

replies(2): >>24294845 #>>24296848 #
89. jorangreef ◴[] No.24294845{4}[source]
"Put another way: Most of us enjoy the mental stimulation of programming, and we enjoy the mental challenges (in general). C makes us feel clever. Witness the "obfuscated C programming contest" etc."

Yep, only C makes me feel stupid (but I enjoy that experience too!).

replies(1): >>24294884 #
90. dnautics ◴[] No.24294866{3}[source]
If one does both, almost certainly learning zig will make for a better C programmer, as zig often forces you into patterns that would be best practices for a C programmer.
91. cmrdporcupine ◴[] No.24294884{5}[source]
Oh don't get me wrong, I'm a philosophy major drop-out, not a CS student. :-) I have never gotten off on clever-C, and it makes me feel stupid, which yeah, isn't awful either (humbling).

Luckily my day-job has nothing to do with mental gymnastics even though I'm a software engineer at Google and work in plenty of low-level stuff. Most sensible software development bears little resemblance to the stuff on whiteboards in coding interviews etc.

After 20 years of this I know the right thing is to reach for a library, and if that doesn't exist, then reach for Knuth or some other reference rather than try to write it myself from scratch.

92. dnautics ◴[] No.24294917{6}[source]
Is what you're talking about (obfuscated int) due to C being a victim of its own success: hardware manufacturers implementing Cs that elided the meanings of these types to match their own architectures, against the long-term best interests of C, to "make porting code easier" in the short term?
replies(1): >>24296682 #
93. mratsim ◴[] No.24294929{4}[source]
> Zig has just comptime, through which it achieves all those goals (minus some macro capabilities that it deems harmful anyway) with just one, very simple, cohesive construct. You could argue on whether you like this or not, but you can't argue that Zig's approach isn't fundamentally more minimal. Zig is a language you can reasonably fully learn in one day; I don't think you could say the same about Nim.

"some macros" are downplaying Nim macros, it's like saying Lisp has some AST rewrite capabilities.

Nim macros goals are two-folds:

1. Adding functionality to the language without baking it in the compiler. A prime example is ``async``: including a nice async/await syntax, it can be completely implemented as a library without reserving keywords to do things like `pub async fn`.

2. Automating away boilerplate.

From what I understood, Zig comptime is only about making compile-time function evaluation first-class.

replies(1): >>24295101 #
94. petr_tik ◴[] No.24294936{5}[source]
Thank you and your parent for pointing this out! I should be more precise: sbrk is the underlying system call that might be invoked inside malloc.
replies(2): >>24295137 #>>24295169 #
95. petr_tik ◴[] No.24294998{4}[source]
> But even with all these layers in place, you still need an actual strategy to run at the end of the day. Everything in the quoter can be optimized to hell, but if the strategy module is spinning for 1000+ microseconds because it's running some bloated ML model, then none of that really matters.

From what I have heard, Optiver has a performance lab which replicates real conditions with an exchange replayer, so they can measure wire-to-wire latency for every release.

Hiring people for their maths chops as quants, you probably don't expect them to know about HW-level optimisations at the beginning of their finance careers, which, I guess, is the reason for such a performance lab. Build tools that help people bring their best skills to the table and catch regressions.

96. pron ◴[] No.24295037{5}[source]
> it's also a really complex one!

No, it's a very simple one, so much so that it's erasable: https://news.ycombinator.com/item?id=24293611 And still it is probably the most complex aspect of Zig.

> Zig is cool, but I hoped the “generics are too complex of a feature” meme would die now that Go is getting generics, and I'd be really sad to see come back…

You've misunderstood me. Generics are a good thing -- if that's all you have. But if you have generics and procedural macros, it turns out that you can do the work of both with a feature that's simpler than either. The capability generics add is a very important one, but given that low-level languages need another one as well, it turns out that generics can be subsumed into that one without being a separate and additional construct. Zig has generic types and concepts/typeclasses; these just aren't atomic language constructs.

replies(1): >>24297905 #
97. AnIdiotOnTheNet ◴[] No.24295081[source]
Standard library documentation is indeed clunky as it is auto-generated for the most part. This is something the community has been working on improving but it isn't a priority at this stage in part because the standard library undergoes breaking changes pretty frequently right now.

Optional and ErrorUnion are a tiny bit redundant in that one could represent the Optional as another value in an ErrorUnion, and that might even happen as an optimization step in the case of ?!/!? types at some point in the future, but they have very different handling in the language as they are used for very different things.

I personally like that Zig doesn't bother with "strings" at a language level at all and just considers everything as arrays of bytes. String handling is a complexity nightmare and I feel that Zig wisely chooses to be simple instead.

replies(1): >>24304652 #
98. pron ◴[] No.24295101{5}[source]
Yes, plus introspection. Zig tries very, very hard to avoid macros, so macros are an anti-feature from Zig's perspective. That you could do what Zig finds important for its domain, like conditional compilation, writing a typesafe println, generic types, and generating pretty-printing routines all in simple Zig without macros is a cool discovery. I don't know if it's true, but I think the desire to avoid macros at all cost was a bigger motivation for Zig's design than, say, generic types.

In other words, there is a capability here that Nim really, really wants, and that Zig really, really doesn't want, so on that front they are not competing in their designs.

replies(1): >>24296439 #
99. voldacar ◴[] No.24295137{6}[source]
mmap usually these days
100. fanf2 ◴[] No.24295169{6}[source]
Yes, and often mmap() for large allocations and other parts of the heap.

There has been an interesting discussion about memory management in Ritchie’s PDP11 C compiler on the TUHS list this month https://minnie.tuhs.org/pipermail/tuhs/2020-August/thread.ht... from the era when large programs could not necessarily afford the overhead of malloc() so sometimes used sbrk() directly.

101. andi999 ◴[] No.24295225{3}[source]
Is there something in between homepage and docs? Like 30-50 line workout programs?
102. AsyncAwait ◴[] No.24295336{4}[source]
I meant a borrow checker, which I assumed was clear from what I was replying to. Yes, Zig does do runtime checks in dev builds and I did not mean to imply otherwise, I don't think the runtime checks provide the same set of benefits.

Just so we're clear, I like Zig :-)

replies(1): >>24297283 #
103. gw ◴[] No.24296439{6}[source]
I do love how zig's comptime naturally led to generics without extra syntax. But after using nim i'm convinced that macros make even more sense for systems programming. They can even affect performance -- nim's macros can generate types that would be difficult to write by hand.

I also take issue with your statement that zig is "more minimal" since that only applies to the user's perspective -- from the compiler's perspective, macros make a language far more minimal. But i vaguely recall already discussing this distinction with you so i don't want to rehash it.

At any rate i will definitely be paying attention to andrew's progress, he has a really clear vision.

replies(1): >>24297353 #
104. The_rationalist ◴[] No.24296566{5}[source]
In retrospect, do you regret choosing Rust? Its poor ecosystem with low human resources can kill a startup.
replies(1): >>24298192 #
105. cmrdporcupine ◴[] No.24296682{7}[source]
Many many decisions made over a 40 year history add up to potential confusion.
106. PaulDavisThe1st ◴[] No.24296780{3}[source]
Clearly you meant zig++
replies(1): >>24304961 #
107. klodolph ◴[] No.24296793{3}[source]
I also wish that Rust had linear types, it would make a lot of FFI easier without resorting to unsafe{}.

With linear types you can guarantee that a destructor is run, so you can create objects with lifetimes that are bounded not only from below, but also from above. There are some common patterns in e.g. C libraries that rely on this--for example, you might register a callback on an object, and you want to assert that the object's lifetime is shorter than the callback's (so the callback is never called outside its lifetime).

Since Rust doesn't have linear types, you have to use unsafe{}.

replies(1): >>24297225 #
108. vmchale ◴[] No.24296835{3}[source]
That's not on par with linear or affine types.
replies(1): >>24299103 #
109. PaulDavisThe1st ◴[] No.24296848{4}[source]
Given that you can "write Fortran in any language", I find this analysis unlikely.

I much prefer writing Python or Lisp code than C++, but I can't do my job in Python or Lisp code, so I write C++.

110. vmchale ◴[] No.24296869[source]
C has many advantages over Zig, mostly because it's standardized and extant.

Don't think Zig is worth it when it doesn't have linear or affine types.

111. PaulDavisThe1st ◴[] No.24296893{7}[source]
Frankly I find HN more of an impediment to rapid development than C++ compile times.
replies(1): >>24300472 #
112. someduke ◴[] No.24297019{3}[source]
http://ats-lang.sourceforge.net/ baby!
113. nextaccountic ◴[] No.24297225{4}[source]
What about making the object borrow from the closure?

This can be accomplished by a method that consumes the object and returns the closure (which now owns the object), and a closure method that borrows the object back from it.

replies(2): >>24298141 #>>24298872 #
114. pron ◴[] No.24297283{5}[source]
> I don't think the runtime checks provide the same set of benefits.

Of course not, but that doesn't mean Zig is less effective at achieving correctness. It just does so in a different way -- it trades sound guarantees for less sound ones and a simpler language with a faster cycle. Is one better than the other? Hard to say. Only empirical study could settle that.

replies(1): >>24298562 #
115. pron ◴[] No.24297353{7}[source]
> But after using nim i'm convinced that macros make even more sense for systems programming.

Macros are controversial. I love them in Scheme and Clojure, but I wouldn't want them in any language aimed at a larger, more mainstream crowd. At the very least, macros introduce another meta-language to know (and if they're in a language with a complex type-level language like Rust or Haskell then they're a third language within the language), but I think it's a matter of personal aesthetic preference.

> But i vaguely recall already discussing this distinction with you so i don't want to rehash it.

Maybe we did. :)

replies(1): >>24302037 #
116. earthboundkid ◴[] No.24297569{3}[source]
I believe OP was saying that the Python 2 → 3 debacle shows it's important to get strings right the first time.
117. identity0 ◴[] No.24297763[source]
There are literally 100s of high level languages, for every imaginable programming paradigm. If you want to write fast code where you control memory and don't use a garbage collector, C, C++, and Rust are your only options. It's nice to see new additions to the list.
replies(1): >>24298261 #
118. littlestymaar ◴[] No.24297905{6}[source]
> But if you have generics and procedural macros, it turns out that you can do the work of both with a feature that's simpler than either.

Here I think we just have different subjective perceptions of what simplicity is. I much prefer having two orthogonal systems, each doing its own business, to having a single more powerful tool that does both (like having slices + references instead of the all-powerful pointer).

Anecdotal note: more than 10 years ago, the Go team pitched why they didn't need generics or macros, because code generation would solve both problems (+ others), and now they're on their way back to adding generics to Go (with a lot of hassle).

replies(1): >>24298572 #
119. littlestymaar ◴[] No.24298056{8}[source]
(Small copy-paste error here, you posted the same link twice.)

Regarding https://github.com/ziglang/zig/pull/5998: here we're exactly in the realm of C, swapping in a custom allocator with additional bookkeeping to check for memory management issues. But it tanks performance, so you can't generally use it in production (and if you were in a situation where you'd do it anyway, you'd be better off with completely automatic memory management, AKA a GC).

replies(1): >>24298625 #
120. samatman ◴[] No.24298141{5}[source]
I'm not 100% confident I follow, but this sounds like one of those backflips Rust programmers do to satisfy the borrow checker.

As in, you wouldn't write the code this way if you didn't have to. You do get memory safety in return... but you can see where the desire for a more eloquent approach might arise.

replies(1): >>24300381 #
121. littlestymaar ◴[] No.24298192{6}[source]
I didn't make the pick (and I wouldn't have picked Rust then, I still judge this move as way too risky), and I merely worked as a contractor there and the project was already 5 years old.

That being said, a few notes on Rust on this project:

Cons:

- finding people proficient in Rust was a challenge (but that's why I got hired, so for me that was a plus ;).

- in the first few years of the project, keeping up with language changes (before Rust 1.0, and even after, because the project used nightly Rust until 2018) added overhead.

Neutral:

- the library ecosystem was nonexistent at the beginning, but because Rust has good C interop, the project just used C libraries for different things. Some were replaced by pure Rust ones later on, some weren't.

Pros: (Note: the project had important performance requirements regarding CPU and memory consumption, so had Rust not been chosen, it would have been written in C++.)

- When your Rust program compiles, it never crashes (except on an assert)

- I've spent exactly 0 minutes in a debugger on that project

- I've done massive refactoring without issues: you just fix the compiler errors (which are now extraordinarily clear!), you recompile and it works.

So overall, the Rust bet was a big success for this project! But you're right: the company wasn't a start-up and the company's ability to count on an existing team was vital here because hiring a new Rust dev would have been impossible in the first few years. With Rust becoming more and more popular each year, the hiring issue shouldn't be as acute right now (well, especially since Mozilla fired dozens of Rust-fluent developers earlier this month…)

122. networkimprov ◴[] No.24298209[source]
Here is a proposal for cleaner error handling, using named catch blocks:

https://github.com/ziglang/zig/issues/5421

123. the_duke ◴[] No.24298257[source]
There is some ongoing work towards custom allocators for containers in Rust std. [1]

Right now you could also go no_std (you still get the core library, which does not contain any allocating data structures) and use custom containers with a passed in allocator.

Zig is definitely a cool language, and it will be interesting to see whether they can come up with good solutions to memory management!

But sentences like these in the documentation [2] would make me prefer Rust for most low level domains (for now):

> It is the Zig programmer's responsibility to ensure that a pointer is not accessed when the memory pointed to is no longer available. Note that a slice is a form of pointer, in that it references other memory. ...

> ... the documentation for the function should explain who "owns" the pointer

[1] https://github.com/rust-lang/wg-allocators

[2] https://ziglang.org/documentation/master/#toc-Lifetime-and-O...

replies(3): >>24299049 #>>24300657 #>>24301702 #
124. samatman ◴[] No.24298261{3}[source]
Ada, D, Pascal/Delphi, and yes, Fortran, are all still options.

I think Zig has the potential to be the best of them, fwiw.

125. edflsafoiewq ◴[] No.24298380{3}[source]
Does zig have an answer to RAII yet?
replies(1): >>24300178 #
126. fsociety ◴[] No.24298526{6}[source]
The difference is you have to pass an allocator to the standard library functions in Zig. That’s why it is idiomatic compared to C++.
127. fsociety ◴[] No.24298562{6}[source]
I’d argue it does mean it is less effective at achieving correctness but the trade-off made is the whole point of Zig. Simple language that is really a better C.
replies(1): >>24298692 #
128. pron ◴[] No.24298572{7}[source]
> I much prefer have two orthogonal systems which do their own business than having a single more powerful tool than do both (like having slices + references instead of the all powerful pointer)

OK, but that's not quite the situation. Here we're talking about languages that have, or will have, the single "more powerful" construct, and also the more specific, special case one, as two separate constructs, even though one of them would have sufficed.

Again, Zig has parameterized types, and very elegant and powerful ones -- they're functions that take some types as argument and return a type. It just doesn't have generics as a separate construct. Rather, it is a special case of a more general one (that Rust and C++ will also have).

129. pron ◴[] No.24298625{9}[source]
No, Zig is not in the realm of C. Zig gives you full memory safety that you can then selectively turn off. Why is it useful? For the same reason tests are useful even if they don't give you sound guarantees, and are still the primary way of achieving correctness, even in Haskell or Rust. C does not and cannot do this the same way as Zig does, because C cannot be made safe (well, it can, but that's a whole other can of worms) while Zig can. So you make Zig safe, test it, and then remove the guardrails from the performance-critical bits after you're satisfied with their correctness.

Does it provide safety in the same manner Rust does? Absolutely not. Does it provide less correctness overall? Maybe, and maybe it provides more correctness, and maybe the same. It's hard to say without an empirical study. The problem is that sound guarantees often come at a cost -- for example, to language complexity and compilation speed -- that can have a negative effect on correctness.

replies(2): >>24298722 #>>24299513 #
130. pron ◴[] No.24298692{7}[source]
It's hard to make a definitive argument one way or the other because the guarantees Rust makes come at a cost of compilation speed and language complexity that can have a negative effect on correctness. This question is impossible to answer without an empirical study.

Unlike C, Zig is (or will be) memory safe, although its safety can be turned off, and often is -- after testing. Unlike C, it provides powerful abstraction capabilities similar to those of C++. The fact that it can do all that yet be very simple seems to suggest at first glance that it's "like C" but that's because we've never had a language like that before. Zig's simplicity is misleading. It turns out you can do a lot with a very simple language. We knew that to be true for high-level languages like Scheme, but Zig shows it's possible in low-level languages, too.

replies(1): >>24299620 #
131. littlestymaar ◴[] No.24298722{10}[source]
> C does not and cannot do this the same way as Zig does

Regarding the PR you just sent, I'd like to hear why you think it cannot be applied to C?

> So you make Zig safe, test it, and then remove the guardrails from the performance-critical bits after you're satisfied with their correctness.

This isn't safety… This is “we didn't find any memory issue while fuzzing the software”, and you get the same guarantee: if your fuzzer didn't trigger the memory issue, then it remains in your code in production, waiting to explode one day as some hard-to-debug Heisenbug that only occurs once in a million runs…

replies(1): >>24298913 #
132. klodolph ◴[] No.24298872{5}[source]
> What about making the object borrow from the closure?

That doesn't actually guarantee that the closure will outlive the object. You would need linear types in order to do that, and Rust does not have linear types.

In the particular API I'm working with, you pass the closure in to an object when you create it, and you have to make sure that the closure outlives the object. The only way to do that within the Rust type system is by making the closure 'static, which is... less than ideal. So you use unsafe{} instead.

This is because any object may outlive its specified lifetime. Lifetimes are only lower bounds. So if I have lifetimes 'b and 'a, and 'b : 'a, this only means that the LIFETIME 'b must be at least as long as the LIFETIME 'a, but any particular object with lifetime 'b may live arbitrarily long, as long as it lasts at least as long as 'b. And any object which is 'a may last arbitrarily long, but at least as long as 'a.

133. pron ◴[] No.24298913{11}[source]
> Regarding the PR you just sent, I'd like to hear why you think it cannot be applied to C?

I didn't mean that a safe allocator cannot be used in C; I meant that C cannot be made memory safe in its entirety as simply as Zig can. Why? Because C has pointer arithmetic while (safe) Zig doesn't, Zig has slices while C doesn't, and C has non-typesafe casts while safe Zig doesn't.

> This isn't safety… This is “we didn't find any memory issue while fuzzing the software” and you'd get the same guarantee:

No, it's not the same guarantee. Fuzzing a C program will not find all the undefined behaviour that fuzzing a Zig program can, for the reasons I mentioned.

It is true that if you use unsafe Zig, i.e. turn off safety for a whole program or some sections of it, you lose the guarantees that safe Zig gives you, and unsafe Zig is indeed not safe (neither is unsafe Rust). But because of the way it's designed, Zig has a way of improving correctness even when safety is removed. This is a tradeoff for sure, but so are sound guarantees, that can have other negative effects on correctness.

replies(1): >>24300895 #
134. rightbyte ◴[] No.24298998{5}[source]
"C is also simple, for instance it has no concept of [...] arrays (only pointers)"

That is a confusing part of C. C does have a concept of arrays, just not as function arguments, where they decay to pointers. You first notice it with multidimensional arrays.

135. pron ◴[] No.24299049{3}[source]
> But sentences like these in the documentation [2] would make me prefer Rust for most low level domains

While such use-after-free issues are not prevented at compile time, the plan is to ultimately have safe Zig catch them (and panic) at runtime, i.e. safe Zig shouldn't have undefined behaviour. Because this is done with runtime checks, the expectation is that those checks will be turned off (selectively, perhaps) in production, once testing has satisfied you. In that case, the guarantees aren't as strong as Rust's, but those guarantees come at a significant cost -- to language complexity and compilation time -- that can also have a negative effect on correctness. So while Zig's approach to safety/correctness is certainly very different from Rust's, I don't think it is necessarily weaker (perhaps it could even be stronger, but the question is hard to settle).

replies(2): >>24300060 #>>24300135 #
136. pron ◴[] No.24299103{4}[source]
When compared in isolation, yes. But such mechanisms aren't free; they add to both language complexity and compilation time, two things that can have a negative impact on correctness. So it's impossible to say which approach leads to more correct programs overall without empirical study.

We see similar tradeoffs of soundness in formal verification as well. We're not talking about exactly the same thing here (because affine type safety is compositional) but the general principle is the same: soundness has a cost, and it is not necessarily the most efficient way of achieving a required level of correctness.

Anyway, I think that both Rust and Zig have very interesting approaches to safety, but I don't think we know enough to claim one is more effective than the other at this time.

137. nyanpasu64 ◴[] No.24299417{3}[source]
std::span is a modern C++ class designed to act like an array or vector, by viewing the memory of an existing array/vector without allocating anything. In my experience writing audio code, C++'s implicit copy constructors are what makes it too easy to accidentally allocate memory.
replies(1): >>24304570 #
138. ◴[] No.24299513{10}[source]
139. mwkaufma ◴[] No.24299554{4}[source]
Many core modern C++ types don't permit customizing the allocator. E.g. std::function
replies(2): >>24299572 #>>24301268 #
140. mwkaufma ◴[] No.24299572{5}[source]
Furthermore, C++ dependencies commonly instantiate types like std::vector with the default allocator internally, rather than exposing it to the host application.
replies(1): >>24300100 #
141. AsyncAwait ◴[] No.24299620{8}[source]
> It's hard to make a definitive argument one way or the other because the guarantees Rust makes come at a cost of compilation speed

I don't think this is actually true. The compiler is slow but not due to memory safety; the 'cargo check' command is rather quick and the compiler itself doesn't seem to spend a lot of time in the frontend, most of the time in spent in the backend, past the borrow checking phrase.

replies(1): >>24305338 #
142. voldacar ◴[] No.24299764{6}[source]
have fun trying to globally override new and delete in C++

(hint: there is no way to do this)

replies(1): >>24299945 #
143. aidenn0 ◴[] No.24299940{3}[source]
Valgrind on C is the closest thing I've worked with in terms of what Zig offers.

My experience is that Rust's approach is definitely better in terms of correctness than using valgrind during testing.

My intuition is that the advantages Zig brings to the table will not tip the balance.

That being said, the choice Zig makes is absolutely the right one. Rust fills the niche of a better and more correct C++ without fixing the issues of slow compilation and language complexity.

Zig fixes so much of what's wrong with C without abandoning the advantages of language simplicity and locality of reasoning. I love Zig, but need a medium sized non-work project for it.

144. cycloptic ◴[] No.24299945{7}[source]
Are you sure? https://en.cppreference.com/w/cpp/memory/new/operator_new#Gl...

You don't need to do this if you're using allocators.

145. defen ◴[] No.24300060{4}[source]
> In that case, the guarantees aren't as strong as Rusts, but those guarantees come at a significant cost -- to language complexity and compilation time -- that can also have a negative effect on correctness

How would those guarantees have a negative effect on correctness? Are you thinking something like, you need to design your data structure / program in a non-intuitive way that makes it more difficult to get the logic right, even though you are protected from memory safety issues?

replies(1): >>24302736 #
146. cycloptic ◴[] No.24300100{6}[source]
Thank you for the examples. I'm not sure std::function is a good comparison. After some research it seems this used to be in the spec, but it was removed because nobody supported it correctly and it seems it was too difficult to do it in a type-safe manner anyway: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p030...

The other thing is that Zig doesn't seem to have any real plans to support C++-style closures right now. If they ever find a type-safe way to do it while supporting custom allocators, then that would be interesting, but at the moment I wouldn't say it's any better than C++ in this regard.

I actually have seen some C/C++ libraries that do allow changing the default allocator although it's usually only low-level libraries that bother to do this.

replies(1): >>24359229 #
147. cycloptic ◴[] No.24300103{5}[source]
Thank you for the example.
148. fluffy87 ◴[] No.24300117{3}[source]
The actual Modern C++ Design book from Andrei Alexandrescu is about custom allocators and performance.

The irony.

149. aw1621107 ◴[] No.24300135{4}[source]
> but those guarantees come at a significant cost -- to language complexity and compilation time

Aren't Rust's compilation time woes more due to the amount of IR the front/middle-end give to LLVM? I was under the impression that the type system-related stuff isn't that expensive.

replies(1): >>24302545 #
150. pron ◴[] No.24300178{4}[source]
RAII is not an open question: Zig prefers explicitness, i.e. by looking at a subroutine you know exactly which code is called. Its approach to releasing resources uses defer.
replies(1): >>24300366 #
151. edflsafoiewq ◴[] No.24300366{5}[source]
The question is how to do automatic resource management. There are no checks of any sort, runtime or not, to help you here (correct me if I'm wrong).

RAII is not precluded by explicitness, you could require all values that require cleanup to be syntactically marked in some way and it would still be RAII. defer also cannot handle resources whose lifetimes do not correspond to nested scopes (e.g. the elements of an ArrayList) like RAII or a GC can.

replies(2): >>24306357 #>>24359288 #
152. rhodysurf ◴[] No.24300368{4}[source]
This is exactly my experience working on CFD software with hydrodynamics phds haha they don’t care about the “little” things and will allocate and copy shit everywhere
153. vips7L ◴[] No.24300373{3}[source]
I've been slowly replacing the C files of xv6 with some zig. It was surprisingly easy with the generated header files to use zig from within the C part.
154. gautamcgoel ◴[] No.24300381{6}[source]
For a moment I thought Sam Altman was spending his time reading random HN forums and commenting on the intricacies of Rust coding. Took me a minute to catch the missing "l" in your handle ;)
replies(1): >>24301105 #
155. gautamcgoel ◴[] No.24300406[source]
Can you elaborate a bit more about what your found lacking in Zig strings?
replies(1): >>24304522 #
156. logicchains ◴[] No.24300472{8}[source]
I know personally I'd spend a lot less time on HN if C++ compiled quickly enough that I didn't have time to context switch to something else while waiting for it to compile.
157. vvanders ◴[] No.24300657{3}[source]
Yeah, I've been using no_std for this use case and pretty happy with it.

If you want a full blown container you can use heapless, building a custom container is really straightforward and requires a minimal amount of unsafe.

158. littlestymaar ◴[] No.24300895{12}[source]
> It is true that if you use unsafe Zig, i.e. turn off safety for a whole program or some sections of it, you lose the guarantees that safe Zig gives you, and unsafe Zig is indeed not safe

There is no such thing as unsafe and safe Zig. All Zig is unsafe, but you can add additional runtime checks (disabled by default in optimized builds) that will slow down your program when enabled. Using a specific allocator to detect UAF is something you may do in development, but almost surely never in production. And without it your code isn't memory-safe.

> Fuzzing a C program will not find all the undefined behaviour that fuzzing a Zig program can, for the reasons I mentioned.

Zig will have less UB than C, but there will still be lurking UB in your programs no matter how long you test it. Consider the following snippet (on mobile, so this may have stupid syntax errors):

  const std = @import("std");
  const Allocator = std.mem.Allocator;

  test "this is UB, but the test won't show it" {
    const allocator = std.heap.page_allocator;
    var buf = try allocator.alloc(u8, 10);
    ohNo(allocator, 42, buf);
    allocator.free(buf);
  }

  fn ohNo(allocator: *Allocator, foo: u32, bar: []u8) void {
    if (foo == 1337) {
        // double free waiting to happen in production
        allocator.free(bar);
    }
  }
If you never explicitly test the value “1337” during your debug sessions, you won't trigger the UB and you won't know it's there; then, when you ship your optimized build to production, you'll ship a program with UB in it.
replies(1): >>24302852 #
159. samatman ◴[] No.24301105{7}[source]
haha I'm not sama and have been using this handle, and variants, for longer than he's been alive.

But you're not the first to think so!

160. tlb ◴[] No.24301268{5}[source]
However, most std::function implementations have a small built-in buffer for captured variables, on the order of 4 pointers' worth. If you limit yourself to capturing only that much, there's no allocation.
replies(1): >>24354807 #
161. avasthe ◴[] No.24301702{3}[source]
That is pretty common in low-level domains. Rust instead comes with the complexity of the borrow checker and lifetime management, no matter how often Rustaceans say it is second nature.
162. elcritch ◴[] No.24302037{8}[source]
Look at any sufficiently complex and/or low-level C codebase, like Linux or FreeRTOS, and C text macros are used significantly. Some of the functionality would be horrible to implement otherwise. Being text based, they're a pain, but like @gw, I'd have a hard time seeing a language like Zig without macros making a good low-level systems language. Maybe a good systems application like Kubernetes, similar to Go's niche, but not system kernels.
replies(1): >>24306962 #
163. Oreb ◴[] No.24302354{3}[source]
Is the https://ziglearn.org site up to date with the latest versions of the language and standard library? The initial "Hello, World" example fails to compile for me. I get:

    Semantic Analysis [533/803]
    ./main.zig:4:14: error: container 'std.debug' has no member called 'print'
        std.debug.print("Hello, {}!\n", .{"World"});

replies(1): >>24313059 #
164. littlestymaar ◴[] No.24302545{5}[source]
The borrow-checking and ownership mechanism is cheap, and it's almost never a significant part of the big compilation time encountered in Rust.

What's not cheap, and is responsible for long compilation times (the order is arbitrary, the relative weights are highly dependent of the code-base):

- size of the code generation units (the whole crate vs individual files in C)

- procedural macros

- generics & traits

- interaction between generics & LLVM IR generation (a lot of bloat is created, to be removed by LLVM later)

- LLVM itself

Most of those are being worked on, but in the end it's mostly a social problem: as Rust users are used to long compile time, many of them don't especially take care of it, and most gains in the compiler are often outweighed by people writing slower code. It's already possible to write Rust code that compiles quickly, if you pay attention. The culture is evolving though, and more and more library authors are now mindful of compilation time of their crate (and the tooling to diagnose it is also improving).

Key takeaway: Memory safety isn't what makes Rust compile slowly, “zero-cost abstractions” is.

replies(2): >>24302711 #>>24303949 #
165. pron ◴[] No.24302711{6}[source]
> Memory safety isn't what makes Rust compile slowly, “zero-cost abstractions” is.

What one of Rust's designers told me when I asked him why they made the language so complicated is that nearly all of Rust's features exist to serve the borrow checker (except maybe macros). Once you have those features, and because Rust is a low-level language, you must have "zero-cost abstractions."

I don't know whether some other hypothetical low-level language could exist that gives you both sound compile-time memory safety guarantees and is a simple language that compiles quickly -- I would love to evaluate such a language, but we don't have one right now.

166. pron ◴[] No.24302736{5}[source]
Because a complex language is harder to read, and so slower to read and understand, and so harder to maintain over time without introducing bugs; slow compilation also slows you down, which means you write fewer tests. In general, getting a correct program requires effort. If that effort goes elsewhere, there's less of it left for correctness.
167. pron ◴[] No.24302852{13}[source]
> Their is no such things as unsafe and safe Zig. All Zig is unsafe, but you can add additional runtime checks

Zig is meant to ultimately give you full memory safety, that you can selectively turn off. In addition, there are specific unsafe operations -- clearly marked -- such as casting an integer to a pointer or other non-typesafe casts.

A code unit with safety checks on and without unsafe operations is what I call "safe Zig."

> And without it your code isn't memory-safe.

This is simply not true. Perhaps you mean that you don't have a guarantee that your code is memory-safe, but that's not the same thing.

Our goal is not to write in a language with certain guarantees but to write programs with certain properties, say, without buffer overflows. One way of achieving such a program is to write it in a language that guarantees no such error can happen. Another is to write it in a language that guarantees no such error can happen in development, do some testing, and then remove the guarantees. In the second case it is true that our confidence in the lack of such errors is lower than the first, but in each case it is not 100%, and because the static guarantees are costly, it is possible that the second approach is even more effective at getting to more correct programs overall. They're both common ways for achieving the same goal.

As someone who works with formal methods, I see these tradeoffs in formal verification all the time. It is simply false that sound guarantees are always the best way to correctness -- they would be if they were free, but they're not.

Once you realise that the goal is achieving some desired level of confidence (which is never 100%, as that cannot exist in a physical system anyway) about overall program correctness -- which includes both "safety" and functional properties, each further divided into degrees of severity -- you see that no one approach is obviously the most effective at achieving that goal.

> If you never explicitly test the value “1337” during you debug session, you won't trigger the UB and you won't know it's here

But here, again, you are looking at something in isolation. Because Zig is a simple language, the chances of such paths existing without you noticing are lower; also, because the language is simpler it is easier to write concolic testers that would automatically detect this.

In fact, if such a "rare path" exists in a complex language that causes some functional bug -- ultimately, we don't care what bug breaks our program or leaves it open to security vulnerabilities -- there's a smaller chance that it will be discovered. Which is exactly what I mean by soundness coming at a cost. It guarantees the lack of certain bugs, but because it complicates the language, it can make other bugs more costly to detect.

replies(1): >>24303545 #
168. jorangreef ◴[] No.24303341{5}[source]
Thanks for writing this.
169. littlestymaar ◴[] No.24303545{14}[source]
> This is simply not true. Perhaps you mean that you don't have a guarantee that your code is memory-safe, but that's not the same thing.

“But people can write correct C code”. Correct Zig != memory safety. It's the opposite: MEMORY SAFETY IS THE GUARANTEE that your code won't have memory errors no matter how broken it is!

> Another is to write it in a language that guarantees no such error can happen in development, do some testing, and then remove the guarantees.

That's the same kind of design as C with ASan, TSan, MSan, etc. Yes, Zig is less broken than C, leading to fewer sources of memory issues, but for what matters most (Double Free, Use After Free[1], Data Races) Zig and C offer the same level of safety guarantees: none.

> As someone who works with formal methods, we do these tradeoffs in formal verification all the time. It is simple false that sound guarantees are always the best way to correctness -- it would be if they were free, but they're not.

This is a straw man: comparing compile-time enforced ownership (Rust's borrowck) to formal methods doesn't make any more sense than comparing static typing to formal methods. It adds a lot of learning friction, but that's it. I just grepped my current 90kLoC Rust project. You know how many lifetime annotations ('x) there are in it? Fifty-four! That's one every 1,666 lines. Please tell me again how much it cripples productivity and the ability to write correct code!

> Because Zig is a simple language, the chances of such paths existing without you noticing are lower;

If you ever try to use shared-memory parallelism, this kind of bug will be everywhere! It's simple: every call to allocator.free is a minefield.

> ultimately, we don't care what bug breaks our program or leaves it open to security vulnerabilities

Memory safety issues aren't just security vulnerabilities, more than anything they are horrible bugs to track down, and it costs tons of money.

> Which is exactly what I mean by soundness coming at a cost. It guarantees the lack of certain bugs, but because it complicates the language, it can make other bugs more costly to detect.

This is BS. A language having few symbols or a simple syntax doesn't make it easier to debug; otherwise Brainfuck would be the ultimate productivity tool. Semantics are what matter, and because it has UB, Zig's semantics are more complex than most languages out there. That's why C is one of the most complex languages ever in practice, even though it's really “simple” and easy to “learn”.

Again, don't get me wrong, I have nothing against Zig and I find it refreshing because it has tons of cool ergonomic tricks (and having a built-in sanitizer which “just works” out of the box in debug mode without any other programmer intervention is cool!). It's a nice programming language experiment that will probably inspire a lot of others, and it's probably a really cool language for C programmers who like to manage their memory by themselves, don't want the “nanny compiler” Rust has, and still want a language with a modern look and feel: that's totally legit.

But memory safe, it isn't.

[1]: which cause more than 30% of Google and Microsoft security issues by itself! (https://www.zdnet.com/article/microsoft-70-percent-of-all-se... https://www.chromium.org/Home/chromium-security/memory-safet...)

replies(2): >>24304726 #>>24305249 #
170. pjmlp ◴[] No.24303949{6}[source]
With C++ I can get zero-cost abstractions without Rust-like compilation times, in spite of C++'s fame for slow compile times.

How?

By making heavy use of binary third party dependencies, every module gets its own binary library, no crazy use of metaprogramming, incremental compilation and linking.

My WinUI/UWP professional work compile in a fraction of my Gtk-rs toy applications.

I keep measuring improvements in this area, and hopefully Microsoft's own pain with Rust/WinRT might trigger some improvements.

replies(1): >>24304364 #
171. pjmlp ◴[] No.24303987{7}[source]
Just like you cannot guarantee safety of any Rust application with unsafe code blocks.
172. littlestymaar ◴[] No.24304364{7}[source]
> By making heavy use of binary third party dependencies

I use Rust because I need performance, then I compile my Rust code for the exact CPU instructions available on my target machine and with PGO, binary dependencies can't do that.

Also, binary third-party dependencies come with a lot of hassle (the compiler version and options used can break your build), so I'm really glad Rust took the source-code dependency route instead (at least by default).

You can use binary dependencies though; as long as you compile everything with the same compiler it will work.

replies(1): >>24307242 #
173. HourglassFR ◴[] No.24304522{3}[source]
Well, there are no strings, only byte arrays. That's fine if you only pass bytes around in a stream, but if you want to do any computation on them you have to assume an encoding, and basically anything outside of straight ASCII will be a pain.

Now you may argue that this can be handled nicely in the standard library without changing the language. This is correct, but there will be some friction with string literals.

replies(1): >>24329746 #
174. qppo ◴[] No.24304570{4}[source]
I'm not saying C++ is the best language out there but that smells like inexperience writing real-time safe code. The problem isn't implicit copy constructors but implicit copies in your code.
replies(1): >>24308234 #
175. HourglassFR ◴[] No.24304652{3}[source]
> Standard library documentation is indeed clunky as it is auto-generated for the most part.

I have not expressed myself clearly: the auto-generated documentation is severely lacking. The API of the standard library is clunky. To be fair, both those points are getting better. And yes, the language is very young and I understand that there are more pressing issues with the core language itself.

> I personally like that Zig doesn't bother with "strings" at a language level at all and just considers everything as arrays of bytes. String handling is a complexity nightmare and I feel that Zig wisely chooses to be simple instead.

It is definitely simpler, but alas not everything is ASCII, and arguing it should be to make life easy for programmers is hardly a reasonable stance.

Also, maybe it is not clear in my comments but I actually enjoy Zig.

176. ◴[] No.24304726{15}[source]
177. littlestymaar ◴[] No.24304961{4}[source]
(This was a reference to the Go language, which after a decade of saying “generics aren't needed” and even “the lack of generics is a feature”, is eventually shoehorning them into the language in its Go 2 campaign.)
replies(1): >>24329077 #
178. pron ◴[] No.24305249{15}[source]
> Correct Zig != memory safety. It's the opposite: MEMORY SAFETY IS THE GUARANTEE that your code won't have memory error no matter how broken it is!

I think you have a missing piece of factual information here. Safe Zig (which is not "correct Zig") guarantees (or will guarantee) memory safety everywhere, no matter how broken the code is, as long as you don't use unsafe operations -- just as in Rust. Instead of eliminating some issues at compile time, it does so by panicking at runtime.

> Please tell me again how much it cripples productivity and the ability to write correct code!

If you think that you -- and your 20-person team maintaining a project for 20 years -- can be as productive in Rust as you can in Zig, then Rust is for you. That's not the case for me (maybe it's not a universal thing) and I don't think that would be the case for my team. Personally, I think that universal truths in programming are rare, and I think it is very likely that Rust might be more effective for some and Zig more effective for others, even if you only consider correctness. I'm not trying to convince you that Zig is better than Rust for you; I'm just saying that Zig is better than Rust for me.

> Zig and C offers the same level of safety guarantees: none.

I'm afraid you're simply mistaken, and repeating the same assertion over and over does not make it more correct. Safe Zig will give you the same guarantees as safe Rust except data races.

Once you turn safety off, you don't have a guarantee, but neither do you drop anywhere near the low level of confidence you'd have in the safety of a C program. So the choice is not between 99.999999% confidence of a guarantee and, say, 50%; there's lots in between, and Zig is in the vicinity of where Rust is -- don't know if better or worse -- but is much better than where C is. Correctness is simply not a binary proposition.

You accept this position yourself: when you run your Rust program, you also have no guarantees about the overall functional correctness of the program. You still don't think that you're in the same position as everyone else with no such guarantees, right? That's because there are lots of other activities needed to be done to increase confidence in correctness, and so there can be a very, very wide range of correctness within that "no guarantee" which is where we all are in most cases. I think that Zig makes some of those activities easier than C++/Rust, at least for me.

> But memory safe, it isn't.

Except it is, actually (or, rather, will be) because it guarantees no memory safety errors.

Anyway, thank you for your insight. I've been a professional programmer for nearly 25 years, working on large, long-running projects, some of which are very safety critical (many people would die on failure), some employing formal verification, and it is my opinion that Zig's approach to safety is at least as good as Rust's. It's certainly possible that my opinion is shaped by my personal experience. My software would have killed people either due to a buffer overflow or due to incorrect logic. Making sacrifices that, for me, would increase the effort needed to prevent the latter, only to increase my confidence that I don't have faults of the former kind from 99.99% to 99.99999999%, doesn't seem like a good tradeoff.

While your opinion to the contrary is just as legitimate, barring empirical evidence, you won't be able to convince me. That the most effective approach to increasing correctness is to eliminate an important class of bugs at the significant cost of language complexity, and that there is no more effective approach, is an interesting hypothesis, but one that is far from being established. I understand why some might believe it to be true, and also why some believe it to be false. Ultimately, safety and correctness are central design goals for both Zig and Rust, but they each make different tradeoffs to achieve what they consider to be a good sweet spot. Not having any definitive evidence over which is "better" in that regard, we must make our language choices based on other criteria.

replies(1): >>24308448 #
179. pron ◴[] No.24305338{9}[source]
https://news.ycombinator.com/item?id=24302711
180. pron ◴[] No.24306357{6}[source]
> RAII is not precluded by explicitness, you could require all values that require cleanup to be syntactically marked in some way and it would still be RAII.

I think this one is TBD.

> defer also cannot handle resources whose lifetimes do not correspond to nested scopes (eg. the elements of an ArrayList) like RAII or a GC can.

Yep, defer won't work if there is no known lifetime scope, but I think this one is actually a good tradeoff to make in a low-level language. Don't get me wrong -- I love tracing GCs and think that they're the right choice for the vast majority of application software, plus there have been great strides made in GC capabilities in the past few years, but in the domains where low-level languages are appropriate there is a different set of constraints. Low-level programming is not like high-level programming, and IMO it's wrong to even try to make them look alike.

181. pron ◴[] No.24306962{9}[source]
The goal isn't to have macros but to be able to do what macros are used for. Zig has found a different and simpler way to do what macros do. comptime gives you generic types, typeclasses/concepts, typesafe printf, conditional compilation and much more, all without macros and with a simpler construct than macros.
replies(1): >>24311923 #
182. pjmlp ◴[] No.24307242{8}[source]
Except that doesn't work for the business of selling binary libraries for mobile and mainstream desktop OSes.

The only hassle is not wanting to learn how to use compiled languages properly, that is how we end up with the brain dead idea of header only libraries.

Regarding performance, Rust still needs to catch up with C++ in many domains.

There are plenty of reasons why C++ is my to go language outside my managed language options, despite my appreciation for Rust, and C++'s caveats of copy-paste compatibility with C.

183. nyanpasu64 ◴[] No.24308234{5}[source]
> The problem isn't implicit copy constructors but implicit copies in your code.

I don't understand the difference.

I'm not an expert in writing real-time safe code, but I've spent close to a year working on allocation-free programming. Implicit copies of structs (value types) are as fast as implicit copies of integers, but implicit copies of types with owned heap memory invoke the copy constructor, which calls into the allocator. Or did I misunderstand your comment or get anything wrong?

replies(1): >>24308972 #
184. littlestymaar ◴[] No.24308448{16}[source]
Actually what I don't understand is how you can at the same time:

- consider that memory safety isn't paramount (you're not alone in this case, and it's usually the kind of discussion that happens over and over on Rust threads).

- and use the “Zig is memory-safe” as a marketing argument.

“Zig isn't memory safe but we think it's not what matters” is something I'm willing to hear even though I usually make a different trade-off, and I know a ton of C programmers who are fine with their languages not being memory-safe. But redefining memory safety[1] so that Zig can fit in, while at the same time arguing that memory safety is just a detail in the grand scheme of things, is just incomprehensible to me.

Also, your writing in this whole thread is full of resentment towards Rust and I don't think your animosity is helpful in any way: among the really tiny group of people who are currently experimenting with Zig there are people who actually love Rust and use it every day; I'm one of them and I know I'm not the only one. Bashing another language in a thread about a language you contribute to isn't the best way to build a welcoming community.

Good day.

[1] “Zig is (will be) memory safe”*

*as soon as you disable some compiler optimizations and use an allocator which is actually a memory-management runtime but not totally a GC because you need to free things yourself and if you make a mistake it will abort your program. Terms and conditions apply.

replies(1): >>24310985 #
185. qppo ◴[] No.24308972{6}[source]
Sorry I worded that poorly, what I meant is that you shouldn't be writing code where the fact an implicit copy constructor allocates is a concern at all, because C++ has very clear copy semantics.

With a handful of exceptions you essentially will never need to worry about this if you restrict assignment and argument passing of non-POD types to references within the critical code blocks. And in the case that you do need to worry about it, the copy constructor should be explicitly deleted anyway.

It's a footgun to be sure, but it's not a serious one with a bit of discipline. If you're used to doing real-time safe programming, you'll get paranoid about code that could invoke an allocator (or write your own).

replies(1): >>24309210 #
186. nyanpasu64 ◴[] No.24309210{7}[source]
> if you restrict assignment and argument passing of non-POD types to references within the critical code blocks.

It's a working strategy, but if you forget the `&` in `auto & x = document.foo; auto & y = x.field;` a single time, it might silently invoke the allocator. What actually happened to me was that it crashed because I returned a reference to a stack variable copied by mistake, when I meant to type a `&`. Pointers are probably less prone to accidental copying, but they have uglier dereference syntax and are nullable (excluding custom types).

Ever since that incident, I've been paranoid that I accidentally forgot the reference in another spot in the code. A few days ago, I debugged the code and set a breakpoint on malloc in the audio thread (LLDB crashes when listing threads, Visual Studio works) and found out my current codebase doesn't allocate on the audio thread. I hope I don't introduce any allocations.

To avoid this footgun, objects could be only copyable through an explicit `clone()` method like in Rust (which breaks std::vector<explicit_clone>), or by marking copy constructors as explicit (which you can't do to a std::vector).

replies(1): >>24309905 #
187. qppo ◴[] No.24309905{8}[source]
Most of this is solved by code review, unit testing, custom allocators, and if you really want rust-like guarantees, type traits. In this case std::is_trivially_copyable and std::is_trivially_destructible. Like you don't need an explicit .clone() method, you static_assert that all real-time-safe code only touches structs that are trivially copyable/destructible and write a custom allocator with optional checks to see if it's invoked in a real time context, and write a unit test to stress it. There are some places where this doesn't work, but they're obvious and pretty straightforward to handle. Ideally you'd have a slab allocator with constant time alloc/free while locked in the real time context that cleans itself up for real on resetting the system.

Really though, you shouldn't have an owned STL instance like std::vector near your real time code to begin with. You're seeing one of the reasons people writing performant code don't use the STL at all, even if it has gotten up to par with handrolled solutions in certain benchmarks.

188. pron ◴[] No.24310985{17}[source]
1. Safe Zig is(/will be) memory safe. It is not what you deploy, but it is one interesting aspect of Zig's design which makes it very different from "C with ASAN."

2. Nobody's goal is to use a memory-safe language. The goal is a safe and correct program. A memory-safe language is one way towards memory-safe programs, but the belief that this is the most effective way to achieve safe and correct programs is an interesting hypothesis, but it is just that. It is certainly possible that trading off those guarantees for something else -- like increased confidence across the board -- could result in safer, more correct programs, or at least not any worse.

> Also, your writing in this whole thread is full of resentment towards Rust and I don't think your animosity is helpful in any way

If you go over this thread again you will see that this is simply not true. I certainly did not bash Rust once. It is a very cool language with a bright future that many people love, but it just doesn't suit my personal tastes, that's all. And because both Rust and Zig are trying to provide safe low-level programming, and they do it in radically different ways, comparing their approaches to safety is interesting. If my points are incomprehensible to you, that might be because it is you who are acting with animosity and resentment that cloud your judgement.

189. elcritch ◴[] No.24311923{10}[source]
After reading your previous comment, I read a bit more about comptime. It does seem to be able to handle most of the cases I could think of wanting macros for, and it is a nice UX in that it's really like "bounded macros" and prevents building arbitrary new semantics. Though thinking on two levels in a given function does seem tricky to me, but perhaps that's just familiarity. Personally, I kind of like having separate macro vs code. Comptime seems like it'd almost be a better fit for Go than their generics proposal.
replies(1): >>24320353 #
190. smaddox ◴[] No.24312605[source]
I've been a financial backer of Zig for several months now, and plan to continue, because Andrew and the other contributors are pushing language design in directions that no other language is.

That being said, Zig's comptime is not a proper replacement for typeclasses/traits. Zig can do both comptime duck typing and vtable-based dispatch, but it cannot do proper bounded polymorphism type checking. It always fully evaluates types before type checking them. This might make it difficult (or impossible?) to provide type checking error messages of similar quality to Rust's. I'm not sure if there are any other practical consequences for realistic programs, though. I suspect there might be issues around interface stability guarantees, though I can't quite put my finger on why.

replies(1): >>24317201 #
191. pfg_ ◴[] No.24313059{4}[source]
It's the opposite - you have version 0.6.0, but the hello world there is for the latest master version of zig which can be downloaded at https://ziglang.org/download
192. pron ◴[] No.24317201{3}[source]
Right, they are only approximately comparable, but the important thing to remember is that features of formalisms are never goals in themselves, but rather means to various ends. The goal is never "to have interfaces/typeclasses/traits" but to be able to specify an algorithm that works for a variety of data structures with shared properties. Moreover, I think it would be a mistake for Zig library authors to try and replicate styles used in other languages. Zig provides sufficient mechanisms for expressing programs, and it will develop its own style. There will be elements that are analogous to those in other languages, but not identical.

Having said that, I do support a proposal for specifying a type at the parameter declaration with some type -> bool function.

I like this quote by Leslie Lamport about comparing formalisms (he talks about specification languages rather than programming languages, but the sentiment is the same):

> Comparisons between radically different formalisms tend to cause a great deal of confusion. Proponents of formalism A often claim that formalism B is inadequate because concepts that are fundamental to specifications written with A cannot be expressed with B. Such arguments are misleading. The purpose of a formalism is not to express specifications written in another formalism, but to specify some aspects of some class of computer systems. Specifications of the same system written with two different formalisms are likely to be formally incomparable… Arguments that compare formalisms directly, without considering how those formalisms are used to specify actual systems, are useless.

193. pron ◴[] No.24320353{11}[source]
The beauty of comptime is that, unlike with macros, you don't need to think on two levels. The semantics is the same as if everything were done at runtime. To read a comptime function you can completely ignore the distinction between compilation time and runtime. To write it you need to know that some operations are only available at compile-time.

See my comment here about "Zig' ": https://news.ycombinator.com/item?id=24293611

Perhaps now you see what I meant when I said that Zig's simplicity hides its radical design.

replies(1): >>24409486 #
194. nulltype ◴[] No.24329077{5}[source]
I think they've been saying "We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it." since 2013: https://web.archive.org/web/20130410000959/https://golang.or...
replies(1): >>24376146 #
195. LakeByTheWoods ◴[] No.24329746{4}[source]
What frictions do you anticipate? String literals in zig are utf8 encoded.
196. mwkaufma ◴[] No.24354807{6}[source]
I'm not saying that the default allocation strategy isn't good (in the general case), just that it's not customizable (for special needs).
197. int_19h ◴[] No.24359229{7}[source]
C++-style closures are unrelated to custom allocators, since they're not heap-allocated.
replies(1): >>24401211 #
198. int_19h ◴[] No.24359288{6}[source]
> defer also cannot handle resources whose lifetimes do not correspond to nested scopes

It can, it's just more explicit about it. In a language with destructors, you'd do RAII here by having a list destructor that cleans up each element in turn. In a language with defer, the same destructor becomes a regular function that you'd invoke in the deferred expression.

replies(2): >>24379048 #>>24379109 #
199. int_19h ◴[] No.24359314{5}[source]
How is C# similar to C++ in that regard? It's a much higher-level language.
200. wtetzner ◴[] No.24376146{6}[source]
That's what they've been saying, but I don't buy it.

I mean, the workarounds are horrible code generation tools and reflection. How were those ever not considered to be more complex than generics?

201. ◴[] No.24379048{7}[source]
202. edflsafoiewq ◴[] No.24379109{7}[source]
The issue is with the individual elements, which get pushed to the list, popped off the list, moved around, pushed to a different list, etc.

The other issue is since there is no generic notion of a destructor, it isn't possible to write generic functions that destroy elements. If you call, say, replace_range on a list of strings, it will leak the replaced strings.

replies(1): >>24404277 #
203. mwkaufma ◴[] No.24401211{8}[source]
You are correct that the type the compiler creates for a lambda is constructed in place, usually on the stack, and performs no heap allocations. However, if you pass it to a std::function it will be _boxed_, and std::function _will_ heap-allocate the space for it. This allocation is what's not customizable.
replies(1): >>24404323 #
204. int_19h ◴[] No.24404277{8}[source]
> The issue is with the individual elements, which get pushed to the list, popped off the list, moved around, pushed to a different list, etc.

That's separate from the destructor for the entire list. It does mean that the code that removes an element from the list has to explicitly invoke the destructor for it - which is in agreement with using "defer" to explicitly invoking destructors for locals.

> The other issue is since there is no generic notion of a destructor, it isn't possible to write generic functions that destroy elements.

But you can have a generic notion of a destructor - that's orthogonal to whether destructors are invoked explicitly. You just have an interface (or trait, or whatever it's called) that exposes a destructor method for a type.

replies(1): >>24438025 #
205. int_19h ◴[] No.24404323{9}[source]
That's fair, but std::function is not specifically about lambdas (As Boost.Function, it predates them, in fact) - it's about wrapping an arbitrary callable in a way that allows erasing its type. Idiomatic C++ rarely uses that class - I don't think it's used anywhere else in the standard library, even though it has plenty of higher-order functions etc. Turns out that closures that can only be passed in and not returned are still plenty useful.
replies(1): >>24435168 #
206. mratsim ◴[] No.24409486{12}[source]
In Nim the equivalent is using a `static:` block, then everything inside has normal Nim syntax but evaluated at compile-time.

You can also do `const a = static(foo(x, y, z))` to force a normal function to be evaluated at compile time and store the result in a constant.

Hence you don't need to use macros for compile-time evaluation in Nim just like in Zig. However macros are necessary for AST manipulation.

207. mwkaufma ◴[] No.24435168{10}[source]
In my dayjob as a code-reviewer, I see it in code-bases a lot. Between the generic name and elevated status in the std namespace, it's a natural tool for developers to reach for, across experience-levels. I speculate that the boxing side-effects are not well understood, given how many times I have to lift them out of hot-loops (despite the many unverified claims that "oh, LLVM will inline and optimize that away, no worries, teehee").

In general, I have not observed a consensus for 'idiomatic' C++, even within a single project. I say this as someone who wishes there was, because my job would be a lot easier if dependencies were less heterogeneous :)

208. edflsafoiewq ◴[] No.24438025{9}[source]
You could but Zig does not.