I'd love it if we could stop the "Oh, this might be AI, so it's probably crap" reflex that has taken over HN recently.
1. There is no evidence this is AI-generated. The author says it wasn't, and on the specific issue you cite, he explains why he's struggling to understand it, even if the answer is "obvious" to most people here.
2. Even if it were AI-generated, that wouldn't automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas where we can be reasonably confident AI adds value, if the tools are used carefully. It's far better at that than at producing working software, because synthesising knowledge seen elsewhere and recasting it in a new, relatable form (which is what LLMs do, and excel at) is the job of teaching.
3. Whether it's maintained is neither here nor there. Can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in six months.
4. If there are hallucinations, we'll find them, settle the AI-generated claim one way or another, and judge the overall value. One hallucination per paragraph is a problem. One every 5 chapters might be, but probably isn't. One in 62 chapters beats the error rate of human writers by quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to ship a clean v1.0. Maybe it was all AI-generated and they're hiding that. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there's a rush of upvotes for "calling it out". That isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it doesn't actually help somebody who is keen to learn Zig decide whether this resource is useful: plenty of programming tutorials written by human experts are utterly useless, and this might be a lot better.