
Open-source Zig book

(www.zigbook.net)
692 points by rudedogg | 56 comments
1. jasonjmcghee ◴[] No.45948044[source]
So despite this...

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

I just don't buy it. I'm 99% sure this is written by an LLM.

Can the author... Convince me otherwise?

> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.

> Welcome to the Zigbook. Your transformation starts now.

...

> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.

...

> This is not about memorizing syntax. This is about earning mastery.

replies(13): >>45948094 #>>45948100 #>>45948115 #>>45948220 #>>45948287 #>>45948327 #>>45948344 #>>45948548 #>>45948590 #>>45949076 #>>45949124 #>>45950417 #>>45951487 #
2. PaulRobinson ◴[] No.45948094[source]
You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around.

Can I also ask: so what if it is or it isn't?

While AI slop is infuriating, and the bubble hype is maddening, I'm not sure that every time somebody sees content whose style they don't like, we should call out that it "must" be AI; the debate over whether it is or isn't is at least as maddening. It feels like all content published now gets debated like this, and I'm definitely not enjoying it.

replies(1): >>45948343 #
3. Rochus ◴[] No.45948100[source]
Who cares?

Still better than just nagging.

replies(2): >>45948284 #>>45950274 #
4. rudedogg ◴[] No.45948115[source]
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.
5. gamegoblin ◴[] No.45948220[source]
Pangram[1] flags the introduction as totally AI-written, which I also suspected for the same reasons you did

[1] one of the only AI detectors that actually works, 99.9% accuracy, 0.1% false positive rate

replies(1): >>45950020 #
6. maxbond ◴[] No.45948284[source]
Using AI to write is one thing; claiming you didn't when you did should be objectionable to everyone.
replies(2): >>45948310 #>>45948398 #
7. simonklee ◴[] No.45948287[source]
It's just an odd claim to make when the text feels very much like AI-generated content and is published anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.

It doesn't take away from the fact that someone used a bunch of time and effort on this project.

replies(2): >>45948329 #>>45949675 #
8. Rochus ◴[] No.45948310{3}[source]
Who wants to be so petty.

I'm sure there are more interesting things to say about this book.

replies(1): >>45948345 #
9. ◴[] No.45948327[source]
10. jasonjmcghee ◴[] No.45948329[source]
To be clear, I did not dismiss the project or question its value - I simply questioned this claim, as my experience tells me otherwise, and they make a big deal in multiple places out of it being human-written with "No AI".
replies(1): >>45948340 #
11. simonklee ◴[] No.45948340{3}[source]
I agree with you. After reading a couple of the chapters I'd be surprised if this wasn't written by an LLM.
12. maxbond ◴[] No.45948343[source]
You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.

As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?

replies(1): >>45948873 #
13. the-anarchist ◴[] No.45948344[source]
It might just be that the author used AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. That doesn't mean the content is "AI-generated". The essence is still written by a human.
replies(3): >>45948624 #>>45948630 #>>45952441 #
14. maxbond ◴[] No.45948345{4}[source]
So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me.

I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?

replies(2): >>45948434 #>>45948497 #
15. littlestymaar ◴[] No.45948398{3}[source]
This.

I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than the set with just the latter.

But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?

16. ◴[] No.45948434{5}[source]
17. chris_pie ◴[] No.45948548[source]
I don't think so, I think it's just a pompous style of writing.
18. CathalMullan ◴[] No.45948590[source]
Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725
replies(1): >>45948724 #
19. maxbond ◴[] No.45948605{6}[source]
I understand being okay with a book being generated (some of the text I published in this manual [1] is generated), I can imagine not caring that the author lied about their use of AI, but I really don't understand the suggestion I write a book about a subject I just told you I'm clueless about. I feel like there's some kind of epistemic nihilism here that I can't fathom. Or maybe you meant it as a barb and it's not that deep? You tell me I guess.

[1] https://maxbondabe.github.io/attempt/intro.html

replies(1): >>45948869 #
20. lukan ◴[] No.45948624[source]
But then you cannot write that

"The Zigbook intentionally contains no AI-generated content—it is hand-written"

21. tredre3 ◴[] No.45948630[source]
> Doesn't mean that the author might not use AI to optimise legibility.

I agree that there is a difference between entirely LLM-generated, and LLM-reworded. But the statement is unequivocal to me:

> The Zigbook intentionally contains no AI-generated content—it is hand-written

If an LLM was used in any fashion, then this statement is simply a lie.

replies(1): >>45951528 #
22. smj-edison ◴[] No.45948724[source]
After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then move to while loops, then to slices. It just feels like a strange order. The "how it works" and "key insights" sections also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice) or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.
23. Rochus ◴[] No.45948869{7}[source]
I would rather care whether there is a book at all and whether it is useful.

> I write a book about a subject I just told you I'm clueless about

Use AI. Even if you use AI, it's still a lot of work. Or write a book about why people shouldn't let AI write their books.

replies(1): >>45948977 #
24. geysersam ◴[] No.45948873{3}[source]
> but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest ai generated text

Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does.

replies(1): >>45949021 #
25. maxbond ◴[] No.45948977{8}[source]
I'm also concerned whether it is useful! That's why I'm not gonna read it after receiving a strong contrary indicator (which was less the use of AI than the dishonesty around it). That's also why I try to avoid sounding off on topics I'm not educated in (which is to say, why I'm not writing a book about Zig).

Remember - I am using AI and publishing the results. I just linked you to them!

replies(1): >>45949037 #
26. maxbond ◴[] No.45949021{4}[source]
I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know.

Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing?

I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And its language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way. Even when they're writing poetically.

Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?

[1] https://poignant.guide/book/chapter-1.html

replies(1): >>45949406 #
27. Rochus ◴[] No.45949037{9}[source]
> I'm also concerned whether it is useful!

So you could do everyone a favour by giving a sufficiently detailed review, possibly with recommendations to the author on how to improve the book. Definitely more useful than speculating about the author's integrity.

replies(1): >>45949054 #
28. maxbond ◴[] No.45949054{10}[source]
I'm satisfied with what's been presented here already, and as someone who doesn't know Zig it would take me several weeks (since I would have to learn it first), so that seems like an unreasonable imposition on my time. But feel free to provide one yourself.
replies(1): >>45949143 #
29. ants_everywhere ◴[] No.45949076[source]
IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site in the same way as the behaviors many of the existing rules guard against.

Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.

replies(1): >>45949601 #
30. NoboruWataya ◴[] No.45949124[source]
> Can the author... Convince me otherwise?

Not disagreeing with you, but out of interest, how could you be convinced otherwise?

replies(3): >>45949313 #>>45950516 #>>45950868 #
31. Rochus ◴[] No.45949143{11}[source]
Well, there must have been a good reason why you don't like the book. I didn't see good reasons in this whole discussion so far, just a lot of pedantry. No commenter points to technical errors, inaccuracies, poor code examples, or pedagogical problems. The entire objection rests on subjective style preferences and aesthetic nitpicking rather than legitimate quality concerns.
replies(2): >>45949176 #>>45949191 #
32. ◴[] No.45949176{12}[source]
33. maxbond ◴[] No.45949191{12}[source]
I don't see what else I can say to help you understand. I think we just have very different values and world views and find one another's perspective baffling. Perhaps your preferred AI assistant, if directed to this conversation, could put it in clearer terms than I am able to.
34. jasonjmcghee ◴[] No.45949313[source]
I'm not sure, but I try my best to assume good faith / be optimistic.

This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair.

If the author had a good explanation... idk, maybe they're not a native English writer and used an LLM to translate, and that included the "no LLMs used" call-out, which got translated improperly, etc.

replies(1): >>45949474 #
35. PaulRobinson ◴[] No.45949406{5}[source]
It's been alleged that a major source of training data for many LLMs was libgen and SciHub - hardly casual.
replies(1): >>45949477 #
36. chris_pie ◴[] No.45949474{3}[source]
note that the front page also says: "61 chapters • Project-based • Zero AI"
37. maxbond ◴[] No.45949477{6}[source]
Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)?
38. ModernMech ◴[] No.45949601[source]
What degrades conversation is to lie about something being not AI when it actually is. People pointing out the fraud are right to do so.

One thing I've learned is that comment sections are a vital defense against the spread of AI content, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check the comments to see what others are saying.

If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.

replies(1): >>45949747 #
39. gre ◴[] No.45949675[source]
Did they actually spend a bunch of time and effort though? I think you could get an llm to generate the entire thing, website and all.

Check out the sleek-looking terminal: there's no ls or cd; it's just an AI hallucination.

40. ants_everywhere ◴[] No.45949747{3}[source]
It's 2025, people are going to use technology and its use will spread.

There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.

It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.

replies(1): >>45949823 #
41. ModernMech ◴[] No.45949823{4}[source]
I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this?
replies(1): >>45949882 #
42. ants_everywhere ◴[] No.45949882{5}[source]
To clarify there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology.
replies(1): >>45949911 #
43. ModernMech ◴[] No.45949911{6}[source]
There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts.
replies(2): >>45949979 #>>45950006 #
44. ◴[] No.45949979{7}[source]
45. maxbond ◴[] No.45950006{7}[source]
(Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.)
46. ants_everywhere ◴[] No.45950020[source]
Keep in mind that pangram flags many hand-written things as AI.

> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.

> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.

> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.

> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.

I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff.

https://www.reddit.com/r/teachingresources/comments/1icnren/...

replies(2): >>45950289 #>>45950887 #
47. Rochus ◴[] No.45950274[source]
My statement refers to this claim: "I'm 99% sure this is written by an LLM."

The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.

replies(1): >>45950505 #
48. gamegoblin ◴[] No.45950289{3}[source]
Weird to me that nobody ever posts the actual alleged false positive text in these criticisms

I've yet to see a single real Pangram false positive that was provably published when it says it was, yet plenty of comments claiming they exist

49. keyle ◴[] No.45950417[source]
I wish AI had the self-aware irony of adding vomit emojis to its sycophantic sentences.
50. maxbond ◴[] No.45950505{3}[source]
I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank this comment is probably also going to get flagged for the strong language you're using. I don't think either are abusive uses of flagging.

Additionally, please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature.

51. ummonk ◴[] No.45950516[source]
Git log / draft history
52. Jach ◴[] No.45950868[source]
To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.
53. agucova ◴[] No.45950887{3}[source]
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.
54. ninetyninenine ◴[] No.45951487[source]
The sweet irony of this post is that this very post itself is written by an LLM.
55. mrob ◴[] No.45951528{3}[source]
>If an LLM was used in any fashion, then this statement is simply a lie.

While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review.
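
For illustration, here is a minimal sketch of that classifier setup, assuming a Hugging Face causal LM; the model name, prompt wording, and threshold are illustrative assumptions, not anything taken from the book or this thread:

    # Use an LLM as a yes/no classifier with a single forward pass:
    # read the next-token probabilities, do no autoregressive generation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # hypothetical model choice
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def flag_for_review(paragraph: str, threshold: float = 0.5) -> bool:
        prompt = ("Does this paragraph contain any errors? "
                  "Answer only yes or no.\n\n" + paragraph + "\n\nAnswer:")
        inputs = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # next-token logits only
        probs = torch.softmax(logits, dim=-1)
        yes_id = tok.encode(" yes", add_special_tokens=False)[0]
        no_id = tok.encode(" no", add_special_tokens=False)[0]
        p_yes, p_no = probs[yes_id].item(), probs[no_id].item()
        # Flag the paragraph for human review when "yes" dominates the yes/no mass.
        return p_yes / (p_yes + p_no) > threshold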

56. blt ◴[] No.45952441[source]
Clarity in writing comes mostly from the logical structure of ideas presented. Writing can have grammar/style errors but still be clear. If the structure is bad after translation, then it was bad before translation too.