
    Open-source Zig book

    (www.zigbook.net)
    692 points by rudedogg | 11 comments
    eibrahim No.45949622
    So many comments about the AI generation part. Why does it matter? If it's good, accurate, and helpful, why do you care? That's like saying I can't trust your results because you used a calculator to solve your equations.

    I am just impressed by the quality and details and approach of it all.

    Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)

    replies(13): >>45949682 #>>45949740 #>>45949749 #>>45949796 #>>45950174 #>>45950334 #>>45950401 #>>45950481 #>>45951133 #>>45951367 #>>45951767 #>>45953838 #>>46094809 #
    gassi No.45949749
    > Why does it matter?

    Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.

    replies(2): >>45949854 #>>45950125 #
    1. DrNosferatu No.45949854
    Humans get things wrong too.

    Quality prose usually only becomes that after many reviews.

    replies(5): >>45949983 #>>45950389 #>>45950602 #>>45950870 #>>45951344 #
    2. gassi No.45949983
    AI tools make different kinds of mistakes than humans do, and that's the problem. We've spent eons building systems to mitigate and correct human mistakes; we have nothing comparable for the subtler kinds of mistakes AI tends to make.
    3. righthand No.45950389
    That's fine. Write it out yourself, then ask an AI how it could be improved and review its suggestions as a diff. Now you've given it double human review (once in creation, then again when reviewing the diff) and a single AI review.
    replies(1): >>45951284 #
    4. loeg No.45950602
    AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.
    replies(1): >>45951127 #
    5. ares623 No.45950870
    Fortunately, we can't just get rid of humans (right?), so we have to use them _somehow_.
    6. rootlocus No.45951127
    Presumably the "subject matter expert" will review the output of the LLM, just like any reviewer would. I think it's disingenuous to assume that just because someone used AI, they didn't look at or review the output.
    replies(1): >>45951409 #
    7. maxbond No.45951284
    That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.
    replies(1): >>45955934 #
    8. DrNosferatu No.45951344
    If AI is used "fire and forget," sure - there's a good chance of slop.

    But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.

    replies(1): >>45951399 #
    9. littlestymaar No.45951399
    Absolutely.

    But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?

    Using AI is fine; it's a tool, and it's not bad per se. But claiming very loudly that you didn't use that tool when it's obvious you did is very off-putting.

    10. littlestymaar No.45951409{3}
    A serious one yes.

    But why would a serious person claim that they wrote this without AI when it's obvious they used it?!

    Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.

    11. righthand No.45955934
    The point was to bookend the automated review with your human review, not to stamp out a blueprint.