Open-source Zig book

(www.zigbook.net)
692 points by rudedogg | 2 comments
shuraman7 ◴[] No.45948508[source]
It's really hard to believe this isn't AI generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they arrived at the solution of using the deprecated way. So I guess this is actually handwritten, which is amazing, because it looks like the best resource I've seen so far for Zig.
replies(2): >>45948572 #>>45948933 #
blks ◴[] No.45948933[source]
> It's really hard to believe this isn't AI generated

A case of a person who relies on LLMs so much that they cannot imagine someone doing something big on their own.

replies(1): >>45948985 #
shuraman7 ◴[] No.45948985[source]
It's not only the size: it was pushed all at once, anonymously, using text that strongly resembles that of an AI. I still think some of the text is AI generated. Perhaps not the code, but the wording of the prose just reeks of AI.
replies(2): >>45949190 #>>45950261 #
BlackjackCF ◴[] No.45949190[source]
Can you provide some examples where the text reeks of AI?
replies(3): >>45949233 #>>45950095 #>>45950142 #
dilap ◴[] No.45950095[source]
I read the first few paragraphs. Very much reads like LLM slop to me...

E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it."

If we had a reliable oracle, I would happily bet a $K on significant LLM authorship.

replies(1): >>45951635 #
sgt ◴[] No.45951635[source]
Yeah, and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's hard to enjoy this book knowing it was likely made by an LLM.
replies(1): >>45954407 #
dilap ◴[] No.45954407[source]
If you find it useful, there's no harm in enjoying it! The main problem with AI content is that it's just not good enough...yet. It'll get there; the LLMs just need more real-world feedback incorporated, rather than being the ultimate has-read-everything, actually-knows-nothing dweeb (a lot of humans are like this too). You can see the first signs of overcoming this in the latest models' coding skills, which are stronger via RL, I believe. (Not firsthand knowledge though -- pot calling the kettle black there.)