
135 points | barddoo | 1 comment

Writing Redis from scratch in Zig.
johnisgood ◴[] No.45308123[source]
Seems like LLMs are getting good at Zig (with some help, I presume).
replies(2): >>45308193 #>>45308553 #
mtlynch ◴[] No.45308193[source]
Is there anything about this project that seems LLM-generated?

I've found that LLMs are particularly bad at writing Zig because the language evolves quickly, so LLMs that are trained on Zig code from two years ago will write code that no longer compiles on modern Zig.
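
For instance, the for-loop index-capture syntax changed in Zig 0.11, so code an LLM learned from older releases simply fails to compile today. A minimal sketch of that kind of breakage (illustrative only, not taken from the project under discussion):

    const std = @import("std");

    pub fn main() void {
        const items = [_]u8{ 1, 2, 3 };

        // Pre-0.11 syntax that older training data still produces;
        // it is a compile error on modern Zig:
        //     for (items) |item, i| { ... }

        // Modern Zig (0.11+) takes the index counter as a second operand:
        for (items, 0..) |item, i| {
            std.debug.print("items[{d}] = {d}\n", .{ i, item });
        }
    }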

replies(4): >>45308296 #>>45308429 #>>45308798 #>>45311161 #
rmonvfer ◴[] No.45308429[source]
As an avid Claude Code user, I can tell you with 99% probability that the README is LLM-generated. It has exactly the same structure and wording Claude uses (with some human modification, of course, because otherwise it would be filled with emojis).

In my experience, when you work with agentic development tools, you describe your goals and give the agent some constraints like “use modern Zig” or “always run tests”… and when you then ask it to write a README, it usually reproduces those constraints more or less verbatim.

The same thing happens with the features section: it reads like instructions for an LLM.
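
To make that concrete, a hypothetical set of agent instructions (file name and wording invented here, not taken from the actual repo) might look like this:

    # CLAUDE.md (hypothetical)
    - Use modern Zig; target the latest stable release.
    - Always run the test suite after every change.
    - Keep the code idiomatic and allocation-conscious.

A README generated with that context tends to echo those bullets back almost verbatim as a “Features” or “Development” section, which is the tell I mean.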

I might be wrong, but I spend way too much time using Claude, Gemini, and Codex… and IMHO it’s pretty obvious.

But hey, I don’t think it’s a problem! I write a lot of code using LLMs, mostly for learning (and… ahem, some of it might end up in production), and I’ve always found them great tools for learning (assuming you use appropriate context engineering and make sure the agent has access to up-to-date docs and all of that). For example, I wanted to learn Rust, so I half-vibed a GPUI-based chat client for LLMs that works just fine, and surprisingly enough, I actually learned and even had some fun.

replies(3): >>45308710 #>>45311137 #>>45311718 #
johnisgood ◴[] No.45311137{3}[source]
> But hey, I don’t think it’s a problem! I write a lot of code using LLMs, mostly for learning (and… ahem, some of it might end up in production), and I’ve always found them great tools for learning (assuming you use appropriate context engineering and make sure the agent has access to up-to-date docs and all of that). For example, I wanted to learn Rust, so I half-vibed a GPUI-based chat client for LLMs that works just fine, and surprisingly enough, I actually learned and even had some fun.

As I wrote in another comment, it is not inherently a bad thing. I use LLMs too (while knowing what I am doing), and if the project works, why not?