
135 points | barddoo | 21 comments

Writing Redis from scratch in Zig.
johnisgood ◴[] No.45308123[source]
Seems like LLMs are getting good at Zig (with some help, I presume).
replies(2): >>45308193 #>>45308553 #
1. mtlynch ◴[] No.45308193[source]
Is there anything about this project that seems LLM-generated?

I've found that LLMs are particularly bad at writing Zig because the language evolves quickly, so LLMs that are trained on Zig code from two years ago will write code that no longer compiles on modern Zig.
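
To illustrate the kind of breakage I mean (a generic example, not something from this project): code in 2021-era training data still calls std.debug.warn and formats strings with a bare {}, and neither compiles on current Zig:

    const std = @import("std");

    pub fn main() void {
        const name = "zedis";
        // What an LLM trained on old Zig might emit; warn() no longer exists:
        // std.debug.warn("starting {}\n", .{name});
        // Modern Zig: warn() is gone and string slices need the {s} specifier.
        std.debug.print("starting {s}\n", .{name});
    }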

replies(4): >>45308296 #>>45308429 #>>45308798 #>>45311161 #
2. 5- ◴[] No.45308296[source]
https://github.com/barddoo/zedis/blob/87321b04224b2e2e857b67...

these seem to occur only in college assignment projects, and in the output of text generators trained on those.

replies(2): >>45308389 #>>45308685 #
3. WD-42 ◴[] No.45308389[source]
I will never place emojis in any of my readmes ever again.
replies(2): >>45308404 #>>45308424 #
4. chucky_z ◴[] No.45308404{3}[source]
spell out 'development' with hammer emojis. bring ascii art back as emoji art.

(i actually do this in slack messages and folks find it funny and annoying, but more funny)

5. tayo42 ◴[] No.45308424{3}[source]
People were doing this before LLMs; otherwise, how did they learn it?
replies(2): >>45308486 #>>45308701 #
6. rmonvfer ◴[] No.45308429[source]
As an avid Claude Code user, I can tell you with 99% probability that the README is LLM-generated. It has exactly the same structure and wording Claude uses (of course, with some human modification, because otherwise it would be filled with emojis).

In my experience, when you work with agentic development tools, you describe your goals and give them constraints like “use modern Zig” or “always run tests”… and when you ask for a README, the model will usually reproduce those constraints more or less verbatim.

The same thing happens with the features section: it reads like instructions for an LLM.

I might be wrong but I spend way too much time using Claude, Gemini, Codex… and IMHO it’s pretty obvious.

But hey, I don’t think it’s a problem! I write a lot of code using LLMs, mostly for learning (and… ahem, some of it might end up in production), and I’ve always found them great tools for learning (provided you use appropriate context engineering and make sure the agent has access to updated docs and all of that). For example, I wanted to learn Rust, so I half-vibed a GPUI-based chat client for LLMs that works just fine, and surprisingly enough, I actually learned and even had some fun.

replies(3): >>45308710 #>>45311137 #>>45311718 #
7. WD-42 ◴[] No.45308486{4}[source]
Sure, but LLMs absolutely love to do it for some reason.
replies(1): >>45308842 #
8. nine_k ◴[] No.45308685[source]
I have been doing this for years, even before LLMs were a thing. No, not in college assignments; by the time emoji appeared, I had long since walked out of my PhD program and gone into industry.

I put such emojis at the beginning of big headings because my eyes detect compact shapes and colors faster than entire words and sentences. This helps me (and hopefully others) locate the right section more easily.

In Slack, I put large emojis at the beginning of messages that need to stand out. These are few, and emojis work well in this capacity.

(Disclaimer: I may contain a large language model of some kind, but very definitely I cannot be reduced to it in any area of my activity.)

replies(2): >>45308699 #>>45310502 #
9. adastra22 ◴[] No.45308699{3}[source]
FWIW it is really confusing to me and others. What is this emoji supposed to mean? Heck if I know.

But the telltale signs are far more than just that. The whole document is exactly the kind of README produced by Claude.

10. adastra22 ◴[] No.45308701{4}[source]
That's why he said "never again".
11. adastra22 ◴[] No.45308710[source]
I don't know why you're being downvoted. This follows the LLM-generated-README template perfectly. And yeah, it usually ends up being a dumping ground for the constraints you gave it, almost verbatim.
12. jasonjmcghee ◴[] No.45308798[source]
I skimmed, for me it was this: https://github.com/barddoo/zedis/blob/87321b04224b2e2e857b67...

There seems to be a fair amount of stigma around using LLMs, and many people who use them are uncomfortable talking about it.

It's a weird world. Depending on who is at the wheel, whether an LLM is used _can_ make no difference.

But the problem is, you can have no idea what you're doing and make something that feels like it was carefully hand-crafted by someone - a really great project - but there are hidden issues or outright lies about functionality, often to the surprise of the author. Like, they weren't trying to mislead; they just didn't take the time to see if it did all of what the LLM said it did.

replies(3): >>45308978 #>>45310151 #>>45311116 #
13. tayo42 ◴[] No.45308842{5}[source]
I just took a look at a README I had Cursor write a couple of months ago, and there are no emojis.
14. boredemployee ◴[] No.45308978[source]
Three months ago I was vibe coding an idea, and for some reason (and luck) I went to check a less important part of the code and saw that the LLM had replaced the env variable for an API key and hard-coded the key explicitly in the code. That was scary. I'm glad I caught it before the PR and shit like that.
15. barddoo ◴[] No.45310151[source]
Agree. I used it mostly for getting ideas; the memory management, for example: Gemini listed so many different ways of managing memory that I didn’t even know existed. I knew I wanted to pre-allocate memory like TigerBeetle does, so the hybrid approach was perfect. Essentially it has three different allocators: a huge one for the cache, an arena allocator for context and intermediate state like pub/sub, and a temporary one for requests. It was 100% Gemini’s idea.
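
For anyone curious, here is a rough sketch of what that three-allocator split could look like in Zig. This is my own illustration of the idea described above, not the project's actual code; sizes and names are made up:

    const std = @import("std");

    pub fn main() !void {
        // 1. One big up-front allocation for the cache, TigerBeetle-style.
        const cache_buf = try std.heap.page_allocator.alloc(u8, 64 * 1024 * 1024);
        defer std.heap.page_allocator.free(cache_buf);
        var cache_fba = std.heap.FixedBufferAllocator.init(cache_buf);
        const cache_alloc = cache_fba.allocator();

        // 2. An arena for longer-lived intermediate state (e.g. pub/sub bookkeeping).
        var context_arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer context_arena.deinit();
        const context_alloc = context_arena.allocator();

        // 3. A per-request arena, freed or reset after each command is handled.
        var request_arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer request_arena.deinit();
        const request_alloc = request_arena.allocator();

        // Placeholders: a real server would hand these allocators to the cache,
        // pub/sub state, and request handlers respectively.
        _ = cache_alloc;
        _ = context_alloc;
        _ = request_alloc;
    }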
16. MangoToupe ◴[] No.45310502{3}[source]
I had assumed they were referring to stuff like "Type-safe operations with compile-time guarantees". What a weird detail to add to a readme. And the whole section is like that. I wonder if that's part of a prompt leaking through.
17. johnisgood ◴[] No.45311116[source]
I generally do not think it is a bad thing. I use LLMs too, and I know what I am doing, so I do not know if it would qualify as vibe coding.

I think using LLMs is not inherently a bad thing; it is only a problem if you have absolutely no clue about what you are doing, but even then, if the project is usable and works as advertised, why not? shrugs

As for the link, that is exactly the code that caught my eye, besides the README.md itself. The LRU eviction thing is what GPT (and possibly other LLMs) always comes up with, in my experience, and he could at least have had it implemented properly then. :D

Edit: I am glad the author confirmed the use of an LLM. :P

18. johnisgood ◴[] No.45311137[source]
> But hey, I don’t think it’s a problem! I write a lot of code using LLMs, mostly for learning (and… ahem, some of it might end up in production), and I’ve always found them great tools for learning (provided you use appropriate context engineering and make sure the agent has access to updated docs and all of that). For example, I wanted to learn Rust, so I half-vibed a GPUI-based chat client for LLMs that works just fine, and surprisingly enough, I actually learned and even had some fun.

As I wrote in another comment, it is not inherently a bad thing. I use LLMs too (while knowing what I am doing), and if the project works, why not?

19. johnisgood ◴[] No.45311161[source]
Fair enough, but I have made a couple of projects in Odin using Claude, and Odin is evolving too, and it is much more obscure than Zig. I made these projects successfully by feeding the LLM the Odin documentation and giving it example projects... thus, if I could create Odin projects (with some iterations and hand-holding, true), then Zig should work even better.
20. samiv ◴[] No.45311718[source]
But wait... if the chatbot is writing the code, then it is not you who is writing the code, but the chatbot.
replies(1): >>45312089 #
21. hnlmorg ◴[] No.45312089{3}[source]
That would be true if you were vibe coding. However, “normal” agentic development is a little more like “pair programming”, in my opinion.