I've found that LLMs are particularly bad at writing Zig because the language evolves quickly: a model trained on Zig code from two years ago will happily write code that no longer compiles on a current compiler.
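A concrete, purely illustrative example of that churn: around Zig 0.11 the cast builtins dropped their explicit type parameter, and `std.mem.split` was replaced by `splitScalar`/`splitSequence`. A minimal sketch of what that looks like (the commented-out lines are roughly the pre-0.11 idioms a model trained on old code tends to reproduce):

```zig
const std = @import("std");

pub fn main() void {
    const x: u32 = 300;

    // Older style, as often emitted by models trained on pre-0.11 code
    // (no longer compiles on a current compiler):
    //   const b = @intCast(u8, x & 0xFF);
    //   var it = std.mem.split(u8, "a,b,c", ",");

    // Current style: cast builtins infer the result type from context,
    // and split was renamed to splitScalar / splitSequence.
    const b: u8 = @intCast(x & 0xFF);
    var it = std.mem.splitScalar(u8, "a,b,c", ',');

    while (it.next()) |part| {
        std.debug.print("{s} {d}\n", .{ part, b });
    }
}
```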
There seems to be a fair amount of stigma around using LLMs, and many people who use them are uncomfortable talking about it.
It's a weird world. Depending on who is at the wheel, whether an LLM was used _can_ make no difference at all.
But the problem is, you can have no idea what you're doing and still produce something that feels carefully hand-crafted - a really great project on the surface - while it hides broken features or outright false claims about functionality, often to the surprise of the author. They weren't trying to mislead; they just didn't take the time to check that it actually did everything the LLM said it did.
I don't think using LLMs is inherently a bad thing; it only becomes one if you have absolutely no clue what you're doing. But even then, if the project is usable and works as advertised, why not? shrugs
As for the link, that is exactly the code that caught my eye, besides the README.md itself. The LRU eviction thing is what GPT (and possibly other LLMs) always comes up with in my experience, and he could at least have had it properly implemented. :D
Edit: I'm glad the author confirmed the use of an LLM. :P