I am just impressed by the quality and details and approach of it all.
Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
Quality prose usually only becomes quality after many rounds of review.
But if you carefully review and iterate on the contributions of your writers - human or otherwise - you get a quality outcome.
But why would you trust the author to have done that when they are so obviously lying about not using AI?
Using AI is fine; it's a tool, and it's not bad per se. But loudly claiming you didn't use that tool when it's obvious you did is very off-putting.