
69 points Jrxing | 2 comments
BergAndCo ◴[] No.45661310[source]
[flagged]
replies(2): >>45664064 #>>45672501 #
Jrxing ◴[] No.45664064[source]
Hi, thanks for digging out who I am. Yes, I am the author of the blog and the project.

We polished the blog for several days. I didn't get how you could conclude that this is AI generated. Is it too good to be human written?

replies(1): >>45664701 #
anonymous908213 ◴[] No.45664701[source]
My impression was that it was likely LLM-written and human-reviewed. Due to a lack of knowledge of the subject/field, I can't comment on the substance of the technical details, which often reveal the shortcomings of LLM babble, but the writing style certainly comes across as that of an LLM.

Most evident is the incoherent use of bold text littered throughout the article, together with the infamous and poorly deployed em-dash spam. This snippet stood out to me particularly badly, since it does not seem like a case where even one of those odd humans who love em-dashes would use one:

"You might have heard that PagedAttention manages the KV cache using memory pages, which significantly improves memory utilization. That’s true—*but only within a single application.*"

Then you get lines like this one, which combine both random bold text and the em-dash with my most-hated LLMism, "it's not just X, but Y":

"The history of CPU systems shows that *efficiency is not just a hardware problem—it’s also a system design problem.*"

The introductory paragraph also has this (yet again, randomly bolded) LLM sensationalization that a human technical writer would be thoroughly embarrassed to have associated with their writing:

"Behind the $300 billion projected spend on GPU hardware in 2025 lies *a dark truth*: much of this expensive hardware sits *vastly underutilized.*"

Not to mention it's repeated...

"Yet behind the headlines of record spending lies a *quieter story*: much of this expensive hardware sits *vastly underutilized.*"

Your response of "is it too good to be human written" certainly doesn't restore confidence, notwithstanding the lack of humility required to say that about what is allegedly your own writing. LLM writing is visible because it is awful, if you have any comprehension of what good writing looks like. The idea that LLM writing could possibly be "too good" is a truly despairing belief for someone to hold, because it means they themselves have so little understanding of good writing that they think an LLM can produce it.

I almost wanted to give you a pass for having an LLM write an English article for you, since your response hints that English is not your native language ("I didn't get how you could conclude" is a very ESL-like tense mistake). But you apparently have a Ph.D. and are working as a professor. I'm not familiar with academic standards these days, but is it really accepted to claim LLM output as your own writing...?

replies(1): >>45668609 #
1. JonChesterfield ◴[] No.45668609[source]
Glossing over "is academia commonly fraudulent" as rather too easy a target, LLMs do tend to write much better than people in languages said people don't really know.

If I wrote the above in Spanish, it would be extremely difficult to guess what I was trying to say. If I asked an LLM to translate it, some of the ideas would get across.

replies(1): >>45671243 #
2. anonymous908213 ◴[] No.45671243[source]
Sure, but you'd still expect people to know from seeing LLM outputs in their own language that it's not going to be "too good to be human". An LLM will write a better Chinese essay than I could because I don't speak Chinese very well at all, but it's quite a leap to get from "this writes Chinese essays better than myself, a non-speaker" to "wow, this writes Chinese better than any human!".