I'm not trying to be recalcitrant; rather, I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style that was based on someone else's style, and given that some wiki users had prolific levels of contributions, much of their naturally generated text would register as highly likely to be "AI" via those bullshit AI detector tools.
So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.
Not saying the article is bad; it seems pretty good. Just that there are indications it may have had AI assistance.
EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.
The post just repeats things over and over again, like the Brett Farmer thing, the "four months", telling us three times that they knew "my BTC balance and SSN" and repeatedly mentioning that it was a Google Voice number.
The sentence-level stuff was somewhat improved compared to whatever “jaunty LinkedIn Voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that’s better here.
But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…
The biggest thing that stands out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) is the assertions/conclusions that would mean something if real people drew them, but that don’t follow from the specifics. Consider:
“The Timeline That Doesn't Make Sense
Here's where the story gets interesting—and troubling:
[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”
Disappointing, sure. And definitely frustrating. But like… “doesn’t make sense?” How not so? Is it really surprising or unreasonable that a major investigation into a foreign contractor, one with law enforcement and regulatory implications as well as nine-figure customer-facing damages, takes a large organization time? Doesn’t it make sense (even if it’s disappointing), when stuff that serious and complex happens, that they wait until they’re sure before they say something to an individual customer?
I’m not saying it’s good customer service (they could at least drop a reply with “the investigation is ongoing and we can’t comment til it’s done”). There’s lots of words we could use to capture the suckage besides “doesn’t make sense.” My issue is more that the AI presents it as “interesting—and troubling; doesn’t make sense” when those things don’t really follow directly from the bullet list of facts afterward.
Each big categorical claim that the AI introduced this way just… doesn’t quite match what it purports to describe. I’m not sure exactly how to pin it down, but it’s as if it’s making its judgments entirely without considering the broader context… which I guess is exactly what it’s doing.
Of course, unlike those people, LLMs are capable of expressing novel ideas that add meaningful value to diverse conversations beyond loudly and incessantly ensuring everyone in the thread is aware of their objection to new technology they dislike.
It's the task of anybody presenting LLM output for third parties to read (at least without a disclaimer that a given text is unvetted LLM output) to make damn sure it's the former and not the latter.
Way too verbose to get the point across, excessive usage of un/ordered bullets, em dashes, "what i reported / what coinbase got wrong", it all reeks of slop.
Once you notice these micro-patterns, you can't unsee them.
Would you like me to create a cheat sheet for you with these telltale signs so you have it for future reference?
The article isn't paywalled. Nobody was forced to read it. Nobody was prohibited from asking an LLM to summarize the article.
Whining about LLM written text is whining about one's own deliberate choice to read an article. There is no implied contract or duty between the author and the people who freely choose to read or not read the author's (free) publication.
It's like walking into a (free) soup kitchen, consuming an entire bowl of free soup, and then whining loudly to everyone else in the room about the soup being too salty.
We're probably reading LLM-assisted or even generated texts many times per day at this point, and as long as I don't notice that my time is being wasted by bad writing or hallucinated falsehoods, I'm perfectly fine with it.
There are still some signs you can use to tell content is AI-written: verbosity, use of bold, specific HTML styling, etc. I see no issues with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too - however, that isn't the case for all content.
Please, at least put a disclaimer on top so I can ask an AI to summarize the article and complete the cycle of entropy.
And so at this point the excessive bullet points and similar filler trash are also just an expression of whatever stupid people think they prefer.
Maybe I'm being too harsh, and it's not that the raters are stupid in this constellation; rather, it's the ones who think you could improve the LLM by asking raters to make a few very thin judgements.
Well if that's how we identify humans I for one prefer our new LLM overlords.
A lot of people who say stuff like "boo AI!" are not only setting the bar for humanity very low, they're also discouraging intellectualism and intelligent discourse online. Honestly, if an LLM wrote a good think piece, I'd prefer that over "human slop".
I just wish people would critique a text on its own merits instead of inventing strawman arguments about how it was written.
Oh and, for the provocative effect — I'll end my comment with an em dash.
Generating thousands of words because it's easy is exactly the problem with AI generated content. The people generating AI content think about quantity not quality. If you have to type out the words yourself, if you have to invest the time and energy into writing the post, then you're showing respect for your readers by making the same investment you're asking them to make... and you are creating a natural constraint on the verbosity because you are spending your valuable time.
Just because you can generate 20 hours of output in 30 minutes doesn't mean you should. I don't really care whether or not you use AI on principle; if you can generate great content with AI, go for it. But your post is classic AI slop: it's a verbose nightmare, it's words for the sake of words, it's from the quantity-over-quality school of slop.
> I had a blog 20 years ago but since then I never had time to write content again (too time consuming and no ROI) - so the alternative would be nothing.
Posting nothing is better than posting slop, but you're presenting a false dichotomy. You could have spent the 30 minutes writing the post yourself and posted 30 minutes of output. Or, if you absolutely must use ChatGPT to generate blog posts, ask it to produce something that is a few hundred words at most. Remember the famous quote...
"If I had more time, I would have written a shorter letter."
If ChatGPT can do hundreds of hours of work for you then it should be able to produce the shortest possible blog post, it should be able to produce 100 words that say what you could in 3,000. Not the other way around!
> Over‑polished prose – flawless grammar, overly formal tone, and excessive wordiness.
> Repetitive buzzwords – phrases like “delve into,” “navigate,” “vibrant,” “comprehensive,” etc.
> Lack of perspective shifts – AI usually sticks to a single narrative voice; humans naturally mix first, second, and third person.
> Excessive em‑dashes – AI tends to over‑use them, breaking flow.
> Anodyne, neutral stance – AI avoids strong opinions, trying to please every reader.
> Human writing often contains minor errors, idiosyncratic punctuation, and a more nuanced, opinionated voice.
> It's not just x, it's y
Overuse of "Here's..." to introduce or further every concept or idea.
A few parts of this article particularly jump out, such as the two lists following the "The SMS Flooding Attack" section (which incidentally begins "Here's where..."). A human wouldn't write them as lists (the first list in particular); they'd be normal paragraphs. Short bulleted lists are a good way to get across simple bite-sized pieces of information quickly, but that's in cases where people aren't going to read a large block of text, e.g. in ads. Overusing them in the wrong medium, breaking up a piece of prose like this, just hurts its flow and readability.
But we're on a site about sharing content for intellectual discussion, right? So when people keep posting the same garbage without labeling it, and you figure it out halfway through the article, it's frustrating to find out you wasted your time.
To use your soup analogy: imagine this was a website to share restaurants. You see a cool new Korean place upvoted, so you stop by there for lunch sometime. You sit down, you order, and then ten minutes later, Al comes out with his trademark thin, watery soup again.
In that scenario, it's entirely reasonable to leave a comment, "Ugh, don't bother with this place, it's just Al and his shitty soup again."
You're whining about someone else performing an act of charity. If you want better soup, go to a restaurant, not a soup kitchen. The soup kitchen doesn't care about your complaints. The target audience obviously isn't food critics, it's people who meaningfully benefit from free soup.
For a food critic to walk into the soup kitchen and complain is an egocentric act by the food critic that mistakenly assumes the whole world revolves around them. The complaint itself is bizarrely egotistical, entitled, and offputting.
You're staking personal reputation on the output of something you can expect to be wrong. If someone gets a suspicious email, follows your advice, and ChatGPT incorrectly assures them that it's fine, then the scammed person would be correct in thinking you're a person with bad advice.
And if you don't believe my arguments, maybe just ask ChatGPT to generate a persuasive argument against using ChatGPT to identify scam emails.