Where are the grownups in the room?
It sort of reminds me of those marketing sites I used to see selling a product, the ones that were a bunch of short paragraphs and one-liners. Again, it's difficult to articulate, but those were ubiquitous like 5 years ago, and I can see where AI would have learned it from.
It's also tough because if you're a good writer you can spot it more easily and you can edit LLM output to hide it, but then you probably aren't leaning on LLMs to write for you anyways. But if you aren't a good writer, or your English isn't strong, you won't pick up on it, and even if you only use the AI to rework your own writing or generate fragments, it still leaks through.
Now that I think about it, I'm curious whether this phenomenon exists in other languages besides English...
Honestly, with the way the world is going, you might as well just ask AI to generate the chat logs from the article. Who cares if it's remotely accurate? It doesn't seem like anyone cares when it comes to anything else anyways.
could be summed up as "and not a single bit of productivity was had that day"
Meanwhile nothing actually changed and the result is pretty much the same anyways.
The main point I'd like to raise in this comment, though, is that one of us is wrong, maybe me, maybe you, and our internal LLM radar / vibe check is not as strong as we think. That worries me a bit. LLM accusations are probably becoming akin to the classic "You're a corporate shill!"
I'm beginning to wonder how many of the "This was written by AI!" comments are AI-generated.
Someone linked this article of yours from 7 years ago.
https://www.sanity.io/blog/getting-started-with-sanity-as-a-...
It's well written and obviously human-made. I'm curious what you think the differences are.
But actually, I think I shouldn't have needed to identify any signs. It's the people claiming something is the work of an LLM based on little more than gut feeling who should be asked to provide more substance. The length of sentences? The number of bullet points? That's really thin.
However, there is evidence that writers who have experience using LLMs are highly accurate at detecting AI-generated text.
> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts’ free-form explanations shows that while they rely heavily on specific lexical clues, they also pick up on more complex phenomena within the text that are challenging to assess for automatic detectors. [0]
Like the paper says, it's easy to point to specific clues in AI-generated text: the overuse of em dashes, overuse of inline lists, unusual emoji usage, title case, frequent use of specific vocabulary, the rule of three, negative parallelisms, elegant variation, false ranges, etc. But harder to articulate, and perhaps more important for recognition, are the overall flow, sentence structure and length, and the various stylistic choices that scream AI.
It's also worth noting that the author never actually stated that they did not use generative AI for this article. Saying that their hands were on the keyboard, or that they reworked sentences and got feedback from coworkers, doesn't mean AI wasn't used. That they haven't straight up said "No AI was used to write this article" is another indication.
I expect that they did in some small way, especially considering the source.
But not to an extent where it was anywhere near as relevant as the actual points being made. "Please don't complain about tangential annoyances," the guidelines say.
I don't mind at all that it's pointed out when an article is nothing more than AI ponderings. Sure, call out AI fluff, and in particular, call out an article that might contain incorrect confabulated information. This just wasn't that.