
140 points handfuloflight | 16 comments
1. samdoesnothing ◴[] No.46261425[source]
I'm really getting tired of gen AI, and this article is a perfect microcosm: at least partially, if not fully, AI generated, discussing a vibe-coded CMS built by an AI startup. It's several layers of marketing and no serious engineering.

Where are the grownups in the room?

replies(4): >>46261442 #>>46261481 #>>46261542 #>>46261754 #
2. Tenemo ◴[] No.46261442[source]
It reads very LLM-y to me, too: the short sentences, the dramatic pauses. But maybe I'm oversensitive nowadays; it's really hard to tell at times.
replies(1): >>46261475 #
3. samdoesnothing ◴[] No.46261475[source]
There are some obvious tells, like the headings ("Markdown is nice for LLMs. That’s not the point", "What Lee actually built (spoiler: a CMS)") and the dramatic full stops ("This works until it doesn't."). It's difficult to describe because it's a gut feeling you develop from pattern-matching against your own LLM usage.

It sort of reminds me of the marketing sites I used to see selling a product: a bunch of short paragraphs and one-liners. Again, difficult to articulate, but those were ubiquitous about five years ago, and I can see where AI would have learned it from.

It's also tough because if you're a good writer you can spot it more easily, and you can edit LLM output to hide it, but then you probably aren't leaning on LLMs to write for you anyway. If you aren't a good writer, or your English isn't strong, you won't pick up on it, and even if you only use the AI to rework your own writing or generate fragments, it still leaks through.

Now that I think about it I'm curious if this phenomenon exists in other languages besides English...

replies(2): >>46261913 #>>46261973 #
4. CSSer ◴[] No.46261481[source]
If only we could take output and reverse-engineer activation layers through some parameters and get the original prompt. Imagine how much time we could save if we could read the chat transcript or the two actually human-written paragraphs this article was based on. They'd be some banal rant about a DevRel dude but at least it'd be more efficient.
replies(1): >>46261497 #
5. samdoesnothing ◴[] No.46261497[source]
Would be nice but you could probably edit it enough or splice different chat outputs together to break it.

Honestly, with the way the world is going, you might as well just ask AI to generate the chat logs from the article. Who cares if it's remotely accurate; no one seems to care when it comes to anything else anyway.

6. PunchyHamster ◴[] No.46261542[source]
As I read it I was just thinking: "whoa, someone really just decided to pawn their site design off to AI, then complain it doesn't get the CMS, then build a CMS purely so they can yell their requests at the AI, and then the company making the CMS pawned off to AI the writing of an article about why using AI isn't a great way to use their CMS."

Could be summed up as "and not a single bit of productivity was had that day".

replies(1): >>46261638 #
7. samdoesnothing ◴[] No.46261638[source]
It's like a reflection of Nvidia, Oracle, and OpenAI selling each other products and just trading the same money back and forth. Which is of course a reflection of the classic economist joke about eating poo in the forest: "GDP is up though!"

Meanwhile nothing actually changed, and the result is pretty much the same anyway.

8. lmc ◴[] No.46261754[source]
It didn't read as LLM-generated to me. And having some experience with CMS development, I think the article has plenty of substance. You can check previous blog articles from the same author, far predating LLMs - here's one from 2018: https://www.sanity.io/blog/getting-started-with-sanity-as-a-.... The main difference I see with the OP article is that it's a bit more emotive - probably a result of responding to a public trashing of their product.

The main point I'd like to raise in this comment, though, is that one of us - you or me - is wrong, and our internal LLM radar / vibe check is not as strong as we think. That worries me a bit. LLM accusations are probably becoming akin to the classic "You're a corporate shill!".

replies(1): >>46261792 #
9. samdoesnothing ◴[] No.46261792[source]
Comparing the two articles, they have completely different styles. I wasn't totally convinced the linked article was AI generated, but I am now. Clearly the author can write, so I'm a bit saddened that they used an LLM for this article.
10. kmelve ◴[] No.46261913{3}[source]
Author here.

I don't know, folks... Maybe I have been dabbling with AI so much over the last couple of years that I have started taking on its style.

I had my digits on the keyboard for this piece though.

replies(1): >>46262210 #
11. munch117 ◴[] No.46261973{3}[source]
This article is just about as un-AI written as anything I've ever read. The headings are clearly just the outline that he started with. An outline with a clear concept for the story that he's trying to tell.

I'm beginning to wonder how many of the "This was written by AI!" comments are AI-generated.

replies(1): >>46265256 #
12. samdoesnothing ◴[] No.46262210{4}[source]
I'm willing to give you the benefit of the doubt, for sure, because I can see its style rubbing off.

Someone linked this article you wrote from 7 years ago.

https://www.sanity.io/blog/getting-started-with-sanity-as-a-...

It's well written and obviously human-made. Curious what you think about the differences.

13. kmelve ◴[] No.46265256{4}[source]
It's strange to see folks here speculate about something you've written.

And if you only knew how much those headings and the structure of this post changed as I wrote it out and got internal feedback on it ^^_

replies(1): >>46267336 #
14. munch117 ◴[] No.46267336{5}[source]
I struggled a bit with what to point to as signs that it's not an LLM conception. Someone else had commented on the headlines as something that was AI-like, and since I could easily imagine a writing process that would lead to headlines like that, that's what I chose. A little too confidently perhaps, sorry.

But actually, I don't think I should have needed to identify any signs. It's the people claiming something is the work of an LLM, based on little more than gut feeling, who should be asked to provide more substance. The length of sentences? The number of bullet points? That's really thin.

replies(1): >>46268276 #
15. samdoesnothing ◴[] No.46268276{6}[source]
I don't think people should be obligated to spend time and effort justifying their reasoning on this. First, it's highly asymmetrical: you can generate AI content with little effort, whereas composing a detailed analysis requires much more work. It's also not easy to articulate.

However, there is evidence that writers who have experience using LLMs are highly accurate at detecting AI-generated text.

> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts’ free-form explanations shows that while they rely heavily on specific lexical clues, they also pick up on more complex phenomena within the text that are challenging to assess for automatic detectors. [0]

As the paper says, it's easy to point to specific clues in AI-generated text: the overuse of em dashes, overuse of inline lists, unusual emoji usage, title case, frequent use of specific vocabulary, the rule of three, negative parallelisms, elegant variation, false ranges, etc. But harder to articulate, and perhaps more important to recognition, are the overall flow, sentence structure and length, and various stylistic choices that scream AI.
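For illustration, a few of those lexical clues could be counted mechanically. This is a toy sketch of my own, not anything from the paper or a real detector; the clue choices and regexes are arbitrary, and the paper's whole point is that such surface counts are far weaker than expert judgment.

```python
import re

def count_lexical_clues(text: str) -> dict:
    """Naively count a few surface-level 'AI tells' in a piece of text.

    Purely illustrative: real detectors (and expert readers) rely on
    much richer signals than these crude pattern matches.
    """
    return {
        # em dash overuse
        "em_dashes": text.count("\u2014"),
        # crude "rule of three": "A, B, and C" patterns
        "triples": len(re.findall(r"\b\w+, \w+, and \w+\b", text)),
        # inline lists introduced by a colon
        "colon_lists": len(re.findall(r": \w+(?:, \w+){2,}", text)),
    }

sample = ("It was fast, cheap, and easy \u2014 "
          "the tool handled: parsing, linting, testing.")
print(count_lexical_clues(sample))
```

Even on a contrived sample like the one above, the counts say nothing about the flow and pacing that human readers in the study actually keyed on.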

Also worth noting that the author never actually stated that they did not use generative AI for this article. Saying their hands were on the keyboard, or that they reworked sentences and got feedback from coworkers, doesn't mean AI wasn't used. That they haven't said outright "No AI was used to write this article" is another indication.

0: https://arxiv.org/html/2501.15654v2

replies(1): >>46277729 #
16. munch117 ◴[] No.46277729{7}[source]
> Also worth noting that the author never actually stated that they did not use generative AI for this article.

I expect that they did in some small way, especially considering the source.

But not to an extent where it was anywhere near as relevant as the actual points being made. "Please don't complain about tangential annoyances," the guidelines say.

I don't mind at all that it's pointed out when an article is nothing more than AI ponderings. Sure, call out AI fluff, and in particular, call out an article that might contain incorrect confabulated information. This just wasn't that.