
882 points | embedding-shape | 4 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from (presumably) real humans.

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

skobes ◴[] No.46206702[source]
I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.
replies(3): >>46206837 #>>46206974 #>>46209130 #
1. llm_nerd ◴[] No.46206974[source]
I think people are just presuming that others are regurgitating AI pablum regardless.

People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" brand of cynical jeering. People need to display how world-weary and jaded they are, expressing their discontent with the rise of AI.

And yes, I used an em dash above. I've always been a heavy user of the punctuation (being a scatterbrain with lots of parenthetical asides and little ability to self-edit), but now it suddenly makes my comments bot-like and AI-suspect.

I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much worse at sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they do no better than rolling dice.

replies(1): >>46207226 #
2. nottorp ◴[] No.46207226[source]
The thing is, the comments that sound "AI"-generated but aren't have about as much value as the ones that really are.

Tbh the comments in question shouldn't be completely banned. As someone else said, they have a place, for example when comparing LLM outputs or showing how different prompts lead to different hallucinations.

But most of them are just reputation chasing: posting a summary of something that is usually below the level of HN discussion.

replies(1): >>46208488 #
3. llm_nerd ◴[] No.46208488[source]
>the comments that sound "AI"-generated but aren't have about as much value as the ones that really are

When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. I mean, it's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining if something is AI generated, and at this point people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.

People now simply dismiss opinions they disagree with as "AI", in the same way that they assume anyone with a contrary position can't possibly be real and must be a bot, NPC, shill, and so on. It's all incredibly boring.

replies(1): >>46210048 #
4. nottorp ◴[] No.46210048{3}[source]
I mean verbose for no good reason, not contributing meaningfully to the discussion in any way.

Just like those StackOverflow answers, before "AI", that arrived within 30 seconds of any question and just regurgitated, in a "helpful"-sounding way, whatever tutorial the poster had found first that looked even remotely related to the question.

"Content" where the target is to trick someone into an upvote instead of actually caring about the discussion.