While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcomed on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other guidelines currently)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
People are seeing AI / LLMs everywhere (swinging at ghosts) and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" cynical jeering. People need to display how world-weary and jaded they are, expressing their malcontent with the rise of AI.
And yes, I used an em dash above. I've always been a heavy user of that punctuation (being a scatterbrain with lots of parenthetical asides and little ability to self-edit), but suddenly it now makes my comments bot-like and AI-suspect.
I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much less capable of sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into believing they have a unique gift, when really they are no better than rolling dice.
Tbh the comments in question shouldn't be banned outright. As someone else said, they have a place, for example when comparing LLM outputs, or when showing how different prompts produce different hallucinations.
But most of them are just reputation chasing: posting a summary of something that is usually below the level of HN discussion.
When "sounds AI generated" is in the eye of the beholder, it's an utterly worthless criterion. It's actually rather ironic given that I just pointed out how hilariously bad people are at determining whether something is AI-generated; at this point, people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.
People now simply declare opinions they disagree with to be "AI", in the same way that people assume anyone with a contrary position can't possibly be real and must be a bot, NPC, shill, and so on. It's all incredibly boring.
Just like those StackOverflow answers (before "AI") that appeared within 30 seconds of any question being posted and just regurgitated, in a "helpful"-sounding way, whatever tutorial the poster found first that looked even remotely related to the question.
"Content" where the goal is to trick someone into an upvote rather than actually caring about the discussion.