
881 points | embedding-shape | 5 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and long LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

gortok No.46206694
While we will never be able to get folks to stop using AI to "help" them shape their replies, it's super annoying when folks think that by using AI they're doing others a favor. If I wanted to know what an AI thinks, I'd ask it. I'm here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
1. crazygringo No.46209416
I actually disagree, in certain cases. Just today I saw:

https://news.ycombinator.com/item?id=46204895

when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone, and it's not easy to paste a PDF into an LLM from there.

When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.

If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.

replies(2): >>46209828 #>>46210076 #
2. zacmps No.46209828
LLM summaries of papers often make overly broad claims [1].

Personally, I don't think this is a good example.

[1] https://arxiv.org/abs/2504.00025

replies(1): >>46210325 #
3. Rarebox No.46210076
That's a pretty good example. The summary is actually useful, yet it still annoys me.

But I'm not usually reading the comments to learn; it's just entertainment (i.e., distraction). And as with images or videos, I find human-created content more entertaining.

One thing that could make such posts more palatable is if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their own understanding.

replies(1): >>46210463 #
4. crazygringo No.46210325
When there's nothing else to go on, it's still more useful than nothing.

The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.

I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried that the summary isn't 100% faithful.

5. crazygringo No.46210463
I definitely read the comments to learn. I love when there's a post about something I didn't know about, and I love when HNers can explain details that the post left confusing.

If I'm looking for entertainment, HN is not exactly my first stop... :P