
882 points | embedding-shape

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real (presumably, at least) humans.

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

JohnFen No.46206671
I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.

But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.

Scene_Cast2 No.46206696
There is friction to asking the AI yourself. And a comment typically signals "I found the AI answer insightful enough to share".
codechicago277 No.46206774
The problem is that the AI answer could just be wrong, and there's another step required to validate what it spat out. Sharing the conversation without fact-checking it just adds noise.