
881 points | embedding-shape | 1 comment | source

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

flkiwi ◴[] No.46208295[source]
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.

1. ozgung ◴[] No.46210955[source]
I think doing your research using a search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. "ChatGPT says that…"

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add to the discussion, and a user runs that query for me, I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.