
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (or so I assume).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other existing guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. jdoliner | No.46207156
I've always liked that HN typically has comments containing small bits of research relevant to the post, research I could have done myself but don't have to because someone else did it for me. In a sense, the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority... and a bad one at that. It also makes the comment feel low effort. Oftentimes comments that frame themselves this way will be missing the "last-mile" effort that tailors the LLM's response to the context of the post.

So I think maybe the guidelines should say something like:

HN readers appreciate comments that bring research and information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words, explain why it's relevant to the post, and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.

---------

Also I asked ChatGPT and it said:

Short Answer

HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.

A rule probably isn’t needed. A norm is.