
882 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to some current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. maerF0x0 (No. 46209119):
I see it as equivalently helpful to folks pasting archive.is/ph links for paywalled content. It saves me time on something I may have wanted to do anyway, and it's easy enough to fold the comment if someone posts a wall of response.

IMO hiding such content is the job of an extension.

When I do "here's what chatgpt has to say," it's usually because I'm pretty confident of a thing but have no idea what the original source was, and I'm not going to invest much time resurrecting the trail back to where I first learned it. I'm not going to spend 60 minutes properly sourcing an HN comment; that's just not the level of discussion I'm willing to have, though many in the community seem to require an academic level of investment.