
882 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from (presumably) real humans.

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to existing guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. mistrial9 (No.46206942)
The system of long-lived nicks on YNews is intended to build a mild and flexible reputation system. This is valuable for complex topics, and for noticing zealots, among other things. The sense, while reading, that this is a community of peers is important.

AI-LLM replies break all of these things. At a minimum, IMHO, they must be declared as such. It seems desirable to have off-page links for (inevitably) lengthy reply content.

This is an existential change for online communication. Many smart people here have predicted it and have already acted on it. It is certainly trending hard for the foreseeable future.