
882 points by embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. TheAceOfHearts No.46211858
I think you shouldn't launder LLM output as your own, but in AI model discussions and new-release threads it can be useful to highlight examples of LLM outputs. The framing and usage are key: I'm interested in what kinds of things people are trying. Using LLM output as a substitute for engagement isn't interesting, but combining a bunch of responses to highlight differences between models could be.

I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling, I'll note that I found the link through an LLM and explain how it relates to the discussion.