
881 points embedding-shape | 2 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't complain about them (similar to some current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

1. PeterStuer ◴[] No.46206875[source]
For better or worse, that ship has sailed. LLMs are now as omnipresent as web search.

Some people will know how to use it in good taste, others will try to abuse it in bad taste.

It might not be universally agreed which is which in every case.

replies(1): >>46208159 #
2. collinmcnulty ◴[] No.46208159[source]
I think the ship very much has not sailed on how different spaces treat LLM responses. LLMs can be something you use if you want, while posting their output blatantly, without human ownership, is considered rude and bannable. "You can't use an LLM" would be an impossible rule to enforce, but "You can use an LLM to write your response, but you have to take responsibility for the output" is feasible.