
882 points | embedding-shape | 2 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcomed on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

sans_souse ◴[] No.46206852[source]
There be a thing called Thee Undocumented Rules of HN, aka etiquette, which states - and I quote: "Thou shall not post AI generated replies"

I can't locate them, but I'm sure they exist...

replies(1): >>46206966 #
1. tastyfreeze ◴[] No.46206966[source]
I've seen that document. It also has a rule that states "Thou shall not be a bot."

Unfortunately, I can't find them. It's a shame. Everybody should read them.

replies(1): >>46208118 #
2. warkdarrior ◴[] No.46208118[source]
It's a great doc, I've been training my HN bot on it.