
882 points | embedding-shape | 3 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading text written by (presumably) real humans.

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to existing guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

michaelcampbell | No.46206776
Related: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.
1. whimsicalism | No.46206902
Disagree, find these comments valuable - especially if they are about an article that I was about to read. It's not the same as sockpuppeting accusations, which I think are right to be banned.
2. duskwuff | No.46208940
Yes. Especially on articles - the baseline assumption is that most articles are written by humans, and it's nice to know when that expectation may have been violated.
3. sfink | No.46209226
Yeah, I haven't used AIs enough to be that good at immediately spotting generated output, so I appreciate the chance to reconsider my assumption that something was human-written. I'm sure people who did NOT use an AI find it insulting to be so accused, but I'd rather normalize those accusations and shift the norm to see them as suspicions rather than accusations.

I do find it more helpful when people specify why they think something was AI-generated. Especially since people are often wrong (fwict).