
882 points embedding-shape | 4 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time for a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to some existing guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. josefresco (No.46206795)
As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.

2. superfishy (No.46206956)
I agree. The alternative is prohibiting the practice and having these posters simply not disclose their use of LLMs, which in many cases cannot be easily detected.
3. TulliusCicero (No.46207960)
No, most don't think they're doing anything wrong; they think they're actually being helpful. So most wouldn't try to disguise it — they'd just stop doing it if it were against the rules.
4. abustamam (No.46210279)
Agreed with them not thinking they're doing anything wrong. Disagree with them not wanting to disguise it. If they don't think they're doing anything wrong, then they likely don't think it's against the rules. If they knew it were against the rules, they'd probably disguise it better.

This may actually be a good thing, because it would force them to put some thought into reworking the AI output instead of just pasting it in wholesale. Depending on how well they try to disguise it, of course.