
882 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. Kim_Bruning (No.46209754)
But what [if the LLMs generate] constructive and helpful comments? https://xkcd.com/810/

For obvious(?) reasons I won't point to some recent comments that I suspect of being LLM-written, but they were kind and gentle in the way that Opus 4.5 can be at times, encouraging humans to be good to each other.

I think the rules should be similar to the bot rules I saw on Wikipedia. It ought to be OK to USE an AI in the process of writing a comment, but the comment needs to be 'owned' by the human/the account posting it.

E.g. if it's a helpful comment, it should be upvoted; if it's not helpful, downvoted. With a little luck, people will be encouraged to use AI in appropriate ways and discouraged from using it in inappropriate ones.

"I asked gemini, and gemini said..." is probably the wrong format, if it's otherwise (un)useful, just vote it accordingly?