
882 points by embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. snayan No.46208376
I would say it depends. From your examples:

1) Borderline. It potentially benefits readers of the thread who don't have the time or expertise to read an 83-page paper, although it would require someone knowledgeable to confirm that the summary is sound.

2) Acceptable. Dude got Grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.

3) Borderline. Mostly the same as 1.

The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about and passing off an LLM's output as their own opinion, I'd agree. But all the examples you provided are transformative in some way: either summarizing and simplifying a long article or paper, or creating art.