
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time for a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (or so I assume, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. pyuser583 | No.46213380
I recently asked an AI about a very important topic in current events. It gave me a shocking answer, which I initially assumed was wrong - but it seems correct.

The question was something like: “how reliable is the science behind misinformation.” And it said something like: “quality level is very poor and far below what justifies current public discourse.”

I asked for a specific article backing this up, and it's saying "there isn't any one article, I just analyzed the existing literature and it stinks."

This matters quite a bit. X - formerly Twitter - is being fined for refusing to make its data available for misinformation research.

I'm trying to get it to give me a non-AI source, but it's saying one doesn't exist.

If this is true - it's pretty important - and something worth discussing. But it doesn't seem supportable outside the context of "my AI said."