881 points embedding-shape | 5 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't complain about them (similar to other current guidelines)? Or should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. ilc ◴[] No.46206911[source]
No. I put them in the same category as lmgtfy: most of the time, you are being told that your question is easy to research and you didn't do the work.

Also, heaven forbid, AI can be right. I realize this is a shocker to many here, but AI has its uses, especially in easy cases.

replies(3): >>46207008 #>>46207206 #>>46207406 #
2. watwut ◴[] No.46207008[source]
1.) They are not replies to people asking questions.

2.) Posting an AI response has as much value as posting a random Reddit comment.

3.) AI has value where you are able to factually verify it. If someone asks a question, they don't know the answer and are unable to validate the AI.

3. emaro ◴[] No.46207206[source]
I don't think an LLM responding means a question is easy to research - LLMs will always give an answer.
4. bigstrat2003 ◴[] No.46207406[source]
"I asked AI and it said" is far worse than lmgtfy (which is already rude) because it has zero value as evidence. AI can be right, but it's wrong often enough that you can't use it to determine the truth of something.
replies(1): >>46208274 #
5. zepolen ◴[] No.46208274[source]
How is lmgtfy rude?