
882 points embedding-shape | 7 comments

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. gruez ◴[] No.46206731[source]
What do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?
replies(4): >>46206773 #>>46207146 #>>46208412 #>>46208810 #
2. everdrive ◴[] No.46206773[source]
It depends on if you're saying "Infowars has the answer, check out this article" vs "I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."
replies(1): >>46207806 #
3. newsoftheday ◴[] No.46207146[source]
Your point conflates a potential low-quality source with AI output, while also making the judgement that <fill in the blank site> is a low-quality source to be disregarded 100% of the time; it ignores the possibility that an informative POV may be present, even on a potentially low-quality site.
4. gruez ◴[] No.46207806[source]
>I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."

You can make the same argument for AI output as well, but to be clear, I'm referring to the case of someone bringing up a low-quality source as the answer.

replies(1): >>46208076 #
5. everdrive ◴[] No.46208076{3}[source]
Definitely agreed, I think the exact same would apply -- if there's an insightful conversation to be had about LLMs or their responses, then I think we'd all welcome it. If it's just someone saying "I asked the LLM and it said X" then we're better off without it.

Not sure how easy that would actually be to moderate, of course.

6. Aachen ◴[] No.46208412[source]
If you plagiarise text from a source that is objectively (measurably, systematically) unreliable, without vetting, adding commentary, or doing anything else to add value, then 100% yes that's the same issue
7. sebastiennight ◴[] No.46208810[source]
Have you seen this happen in the wild, ever?

I have not encountered a single instance of this since I started using HN (and can't find one using the site search either), whereas the "I asked ChatGPT" zombie answers are rampant.