
881 points by embedding-shape | 3 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

tpxl ◴[] No.46206706[source]
I think they should be banned if there isn't a contribution beyond what the LLM answered. It's akin to 'I googled this', which is uninteresting.
replies(5): >>46206818 #>>46207071 #>>46207682 #>>46208954 #>>46209066 #
1. mattkrause ◴[] No.46206818[source]
I do find it useful in discussions of LLMs themselves. ("Gemini did this; Claude did too, but it used to get tripped up like that.")

I do wish people wouldn't do it when it doesn't add to the conversation, but I would advocate for collective embarrassment over a ham-fisted regex.

replies(2): >>46206917 #>>46211576 #
2. MBCook ◴[] No.46206917[source]
That provides value as you’re comparing (and hopefully analyzing) output. It’s totally on topic.

In a discussion of RISC-V and whether it can beat ARM, someone just posting "ChatGPT says X" adds absolutely nothing to the discussion but noise.

3. autoexec ◴[] No.46211576[source]
It's always fun when people point out an LLM's insane responses to simple questions, shattering the illusion that it has any intelligence. But beyond giving us a good laugh when an AI has a meltdown failing to produce a seahorse emoji, there are other times it might be valuable to discuss how they respond, such as when those responses are dangerous, censored, or clearly filled with advertising/bias.