
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and long LLM-generated texts just get in the way of reading real text from real humans (assumed real, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

chemotaxis No.46206685
This wouldn't ban the behavior, just the disclosure of it.
1. sfink No.46209276
That was my immediate thought too, but I'm still in favor of banning it in order to make the ban a community norm. Right now, people generally seem to think that such comments add some sort of signal, and I don't think they're stupid to think that. Not stupid, just wrong. And people feel personally attacked, so they get defensive and harden their position. It would be better to make it against the guidelines, with some justification given there, than to try to control it with individual arguments (with a defensive person!) or downvoting alone. (And the guidelines would be the place to put the explanation of why it's disallowed.)

People will still do it, but now they're doing it intentionally in a context where they know it's against the guidelines, which is a whole different situation. Staying up late to argue the point (and thus add noise) is obviously not going to work.

I'd prefer the guideline to allow machine translation, though, even when done with a chatbot. If you are intentionally using a chatbot to translate your own thoughts, that produces a very different comment than spewing out the output from a prompt about the topic. There's some gray area where the two blur together, but in my experience they're still very different. (Even though the translated ones set off all the alarm bells in terms of style, formatting, and phrasing.)