
881 points by embedding-shape | 6 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. chemotaxis (No.46206685)
This wouldn't ban the behavior, just the disclosure of it.
replies(3): >>46206767, >>46206853, >>46209276
2. xivzgrev (No.46206767)
Agreed - in fact, these folks are going out of their way to be transparent about it. It's much easier to just take credit for a "smart" answer.
replies(1): >>46208116
3. AlwaysRock (No.46206853)
I guess... that is the point, in my opinion.

If you just say, "here is what the LLM said," then when that turns out to be nonsense you can fall back on "I was just passing along the LLM's response, not my own opinion."

But if you take the LLM's response and present it as your own, at least there is slightly more ownership of the opinion.

This is kind of splitting hairs, but hopefully it makes people actually read the response themselves before posting it.

replies(1): >>46212112
4. muwtyhg (No.46208116)
So those folks must be doing it because they think it's helpful, right? They are explicitly trying not to take credit for the words. Do you think that, after a ban on these kinds of posts is implemented, those posters would start hiding their use of AI to write replies, or would they just stop using AI to reply at all?
5. sfink (No.46209276)
That was my immediate thought too, but I'm still in favor of banning it in order to make it a community norm. Right now, people generally seem to think that such comments are adding some sort of signal, and I don't think they're stupid to think that. Not stupid, just wrong. And people feel personally attacked and so get defensive and harden their position, so it would be better to just make it against the guidelines with some justification there rather than trying to control it with individual arguments (with a defensive person!) or downvoting alone. (And the guidelines would be the place to put the explanation of why it's disallowed.)

People will still do it, but now they're doing it intentionally in a context where they know it's against the guidelines, which is a whole different situation. Staying up late to argue the point (and thus add noise) is obviously not going to work.

I'd prefer the guideline to allow machine translation, though, even when done with a chatbot. If you are using a chatbot intentionally for the purpose of translating your own thoughts, that's a very different comment than spewing out the output of a prompt about the topic. There's some gray area where the two blur together, but in my experience they're still very different. (Even though the translated ones set off all the alarm bells in terms of style, formatting, and phrasing.)

6. Kim_Bruning (No.46212112)
Taking ownership isn't the worst instinct, to be fair. But that's a slightly different formulation.

"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"