
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

flkiwi (No.46208295):
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.

9rx (No.46210530):
> people are demonstrating a new behavior that is disrupting social norms

The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.

sapphicsnail (No.46210761):
The issue isn't people posting AI-generated comments on the Internet as a whole; it's whether they should be allowed in this space. Part of the reason I come to HN is that the quality of comments is pretty good relative to other places online. I think it's a legitimate question whether AI comments would help or hinder discussion here.
9rx (No.46210795):
That's a pretty good sign that the HN user base, as a rule, finds the most enjoyment in writing high-quality content for themselves. All questions are legitimate, but in this circumstance what reason is there to believe that they would find even more enjoyment in reducing the quality?

It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.

Much more likely is seeing the user base shift over time toward users who don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!