882 points by embedding-shape | 28 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

1. flkiwi ◴[] No.46208295[source]
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we're watching etiquette evolve in real time. This is analogous to "Should we ban people from wearing Bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that disrupts social norms, but the actual violation is really that the person looks like a dork. To that end, I'd probably favor public shaming over a ban: a clear "we aren't banning it, but please don't be an AI goober and don't just regurgitate AI output."

replies(9): >>46208592 #>>46208859 #>>46209151 #>>46209987 #>>46210530 #>>46210557 #>>46210638 #>>46210955 #>>46211367 #
2. charcircuit ◴[] No.46208592[source]
>If I wanted to run a web search, I would have done so

While true, many times people don't do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.

replies(4): >>46208827 #>>46208857 #>>46208914 #>>46209197 #
3. droopyEyelids ◴[] No.46208827[source]
Well put. There are two sides to the coin: the lazy questioner who expects others to do the research they wouldn't do themselves, and the lazy/indulgent answerer who basically LMGTFY's it.

Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.

replies(1): >>46211300 #
4. MarkusQ ◴[] No.46208857[source]
This begs the question. You are assuming they wanted an LLM-generated response but were too lazy to generate one. Isn't it more likely that the reason they didn't use an LLM is that they didn't want an LLM response, so giving them one is... sort of clueless?

If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?

replies(1): >>46209172 #
5. Terr_ ◴[] No.46208859[source]
Agreed on the similar-but-worse comparison to the laziest possible web searches of yesteryear.

To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine that feels more dangerous and disgusting.

replies(1): >>46209701 #
6. officeplant ◴[] No.46208914[source]
> If they just instead opened up chatgpt they could have instantly gotten their answer.

Great, now we've wasted time and material resources on a possibly wrong, hallucinated answer. What part of this is beneficial to anyone?

replies(1): >>46212009 #
7. giancarlostoro ◴[] No.46209151[source]
> 2. People behave as if they believe AI results are authoritative, which they are not

Web search has the same issue. If you don't validate the results, you wind up with the same problem.

8. charcircuit ◴[] No.46209172{3}[source]
It's more like someone asks if there are McDonald's in San Francisco, and then someone else searches "mcdonald's san francisco" on Google Maps and then replies with the result. It would have been faster for the person to just type their question elsewhere and get the result back immediately instead of someone else doing it for them.
replies(1): >>46209645 #
9. allenu ◴[] No.46209197[source]
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.

This might be a case of just different standards for communication here. One person might want the absolute facts and assume everyone posting should do their due diligence to verify everything they say, while others are okay with just shooting the shit (to varying degrees).

replies(1): >>46209333 #
10. charcircuit ◴[] No.46209333{3}[source]
I've seen this happen too. People will comment and say they can't remember something, when they could have easily found that information again with ChatGPT or Google.
11. MarkusQ ◴[] No.46209645{4}[source]
Right. If someone asks "What does ChatGPT think about ...", I'd fully agree that they're being lazy. But if that's _not_ what they ask, we shouldn't assume that that's what they meant.

We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but we should, as a rule, give them the benefit of the doubt instead of just assuming that we're smarter than them.

replies(1): >>46209956 #
12. flkiwi ◴[] No.46209701[source]
There’s also a whole “gosh golly look at me using the latest fad!” demonstration aspect to this. People status signaling that they’re “in”. Thus the Bluetooth earpiece comment.

It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.

13. charcircuit ◴[] No.46209956{5}[source]
Such open ended questions are not the kind of questions I'm referring to.
14. pyrale ◴[] No.46209987[source]
> "I ran a $searchengine search and here is the most relevant result."

Except it's "...and here is the first result it gave me, I didn't bother looking further".

15. 9rx ◴[] No.46210530[source]
> people are demonstrating a new behavior that is disrupting social norms

The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.

replies(2): >>46210761 #>>46215789 #
16. munchbunny ◴[] No.46210557[source]
> 2. People behave as if they believe AI results are authoritative, which they are not

I'm not so sure they actually believe the results are authoritative; I think they're being lazy and hoping you will believe it.

replies(1): >>46210634 #
17. flkiwi ◴[] No.46210634[source]
This is a bit of a gravity vs. acceleration issue, in that the end result is indistinguishable.
18. icoder ◴[] No.46210638[source]
Totally agree if the AI or search results are a (relatively) direct answer to the question.

But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'

replies(1): >>46214328 #
19. sapphicsnail ◴[] No.46210761[source]
The issue isn't people posting AI-generated comments on the internet as a whole; it's whether they should be allowed in this space. Part of the reason I come to HN is that the quality of comments is pretty good relative to other places online. I think it's a legitimate question whether AI comments would help or hinder discussion here.
replies(2): >>46210795 #>>46212878 #
20. 9rx ◴[] No.46210795{3}[source]
That's a pretty good sign that the HN user base, as a rule, finds the most enjoyment in writing high-quality content for themselves. All questions are legitimate, but in this circumstance, what reason is there to believe they would find even more enjoyment in reducing that quality?

It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.

Much more likely is seeing the user base shift over time toward users who don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!

21. ozgung ◴[] No.46210955[source]
I think doing your research using search engines/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. “ChatGPT says that…”

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user runs that query for me, I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.

22. WorldPeas ◴[] No.46211300{3}[source]
I haven't thought about LMGTFY since Stack Overflow. Usually, though, I see people thrusting forth AI answers that provide more reasoning; back then LMGTFY was more about rote conventions (e.g. "how do you split a string on ,"), while AI is used more for questions like "what are ways that solar power will change grid dynamics".
23. WorldPeas ◴[] No.46211367[source]
I think it's closer to the "glasshole" trend, where social pressure actually worked to make people feel less comfortable using it publicly. This is an entirely vibes-based judgment, but presenting unaltered AI speech within your own feels more imposing and authoritative (as wagging around a potentially-on camera did then). This being the norm on other platforms has degraded my willingness to engage with potentially infinite and meaningless streams of bloviation rather than the (usually) concise and engaging writing of humans.
24. Kim_Bruning ◴[] No.46212009{3}[source]
Counterpoint:

Frankly, it's a skill thing.

You know how some people could hardly find the backs of their own hands even if they googled them?

And then there are people (e.g. experienced Wikipedians doing research) who have google-fu and can find accurate information about the weirdest things in the time it takes you to tie your shoes and put your hat on.

Now watch how someone like THAT uses ChatGPT (or some better LLM). It's very different from just prompting with a question. Often it involves delegating search tasks to the LLM (and opening five Google tabs alongside). And they get really interesting results!

25. edmundsauto ◴[] No.46212878{3}[source]
Would you object to high quality AI comments?
replies(1): >>46214561 #
26. 0manrho ◴[] No.46214328[source]
> ($AI) suggests

Same logic still applies. If I gave a shit what it "thought" or suggested, I'd prompt the $AI in question, not HN users.

That said, I'm not against a monthly (or whatever regular interval the community agrees on) thread that discusses the subject, akin to "megathreads" on Reddit: interesting prompts, interesting results, cataloguing changes over time, and so on.

It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts it just feels like low-effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definite value in the "Who's Hiring / Trying to get hired" monthly threads, but that value/interest would drop precipitously if each comment within them were its own individual submission.

27. y0eswddl ◴[] No.46214561{4}[source]
that's an oxymoron
28. terribleperson ◴[] No.46215789[source]
Has it? More than one forum has expected that commentary should contribute to the discussion. Reddit is the most prominent example, where originally upvotes were intended to be used for comments that contributed to the discussion. It's not the first or only example, however.

Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.