881 points | embedding-shape | 7 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

gortok No.46206694
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying that folks think they’re doing others a favor by using AI. If I want to know what an AI thinks, I’ll ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

1. sejje No.46206977
This is the only reasonable take.

It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.

Luckily I haven't found much of that here. What I do find has usually been downvoted plenty.

Maybe we could have a new flag option that becomes visible to everyone once a comment gets enough "AI" votes, so you could skip reading it.

2. fwip No.46208966
I'd love to see that for article submissions, as well.
3. manmal No.46209016
What LLMs generate is an amalgamation of the human content they have been trained on. I get that you want what actual humans think, but that’s also basically a weighted amalgamation. Real, actual insight is incredibly rare, and I doubt you see much of it on HN (sorry guys; I’ll live with the downvotes).
4. dogleash No.46210008
I'm downvoting exclusively for your comment about downvotes.
5. dinkleberg No.46210021
Why do you suppose we come to HN if not for actual insight? There are other sites much better for getting an endless stream of weighted amalgamations of human content.
6. ergonaught No.46210320
Coming here for insight does not in any way demonstrate that genuine insight is actually widely available here.
7. manmal No.46211304
It’s obviously an amalgamation that’s weighted in favor of your interests.