
881 points embedding-shape | 4 comments

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumed human, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

michaelcampbell ◴[] No.46206776[source]
Related: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.
replies(7): >>46206902 #>>46206906 #>>46206999 #>>46207044 #>>46208117 #>>46208137 #>>46208444 #
1. yodon ◴[] No.46207044[source]
> Comments saying "this feels like AI" should be banned.

Strong agree.

If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.

If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.

The least valuable, lowest-signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.

It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.

replies(2): >>46207829 #>>46209231 #
2. D13Fd ◴[] No.46207829[source]
I strongly disagree. I like the social pressure against people posting comments that feel like AI (e.g., that add a lot of text and little non-BS substance). I also like the reminder to view suspicious comments and media through that lens.
replies(1): >>46208177 #
3. ◴[] No.46208177[source]
4. notahacker ◴[] No.46209231[source]
"Does anyone else hate those s̶c̶r̶o̶l̶l̶b̶a̶r̶s̶ ads/modals/unconventional page layout" is the archetypical HN response tbf, and often the most upvoted

Also, I'm pretty sure most people can spot blogspam full of glaringly obvious, clichéd AI patterns without being able to create a high-reliability AI detector. To set that as the threshold for commenting on whether an article might have been generated is akin to arguing that people shouldn't question the accuracy of a claim unless they've built an oracle or cracked lie detection.