    882 points by embedding-shape | 17 comments

    As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

    While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

    Some examples:

    - https://news.ycombinator.com/item?id=46164360

    - https://news.ycombinator.com/item?id=46200460

    - https://news.ycombinator.com/item?id=46080064

    Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume).

    What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

    1. michaelcampbell
    Related: comments saying "this feels like AI". It's this generation's "Looks shopped", and of zero value, IMO.
    2. whimsicalism
    Disagree; I find these comments valuable, especially when they're about an article I was about to read. It's not the same as sockpuppeting accusations, which I think are rightly banned.
    3. sbrother
    Fair, but then that functionality should be built into the flagging system. Obvious AI comments (worse, ones that are commercially driven) are a cancer that's breaking online discussion forums.
    4. ruuda
    I find them helpful. It happens semi-regularly now that I read something that was upvoted, but after a few sentences I think "hmm, something feels off", and after the first two paragraphs I suspect it's AI slop. Then I go to the comments, and it turns out others noticed too. Sometimes I worry that I'm becoming too paranoid in a world where human-written content feels increasingly rare, and it's good to know it's not me going crazy.

    In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.

    5. yodon
    > Comments saying "this feels like AI" should be banned.

    Strong agree.

    If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.

    If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.

    The least valuable, lowest-signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.

    It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.

    6. D13Fd
    I strongly disagree. I like the social pressure against people posting comments that feel like AI (e.g., that add a lot of text and little non-BS substance). I also like the reminder to view suspicious comments and media through that lens.
    7. djeastm
    I disagree. Traditional netiquette when downvoting something is to explain why.
    8. 8organicbits
    One of my recent blog posts got a comment like that, and I tried to reframe it as "this is poorly written", and took the opportunity to solicit constructive criticism and to reflect on my style. I think my latest post improved, and I'm glad I adjusted my style.

    https://news.ycombinator.com/item?id=45652349

    10. whimsicalism
    I think some people get excited by the notion of identifying AI content, so they start doing so without knowing how. Truly, nothing about your post reads like an LLM generation; it has a very non-LLM 'voice'.
    11. Analemma_
    Strong disagree: these comments (if they lay out their case persuasively) allow me to skip the content completely, and save me a lot of time. They provide lots of value, and in fact there should be social rewards for the work of wading through value-free slop to save others from having to do so.
    12. Marsymars
    Not that you shouldn't self-reflect, but some people's style just happens to resemble the default GPT voice, unfortunately for them.

    GPT has ruined my enjoyment of using em dashes, for instance.

    13. criddell
    I think Slashdot still has the best moderation system. Being able to flag a comment as insightful, funny, offtopic, redundant, etc. adds a lot of information and gives readers more control over the type, quantity, and quality of discussion they see.

    For example, some people seem to be irritated by jokes and being able to ignore +5 funny comments might be something they want.

    14. duskwuff
    Yes. Especially on articles - the baseline assumption is that most articles are written by humans, and it's nice to know when that expectation may have been violated.
    15. dpifke
    I recently logged onto LinkedIn for the first time in a while, and found an old job posting from when I was hiring at a startup ~2 decades ago. It's amazing how much it sounds like LLM output—I would have absolutely flagged it as AI-generated if I saw it today.
    16. sfink
    Yeah, I haven't used AIs enough to be that good at immediately spotting generated output, so I appreciate the chance to reconsider my assumption that something was human-written. I'm sure people who did NOT use an AI find it insulting to be so accused, but I'd rather normalize these callouts and shift the norm toward treating them as suspicions rather than accusations.

    I do find it more helpful when people specify why they think something was AI-generated. Especially since people are often wrong (fwict).

    17. notahacker
    "Does anyone else hate those s̶c̶r̶o̶l̶l̶b̶a̶r̶s̶ ads/modals/unconventional page layout" is the archetypical HN response tbf, and often the most upvoted

    Also, I'm pretty sure most people can spot blogspam full of glaringly obvious cliche AI patterns without being able to create a high reliability AI detector. To set that as the threshold for commentary on whether an article might have been generated is akin to arguing that people shouldn't question the accuracy of a claim unless they've built an oracle or cracked lie detection.