    881 points by embedding-shape | 14 comments

    As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

    While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether these sorts of comments should be welcome on HN.

    Some examples:

    - https://news.ycombinator.com/item?id=46164360

    - https://news.ycombinator.com/item?id=46200460

    - https://news.ycombinator.com/item?id=46080064

    Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real (presumably, at least) humans.

    What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

    1. tpxl No.46206706
    I think they should be banned if there isn't a contribution besides what the LLM answered. It's akin to "I googled this", which is uninteresting.
    2. mattkrause No.46206818
    I do find it useful in discussions of LLMs themselves ("Gemini did this; Claude did too, but it used to get tripped up like that").

    I do wish people wouldn't do it when it doesn't add to the conversation, but I would advocate for collective embarrassment over a ham-fisted regex.
    3. MBCook No.46206917
    That provides value as you’re comparing (and hopefully analyzing) output. It’s totally on topic.

    In a discussion of RISC-V and whether it can beat ARM, someone just posting "ChatGPT says X" adds absolutely nothing to the discussion but noise.

    4. Ekaros No.46207071
    I think "I googled this" can be a valid and helpful contribution, for example looking up some statistic, fact, or year, provided that it is also verified and sanity-checked.
    5. sejje No.46207184
    Yes, though citing an LLM in the same way is probably not as useful.

    "I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.

    The LLM skips a step, and gets you right to the "unusable source" argument.

    replies(1): >>46207242 #
    6. Ekaros No.46207242
    I agree. Saying you googled something and that someone holds an opinion is pretty useless, whether that someone is an LLM or a random poster on the internet.

    Still, I maintain that someone actually doing the legwork, even just with a search engine, and reasonably evaluating a few sources often makes a quite valuable contribution. Sometimes even when it is done to discredit someone else.

    7. dormento No.46207682
    IMHO it's far worse than "I googled this". Googling at least requires a modicum of understanding. Pasting slop usually means the person couldn't be bothered to filter out the garbage but wants to look smart anyway.
    8. TulliusCicero No.46207941
    "I googled this" usually means actually going to a page and seeing what it says, not just copy-pasting the search results page itself, which is the equivalent here.
    9. skywhopper No.46208453
    In that case, the correct post here would be to say “here’s the stat” and cite the actual source (not “I googled it”), and then add some additional commentary.
    10. zby No.46208954
    The contribution is the prompt.
    11. tptacek No.46209066
    They are already banned.
    12. autoexec No.46211576
    It's always fun when people point out an LLM's insane responses to simple questions, which shatter the illusion that they have any intelligence. But besides giving us a good laugh when an AI has a meltdown failing to produce a seahorse emoji, there are other times it might be valuable to discuss how they respond, such as when those responses are dangerous, censored, or clearly filled with advertising or bias.
    13. venturecruelty No.46212489
    Weird that I keep seeing them, then.
    14. tptacek No.46212501
    That's what the "flag" button is for.