
882 points by embedding-shape | 9 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether these sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into comments? Or something else completely?

1. masfuerte ◴[] No.46206777[source]
Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.
replies(6): >>46207158 #>>46207351 #>>46207646 #>>46208995 #>>46209063 #>>46209147 #
2. rsync ◴[] No.46207158[source]
This is my view.

I tend to dislike this type of post, but a properly designed and functioning voting mechanism should take care of it.

If not, it is the voting mechanism that should be tuned - not new rules.

3. eskori ◴[] No.46207351[source]
If HN mods think the rule should be applied regardless of what the community thinks (for now), then yes, it needs a rule.

As I see it, down-voting is an expression of the community's posture, while rules are an expression of the "space's" posture. It's up to the space to determine whether something is relevant enough to include in the rules.

And again, as I see it, the community should also have a way to at least suggest modifications to the rules.

I agree with you that "People who can't take a hint aren't going to read the rules". But as they say: "Ignorance of the law does not exempt one from compliance."

replies(1): >>46209064 #
4. dormento ◴[] No.46207646[source]
> These comments already get heavily down-voted.

Can't find the link right now (cause why would I save a thread like that..), but I've seen situations more than once where people get defensive on behalf of others who post AI slop comments. Both times it was people at YC companies with a personal interest in AI. Both times it looked like a person defending sockpuppets.

5. al_borland ◴[] No.46208995[source]
I think it helps to have guidelines and not rely on user sentiment alone. When I first joined HN I read the guidelines, and they did make me alter my comments a bit. Hoping that everyone who joins goes back to review the up/down votes on their comments, and then takes away the right lesson with limited information about why those votes were received, seems like wishful thinking. For those who do question why they keep getting downvoted, it might lead them to check the guidelines, and finding the right supporting information there would be useful.

A lot of the guidelines are about avoiding comments that aren’t interesting. A copy/paste from an LLM isn’t interesting.

6. notahacker ◴[] No.46209063[source]
I'm veering towards this being the answer. People downvote the superfluous "I don't have any particular thoughts on this, but here's what a chatbot has to say" comments all the time. But also, there are a lot of discussions around AI on HN, and in some of those cases posting verbatim responses from current-generation chatbots is a pretty good indication of "they can give accurate responses when posed problems of this type", or "they still make these mistakes", or "this is what happens when there's too much RLHF or a silly prompt"...
7. tptacek ◴[] No.46209064[source]
Again: there already is a rule against this.
replies(1): >>46215640 #
8. BrtByte ◴[] No.46209147[source]
HN tends to self-regulate pretty well
9. eskori ◴[] No.46215640{3}[source]
Hi! Yup, I didn't know that, and your comment about this (completely agree btw) was made later, so sorry if it felt repetitive to you, but thanks for coming here to let us know :)