
882 points by embedding-shape

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real (or at least presumably real) humans.

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

gortok ◴[] No.46206694[source]
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
sbrother ◴[] No.46206849[source]
I strongly agree with this sentiment and I feel the same way.

The one exception for me, though, is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they imbue their output with that "AI style". I'm not sure what the solution is here, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

replies(11): >>46206883 #>>46206949 #>>46206957 #>>46206964 #>>46207130 #>>46207590 #>>46208069 #>>46208723 #>>46209062 #>>46209658 #>>46211403 #
emaro ◴[] No.46207130[source]
Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
replies(1): >>46208752 #
SoftTalker ◴[] No.46208752[source]
I honestly think that very few people here are completely non-conversant in English. For better or worse, it's the dominant language. Almost everyone who doesn't speak English natively learns it in school.

I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.

replies(1): >>46211903 #
Wowfunhappy ◴[] No.46211903{3}[source]
...I'm not sure I agree. I sometimes have a lot of trouble understanding what non-native English speakers are trying to say. I appreciate that they're doing their best, and as someone who can only speak English, I have the utmost respect for anyone who knows multiple languages, but I just find it really hard.

Some AI translation is so good now that I do think it might be the better option. If they try to write in English and mess up, the information is just lost; there's nothing I can do to recover the real meaning.