881 points embedding-shape | 5 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

gortok ◴[] No.46206694[source]
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
1. deadbabe ◴[] No.46207964[source]
In a similar vein, I’m sick and tired of people telling others to go google stuff.

The point of asking on a public forum is to get socially relatable human answers.

replies(4): >>46208000 #>>46208291 #>>46208902 #>>46209039 #
2. jedbrooke ◴[] No.46208291[source]
I’ve seen so many SO and other forum posts where the first comment is someone smugly saying “just google it, silly”.

Except that I’m not the one who posted the original question. I DID google (well, DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply.

replies(1): >>46209515 #
3. delecti ◴[] No.46208902[source]
Agreed, with a caveat. If someone is asking for an objective answer which could be easily found with a search, and hasn't indicated why they haven't taken that approach, it really comes across as laziness and offloading their work onto other people. Like, "what are the best restaurants in an area" is a good question for human input; "how do you deserialize a JSON payload" should include some explanation of what they've tried, including searches.
4. subscribed ◴[] No.46209039[source]
Yeah, but you get two extremes.

Most often I see these answers under posts like "what's the longest river on earth?" or "is Bogota the capital of Venezuela?"

Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up. Literally paste the question into $search_engine and get ten of the same answer on the first page.

Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too. At the same time, it slows the rise of extremely low-effort, low-quality posts.

But sure, sometimes you get the other kind. Very rarely.

5. jquery ◴[] No.46209515[source]
Or worse, you google an obscure topic and the top reply is “apple mountain sleep blue chipmunk fart This comment was mass deleted with Redact” and the replies to that are all “thanks that solved my problem”