
881 points embedding-shape | 2 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real (presumably, at least) humans.

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

flkiwi ◴[] No.46208295[source]
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.

replies(9): >>46208592 #>>46208859 #>>46209151 #>>46209987 #>>46210530 #>>46210557 #>>46210638 #>>46210955 #>>46211367 #
1. icoder ◴[] No.46210638[source]
Totally agree if the AI or search results are a (relatively) direct answer to the question.

But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'

replies(1): >>46214328 #
2. 0manrho ◴[] No.46214328[source]
> ($AI) suggests

Same logic still applies. If I gave a shit what it "thought" or suggests, I'd prompt the $AI in question, not HN users.

That said, I'm not against a monthly (or whatever regular periodic interval the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit: interesting prompts, interesting results, cataloguing changes over time, etc.

It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts it just feels like low-effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the monthly "Who's Hiring/Trying to get hired" threads, but that value/interest would drop precipitously if each comment within them were its own individual submission.