
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other existing guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. cwmoore | No.46210564
Yes. Embarrassing cringe, whether or not it is noted.

But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are not attributable to real-life experience or effort. For the moment I have accepted the additional overhead.

As with most readers, I have a habit of estimating the validity of the expertise, and the experiential biases, behind comments, but that is becoming untenable.

Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so that their actual value, informational complexity, humor, and salience may be compared?

Though many obviously human commenters are actually inferior to answers from "let me chatgpt that for you."

I have had healthy suspicions for a while now.