
881 points | embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (assumed human, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. alwa | No.46208730
I tend to trust the voting system to separate the wheat from the chaff. If I were to try and draw a line, though, I’d start at the foundation: leave room for things that add value, avoid contributions that don’t. I’d suggest that line might be somewhere like “please don’t quote LLMs directly unless you can identify the specific value you’re adding above and beyond.” Or “…unless you’re adding original context or using them in a way that’s somehow non-obvious.”

Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”

Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”

And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”

There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.

[0] https://americanart.si.edu/blog/andrew-clemens-sand-art