
881 points | embedding-shape | 3 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

WesolyKubeczek (No.46206991)
Yes. If I wanted an LLM’s opinion, I would have asked it myself.
replies(1): >>46207221
1. newsoftheday (No.46207221)
Would your prompt have been identical and produced identical results, today or tomorrow? Which version of the AI would you have used? Were there bugs present that made the post or comment interesting, bugs that would have been absent from your own response because they had already been fixed?
replies(2): >>46208089, >>46211771
2. WesolyKubeczek (No.46208089)
In any case, it should come with some more thought: a summary, a highlight, what you found useful or insightful about it. Just dumping the response is lazy and disrespectful.

And if two people can get two opposite results by giving the same very specific prompt to the same model, the output looks like bunk anyway. LLMs don't care whether they are correct.
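
(To make that concrete, here's a toy sketch of sampling-based decoding. Everything below — the vocabulary, the probabilities — is invented for illustration; no real model or API is involved. The point is only that when output is sampled rather than looked up, the identical prompt can legitimately come back different on different runs.)

    import random

    # Toy sketch: made-up next-token probabilities for one fixed
    # "prompt". These numbers are invented, not from any real model.
    next_token_probs = {
        "fixed": 0.5,
        "reintroduced": 0.3,
        "cosmetic": 0.2,
    }

    def sample_next(seed):
        rng = random.Random(seed)
        tokens, weights = zip(*next_token_probs.items())
        # Weighted random choice: this is the sampling step that
        # makes repeated runs of the same prompt non-deterministic.
        return rng.choices(tokens, weights=weights)[0]

    # Identical "prompt", different runs, potentially different output:
    print(sample_next(seed=1))
    print(sample_next(seed=2))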

3. nobody9999 (No.46211771)
>Would your prompt have been identical and produced identical results, today or tomorrow? Which version of the AI would you have used? Were there bugs present that made the post or comment interesting, bugs that would have been absent from your own response because they had already been fixed?

Why is that relevant to GP's point?

I can't speak for anyone else, but I come to HN to discuss things with other humans. If I wanted an LLM's regurgitations (it's not AI, it's a predictive-text algorithm), I could generate those myself; I don't need "helpful" HNers to do it for me unasked.

When I come here I want to have a discussion with other sentient beings, not the gestalt of training data regurgitated by a bot.

Perhaps that makes me old-fashioned and/or bigoted against interacting with large language models, but that's what I want.

In discussion, I want to know what other sentient beings think, not an aggregation of text tokens chosen by their probability of appearing in a particular sequence, as determined by the data fed to the model.

The former can be (but may well not be) a creative, intellectual act by a sentient being. The latter never will be, as it's an aggregation of existing data/information: a sequence of tokens cobbled together based on the frequency with which those tokens appear in a particular order in the model's corpus.
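
(If you want the frequency argument made literal, here's a deliberately crude toy: a bigram model that generates text purely from how often one token follows another in a tiny made-up corpus. Real transformer LLMs condition on far more than the previous token, so treat this as a caricature of the mechanism, not a description of any actual model.)

    import random
    from collections import Counter, defaultdict

    # Tiny invented corpus; every token and count below exists
    # only for this sketch.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which token follows which in the corpus.
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    def generate(start, n, seed=0):
        rng = random.Random(seed)
        out = [start]
        for _ in range(n):
            counts = follows[out[-1]]
            if not counts:
                break  # token was never seen mid-corpus: dead end
            tokens, weights = zip(*counts.items())
            # Next token picked by observed frequency alone.
            out.append(rng.choices(tokens, weights=weights)[0])
        return " ".join(out)

    print(generate("the", 6))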

That's not to say that LLMs are useless. They are not. But their place is not in "curious conversation," IMNSHO.