While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and long LLM-generated texts just get in the way of reading real text from real humans (presumably real, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
Why is that relevant to GP's point?
I can't speak for anyone else, but I come to HN to discuss stuff with other humans. If I wanted an LLM's regurgitations (it's not AI, it's a predictive text algorithm), I could generate those myself; I don't need "helpful" HNers to do it for me unasked.
When I come here I want to have a discussion with other sentient beings, not the gestalt of training data regurgitated by a bot.
Perhaps that makes me old-fashioned and/or bigoted against interacting with large language models, but that's what I want.
In discussion, I want to know what other sentient beings think, not an aggregation of text tokens selected by their probability of appearing in a particular sequence, as determined by the data fed to the model.
The former can be (though may well not be) a creative, intellectual act by a sentient being. The latter never will be, as it's an aggregation of existing data/information: a sequence of tokens cobbled together based on the frequency with which those tokens appear in a particular order in the model's corpus.
That's not to say that LLMs are useless. They are not. But their place is not in "curious conversation," IMNSHO.