At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcomed on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably real, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?
> At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for proof, even when that proof is easy enough for either party to find.
IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.
But yes, it is rude to just respond "source?" unless they are making some wild, batshit claims.