While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
Strong agree.
If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.
If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.
The least valuable, lowest-signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.
It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.