While the guidelines were written (and iterated on) in a different era, it seems like it might be time to discuss whether LLM-generated comments like these should be welcome on HN.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (or so I assume).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
It's a huge asterisk: a way to avoid stating something as fact while still flagging something that could or should be explored further.
(This would be nonsense if they sent me an email or wrote up an issue this way, but in an ad-hoc conversation it makes sense to me.)
I think it's different on HN and other message boards, though: people here aren't really using it to hedge. If they don't personally believe something to be the case (or have a question to ask), why are they posting at all? There's no value there.