At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
> At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
But: just because it's easy doesn't mean you get to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen about how obviously dumb they are. Formulating why they are dumb - now there's the challenge, and the intellectual honesty.
But yeah, using LLMs to help with actually doing the research? Totally a thing.