While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcomed on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments, or something else completely?
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.
I don't think it is likely to catch on, though, outside of culturally multilingual environments.
It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.
Luckily I've not found a lot of that here. What I have found has usually been downvoted plenty.
Maybe we could have a new flag option that becomes visible to everyone once it gets enough "AI" votes, so you could skip reading it.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
You post in your own language, and the site builds a translation for everyone, but they can also see your original etc.
I think building it as a forum feature rather than a browser feature might be worthwhile.
It can if the platform has built-in translation with an appropriate disclosure! For instance, on Twitter or Mastodon.
It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.
I'm also open to the thought that it's a terrible idea.
I want to hear your thoughts, based on your unique experience, not the AI's, which are an average of the experience in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different than on HN or other message boards; it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
Just use a spell checker and that's it; you don't need LLMs to translate for you if your goal is learning the language.
But now people are vomiting chatgpt responses instead of linking to chatgpt.
The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
Only that I'm not the one who posted the original question; I DID google (well, DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply.
If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.
(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to continue doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.
I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.
The objective of that model, however, is quite different to that of an LLM.
I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any English to have, on average, a far lower skill level than those who know at least basic English.
IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.
But yes, it is rude to just respond "source?" unless they are making some wild batshit claims.
The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.
"I asked an $LLM and it said" is very different than "in my opinion".
Your opinion may be supported by any sources you want as long as it's a genuine opinion (yours), presumably something you can defend as it's your opinion.
Most often I see these answers under posts like "what's the longest river on earth?" or "is Bogota the capital of Venezuela?"
Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up. Literally paste the question into $search_engine and get ten of the same answers on the first page.
Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too. At the same time, it slows the rise of extremely low-effort/low-quality posts.
But sure, sometimes you get the other kind. Very rarely.
https://jampauchoa.substack.com/p/writing-with-ai-without-th...
TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.
We heavily use connected translating apps and it feels really great. It would be such a massive pita to copy every message somewhere outside, having to translate it and then back.
Now, discussions usually follow the sun, and when someone not speaking, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch), and just join.
We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.
I also used pretty heavily a telegram channel with similar setup, but it was even better, with transparent auto translation.
I find the second paragraph contradictory: either you fear that I would agree with random stuff that the AI writes, or you believe that the sycophant AI is writing what I believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
I think it's a very valid question to ask the AI "which coding language is most suitable for you to use and why" or other similar questions.
https://news.ycombinator.com/item?id=46204895
when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.
When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.
If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
The big difference? I could easily prompt the LLM with “i’d like to translate the following into language X. For context this is a reply to their email on topic Y, and Z is a female.”
Doing even a tiny bit of prompting will easily get you better results than google translate. Some languages have words with multiple meanings and the context of the sentence/topic is crucial. So is gender in many languages! You can’t provide any hints like that to google translate, especially if you are starting with an un-gendered language like English.
I do still use Google Translate though, when my phone is offline or when translating very long text. LLMs perform poorly with larger context windows.
Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.
When I search for something in my native tongue it is almost always because I want the perspective of people living in my country having experience with X. Now the results are riddled with reddit posts that are from all over the world with crappy translation instead.
However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.
It's common enough that it must be a literal translation difference between German and English.
Well now you're putting words in my mouth.
If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.
Caveat: The remaining thing to watch out for is that some LLMs are not -by default- prompted to translate accurately due to (indeed) hallucination and summarization tendencies.
* Check a given LLM with language-pairs you are familiar with before you commit to using one in situations you are less familiar with.
* Always proofread if you are at all able to!
Ultimately you should be responsible for your own posts.
I don't think this is a good example personally.
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.
> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
Because the AI will happily argue either side of a debate, in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.
Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?
It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.
Pat pat, bot.
But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.
One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.
I mean, we probably don't talk about someone not knowing English at all, that wouldn't make sense, but I'm German and I probably write German.
I would often enough tell some LLM to clean up my writing (not on HN, sorry, I'm too lazy for HN).
The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.
I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried if its summary isn't 100% faithful.
If I'm looking for entertainment, HN is not exactly my first stop... :P
I've written blog articles using HTML and asked LLMs to change certain HTML structure, and it ALSO tried to change the wording.
If a user doesn't speak a language well, they won't know whether their meanings were altered.
edit 1: The sincerest form of flattery
edit 2: To be fair, Claude Opus 4.5 seems to encourage people to be nicer to each other if you let it.
This is neither the mechanism nor the goal of human communication, not even on the internet.
Generally, if your point holds up under polishing and Kimi's pressure, by all means post it on HN, I'd say.
Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.
Try this: Ask an LLM to read the view of the person you're answering to, and ask it steelman their arguments. Now think to see if your point is still defensible, or what kinds of sources or data you'd need to bolster it.
Some AI translation is so good now that I do think it might be a better option. If they try to write in English and mess up, the information is just lost, there's nothing I can do to recover the real meaning.
You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.
But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.
But yeah, using LLMs to help with actually doing the research? Totally a thing.
1. An automatic translation feature.
2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.
The only problem I see with #2 involves malicious usage, where the author is out to deliberately sow confusion/outrage or trying to evade moderation by presenting fundamentally different messages.
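The two features described above can be sketched together: posts keep the author's original language and text, and any machine translation is shown with an explicit disclosure plus a pointer back to the original. This is a minimal illustrative sketch; the names (`Post`, `render_for`) and the disclosure format are hypothetical, not a real forum API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    """A post stores the author's original text and language.
    The translation field is optional: it may be filled by an
    automatic translation service, or left empty if unavailable."""
    author: str
    original_lang: str              # e.g. "de"
    original_text: str
    translated_text: Optional[str] = None

def render_for(post: Post, reader_lang: str) -> str:
    """Show the translation when one exists and the reader's language
    differs, always disclosing it as machine-translated so readers
    can fall back to the original for nuance. Otherwise show the
    original text as-is."""
    if post.translated_text and reader_lang != post.original_lang:
        return (f"{post.translated_text}\n"
                f"[machine-translated from {post.original_lang}; "
                f"original available]")
    return post.original_text

p = Post("alice", "de", "Hallo zusammen", translated_text="Hi everyone")
print(render_for(p, "en"))  # translated view, with disclosure
print(render_for(p, "de"))  # same-language reader sees the original
```

Keeping the original text as the canonical record also addresses the moderation concern: moderators can always act on what the author actually wrote, not on the translation.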