882 points by embedding-shape | 121 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether these sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. gortok ◴[] No.46206694[source]
While we will never be able to get folks to stop using AI to “help” them shape their replies, it's super annoying when folks think that by using AI they're doing others a favor. If I wanted to know what an AI thinks, I'd ask it. I'm here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
2. sbrother ◴[] No.46206849[source]
I strongly agree with this sentiment and I feel the same way.

The one exception for me, though, is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they imbue their output with that "AI style". I'm not sure what the solution here is, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

replies(11): >>46206883 #>>46206949 #>>46206957 #>>46206964 #>>46207130 #>>46207590 #>>46208069 #>>46208723 #>>46209062 #>>46209658 #>>46211403 #
3. justin66 ◴[] No.46206883[source]
As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.
4. tensegrist ◴[] No.46206949[source]
One solution that appeals to me (and which I have myself used in online spaces where I don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish.

I don't think it is likely to catch on, though, outside of culturally multilingual environments.

replies(1): >>46207031 #
5. AnimalMuppet ◴[] No.46206957[source]
Maybe they should say "AI used for translation only". And maybe we English speakers who don't care what AI "thinks" should still be tolerant of it for translations.
6. kps ◴[] No.46206964[source]
When I occasionally use machine translation (MTL) into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.
replies(1): >>46207027 #
7. sejje ◴[] No.46206977[source]
This is the only reasonable take.

It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.

Luckily I've not found a lot of that here, and what I do find has usually been downvoted plenty.

Maybe we could have a new flag option that becomes visible to everyone once a comment gets enough "AI" votes, so you can skip reading it.

replies(2): >>46208966 #>>46209016 #
8. hotsauceror ◴[] No.46207007[source]
I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."

replies(7): >>46207092 #>>46207476 #>>46209024 #>>46209098 #>>46209421 #>>46210608 #>>46210884 #
9. sejje ◴[] No.46207027{3}[source]
I think multi-language forums with AI translators are a cool idea.

You post in your own language, and the site builds a translation for everyone, but they can also see your original, etc.

I think building it as a forum feature rather than a browser feature is maybe worthwhile.

replies(2): >>46207070 #>>46208595 #
10. internetter ◴[] No.46207031{3}[source]
> i don't think it is likely to catch on, though, outside of culturally multilingual environments

It can if the platform has built-in translation with an appropriate disclosure! For instance, on Twitter or Mastodon.

https://blog.thms.uk/2023/02/mastodon-translation-options

11. pjerem ◴[] No.46207070{4}[source]
You know that this is the most hated feature of Reddit? (Because the translations are shitty, so maybe that can be improved.)
replies(5): >>46207124 #>>46208037 #>>46209135 #>>46209613 #>>46212401 #
12. gardenhedge ◴[] No.46207092[source]
I disagree. It's not just a potential avenue for further investigation; IMO AI should always be consulted
replies(2): >>46207341 #>>46208634 #
13. sejje ◴[] No.46207124{5}[source]
I didn't, but I don't think it would work well on an established English-only forum.

It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.

I'm also open to the thought that it's a terrible idea.

replies(1): >>46208796 #
14. emaro ◴[] No.46207130[source]
Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
replies(1): >>46208752 #
15. whimsicalism ◴[] No.46207266[source]
I think there's well-done (and usually unnoticeable) AI use, and poorly done (and insulting) AI use. I don't agree that the two are always the same, but I think lots of people believe they are doing the former without being aware enough to realize they are doing the latter.
replies(1): >>46208391 #
16. OptionOfT ◴[] No.46207341{3}[source]
But I'm not interested in the AI's point of view. I have done that myself.

I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experiences in the data it ingested. The things that are unique will not surface because they aren't seen enough times.

Your value is not in copy-pasting. It's in your experience.

replies(1): >>46208546 #
17. JeremyNT ◴[] No.46207476[source]
In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody has already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."

It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.

(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)

I think this is different on HN and other message boards: it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.

replies(2): >>46208893 #>>46209959 #
18. guizadillas ◴[] No.46207590[source]
Non-native English speaker here:

Just use a spell checker and that's it. You don't need LLMs to translate for you if your goal is learning the language

replies(1): >>46208210 #
19. deadbabe ◴[] No.46207964[source]
On a similar sentiment, I’m sick and tired of people telling others to go google stuff.

The point of asking on a public forum is to get socially relatable human answers.

replies(4): >>46208000 #>>46208291 #>>46208902 #>>46209039 #
20. delfinom ◴[] No.46207981[source]
It's kinda funny how internet culture once had "lmgtfy" links because people were asking questions instead of just searching Google.

But now people are vomiting chatgpt responses instead of linking to chatgpt.

replies(2): >>46209069 #>>46209615 #
21. ◴[] No.46208037{5}[source]
22. parliament32 ◴[] No.46208069[source]
> I'm not sure what the solution here is

The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!

replies(4): >>46208718 #>>46208816 #>>46209455 #>>46209822 #
23. coffeefirst ◴[] No.46208210{3}[source]
Better yet, I'd rather read some unusual word choices from someone who's clearly put a lot of work into learning English than polished output from a robot.
replies(3): >>46208397 #>>46208579 #>>46209330 #
24. Balgair ◴[] No.46208275[source]
Aside:

When someone says: "Source?", is that kinda the same thing?

Like, I'm just going to google the thing the person is asking for, same as they can.

Should asking for sources be banned too?

Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.

replies(2): >>46208850 #>>46211930 #
25. jedbrooke ◴[] No.46208291[source]
I’ve seen so many SO and other forum posts where the first comment is someone smugly saying “just google it, silly”.

Only, I'm not the one who posted the original question. I DID google (well, DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply

replies(1): >>46209515 #
26. amelius ◴[] No.46208391[source]
"I asked AI and it said basically the same as you."
27. buildbot ◴[] No.46208397{4}[source]
Yep, it's a two-way learning street: you can learn new things from non-native speakers, and they can learn from you as well. Any kind of auto-translation removes this. (It's still important to have for non-fluent people, though!)
28. zby ◴[] No.46208494[source]
I strongly disagree. When I post something that AI wrote, I do it because it explains my thoughts better than I can: it digs deeper and finds support for intuitions that I cannot articulate nicely. I quote the AI because I feel that is fair; if you ban this, you would just lose the information that it was generated.
replies(6): >>46208524 #>>46208541 #>>46208659 #>>46208877 #>>46209078 #>>46210738 #
29. simianparrot ◴[] No.46208524[source]
You have to be joking
30. dhosek ◴[] No.46208541[source]
Meh. Might as well encourage people to post links to search results then too.
replies(1): >>46209405 #
31. zby ◴[] No.46208546{4}[source]
What if I agree with what AI wrote? Should I try to hide that it was generated?
replies(2): >>46208717 #>>46208929 #
32. dhosek ◴[] No.46208579{4}[source]
Indeed, this sort of “writing with an accent” can illuminate interesting aspects of both English and the speakers’ native language that I find fascinating.
replies(1): >>46209699 #
33. debugnik ◴[] No.46208595{4}[source]
That's Twitter currently, in a way. I've seen and had short conversations in which each person speaks their own language and trusts the other to use the built-in translation feature.
34. JoshTriplett ◴[] No.46208634{3}[source]
If I wanted to consult an AI, I'd consult an AI. "I consulted an AI and pasted in its answer" is worse than worthless. "I consulted an AI and carefully checked the result" might have value.
35. SunshineTheCat ◴[] No.46208639[source]
I am just sad that I can no longer use em dashes without people immediately assuming what I wrote was AI. :(
replies(3): >>46208733 #>>46208747 #>>46209996 #
36. SunshineTheCat ◴[] No.46208659[source]
This is like saying "I use a motorized scooter at walmart, not because I can't walk, but because it 'walks' better than I can."
37. SoftTalker ◴[] No.46208676[source]
Agree, and I think it might also be useful to have that be grounds for a shadowban if we start seeing this get out of control. I'm not interested, even slightly, in what an LLM has to say about a thread on HN. If I see an account posting an obvious LLM copy/paste, I'm not interested in seeing anything from that account either. Maybe a warning on the first offense is fair, but it should not be tolerated, or this site will just drown in the slop.
38. MarkusQ ◴[] No.46208717{5}[source]
Did you agree with it before the AI wrote it though (in which case, what was the point of involving the AI)?

If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.

replies(1): >>46209268 #
39. akavi ◴[] No.46208718{3}[source]
You are aware that insofar as AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?

(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)

replies(3): >>46208784 #>>46209225 #>>46213296 #
40. SAI_Peregrinus ◴[] No.46208723[source]
If I want to participate in a conversation in a language I don't understand I use machine translation. I include a disclaimer that I've used machine translation & hope that gets translated. I also include the input to the machine translator, so that if someone who understands both languages happens to read it they might notice any problems.
replies(2): >>46210153 #>>46210862 #
41. ◴[] No.46208733[source]
42. MarkusQ ◴[] No.46208747[source]
Go ahead, use em—let the haters stew in their own typographically-impoverished purgatory.
43. neltnerb ◴[] No.46208750[source]
I think what's important here is to reduce harm, even if the result is still a little annoying. If you try to completely ban mentioning that something is LLM-written, you'll just have people doing it without a disclaimer...

Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!

Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice, whether there's a disclaimer or not. These do not get caught quickly, and someone clicking the link will likely generate ad revenue that incentivizes people to keep doing it.

LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.

44. SoftTalker ◴[] No.46208752{3}[source]
I honestly think that very few people here are completely non-conversant in English. For better or worse, it's the dominant language; almost everyone who doesn't speak English natively learns it in school.

I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.

replies(1): >>46211903 #
45. swiftcoder ◴[] No.46208784{4}[source]
> it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT

The objective of that model, however, is quite different to that of an LLM.

46. monerozcash ◴[] No.46208796{6}[source]
I think the audience that would be interested in this is vanishingly small, there exist relatively few conversations online that would be meaningfully improved by this.

I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any English to have, on average, a far lower skill level than those who know at least basic English.

47. smallerfish ◴[] No.46208816{3}[source]
Google Translate doesn't hold a candle to LLMs at translating between even common languages.
48. officeplant ◴[] No.46208850[source]
>Should asking for sources be banned too?

IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.

But yes it is rude to just respond "source?" unless they are making some wild batshit claims.

49. officeplant ◴[] No.46208877[source]
> if you ban this you would just lose the information that it was generated.

The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.

50. ◴[] No.46208883[source]
51. lanstin ◴[] No.46208893{3}[source]
Yeah, if the person doing it is smart, I would trust that they used a reasonable prompt and ruled out flagrant BS answers. Sometimes the key thing is just knowing the name of the thing you're after. It's equally as good/annoying as reporting what a Google search gives for the answer. I guess I assume that mostly people will do the AI query/search and then decide to share the answer based on how good or useful it seems.
52. delecti ◴[] No.46208902[source]
Agreed, with a caveat. If someone is asking for an objective answer which could be easily found with a search, and hasn't indicated why they haven't taken that approach, it really comes across as laziness and offloading their work onto other people. Like, "what are the best restaurants in an area" is a good question for human input; "how do you deserialize a JSON payload" should include some explanation for what they've tried, including searches.
53. subscribed ◴[] No.46208929{5}[source]
No, but this is different.

"I asked an $LLM and it said" is very different than "in my opinion".

Your opinion may be supported by any sources you want as long as it's a genuine opinion (yours), presumably something you can defend as it's your opinion.

replies(1): >>46209340 #
54. fwip ◴[] No.46208966[source]
I'd love to see that for article submissions, as well.
55. manmal ◴[] No.46209016[source]
What LLMs generate is an amalgamation of the human content they have been trained on. I get that you want what actual humans think, but that's also basically a weighted amalgamation. Real, actual insight is incredibly rare, and I doubt you see much of it on HN (sorry guys; I'll live with the downvotes).
replies(2): >>46210008 #>>46210021 #
56. giancarlostoro ◴[] No.46209024[source]
You can have the same problem with Googling things; LLMs usually form conclusions I align with when I do the independent research. Google isn't anywhere near as good as it was 5 years ago. All the years of crippling their search ranking system and suppressing results have caught up to them, to the point that most LLMs are Google replacements.
57. subscribed ◴[] No.46209039[source]
Yeah, but you get two extremes.

Most often I see these answers under posts like "what's the longest river on earth?" or "is Bogota the capital of Venezuela?"

Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up: literally paste the question into $search_engine and get ten of the same answers on the first page.

Actually, sometimes telling a person like this to "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too. At the same time it slows the rise of extremely low-effort, low-quality posts.

But sure, sometimes you get the other kind. Very rarely.

58. jampa ◴[] No.46209062[source]
I wrote about this recently. You need to prompt better if you don't want AI to flatten your original tone into corporate speak:

https://jampauchoa.substack.com/p/writing-with-ai-without-th...

TL;DR: Ask for a line edit: "Line edit this Slack message / HN comment." It goes beyond fixing grammar (it also improves flow) without killing your meaning or adding AI-isms.

59. subscribed ◴[] No.46209069[source]
No, linking to ChatGPT is not a response. For some sorts of questions it (which model exactly is it?) might be better; for some it might be worse.
60. i80and ◴[] No.46209078[source]
This is... I'll go with "dystopian". If you're not sure you can properly explain an idea, you should think about it more deeply.
replies(2): >>46209202 #>>46209316 #
61. ndsipa_pomu ◴[] No.46209098[source]
To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.
62. BrtByte ◴[] No.46209129[source]
HN is a mix of personal experience, weird edge cases, and even the occasional hot take. That's what makes HN valuable.
63. subscribed ◴[] No.46209135{5}[source]
OTOH, I participate in a wonderful Discord server community, primarily Italians and Brazilians, with other nationalities sprinkled in.

We heavily use connected translating apps, and it feels really great. It would be such a massive PITA to copy every message somewhere outside, translate it, and paste it back.

Now, discussions usually follow the sun, and when someone not speaking, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch) and just join.

We know it's not perfect, but it works. Without the embedded translation, it absolutely wouldn't.

I also made pretty heavy use of a Telegram channel with a similar setup, but it was even better, with transparent auto-translation.

64. that_guy_iain ◴[] No.46209200[source]
There will be many cases you won't even notice. When people know how to use AI to help with their writing, it's not noticeable.
65. jon-wood ◴[] No.46209202{3}[source]
Or simply not participate in that conversation. It’s not obligatory to have an opinion on all subjects.
replies(1): >>46209384 #
66. parliament32 ◴[] No.46209225{4}[source]
I have seen Google Translate hallucinate exactly zero times over thousands of queries over the years. Meanwhile, LLMs emit garbage roughly 1/3 of the time, in my experience. Can you provide an example of Translate hallucinating something?
replies(2): >>46210006 #>>46210018 #
67. zby ◴[] No.46209268{6}[source]
Well, the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details, sometimes showing new ways.

I find the second paragraph contradictory: either you fear that I would agree with random stuff that the AI writes, or you believe that the sycophantic AI is writing what I believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?

replies(2): >>46209552 #>>46209991 #
68. zby ◴[] No.46209316{3}[source]
Why? This is like saying that you should not use a car because you should walk. Sometimes yes - but as a general rule?
69. delaminator ◴[] No.46209329[source]
While I don't disagree with the general sentiment, a black-and-white ban leaves no room for nuance.

I think it's a very valid question to ask the AI "which coding language is most suitable for you to use, and why?" or other similar questions.

replies(1): >>46211929 #
70. RankingMember ◴[] No.46209330{4}[source]
100%! I will always give the benefit of the doubt when I see odd syntax/grammar (and do my best to provide helpful correction if it's off-base to the extent that it muddies your point), but hit me with a wordy, em-dash battered pile of gobbledygook and you might as well be spitting in my face.
71. danielmarkbruce ◴[] No.46209332[source]
And yet people ask for sources all the time. "I don't care what you think, show me what someone else thinks".
72. zby ◴[] No.46209340{6}[source]
I don't know - the linked examples were low quality - sure.
73. zby ◴[] No.46209384{4}[source]
I thought the point was to post valuable thoughts, because it is interesting to read them. But now you suggest that it depends on how they were generated.
replies(2): >>46211723 #>>46215683 #
74. zby ◴[] No.46209405{3}[source]
I like it when someone links to where they found the information.
75. crazygringo ◴[] No.46209416[source]
I actually disagree, in certain cases. Just today I saw:

https://news.ycombinator.com/item?id=46204895

when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.

When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.

If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.

replies(2): >>46209828 #>>46210076 #
76. MetaWhirledPeas ◴[] No.46209421[source]
> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."

I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.

If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.

replies(1): >>46209708 #
77. lurking_swe ◴[] No.46209455{3}[source]
IMO ChatGPT is a much better translator, especially if you're using one of their normal models like 5.1. I've used it many times with an obscure and difficult Slavic language that I'm fluent in, for example, and ChatGPT nailed it whereas Google Translate sounded less natural.

The big difference? I could easily prompt the LLM with "I'd like to translate the following into language X. For context, this is a reply to their email on topic Y, and Z is female."

Doing even a tiny bit of prompting will easily get you better results than Google Translate. Some languages have words with multiple meanings, and the context of the sentence/topic is crucial. So is gender in many languages! You can't provide hints like that to Google Translate, especially if you are starting from an ungendered language like English.

I do still use Google Translate, though, when my phone is offline or when translating very long text; LLMs perform poorly with larger context windows.

78. jquery ◴[] No.46209515{3}[source]
Or worse, you google an obscure topic and the top reply is “apple mountain sleep blue chipmunk fart This comment was mass deleted with Redact” and the replies to that are all “thanks that solved my problem”
79. MarkusQ ◴[] No.46209552{7}[source]
> why would you prefer my writing over an LLM-generated one?

Because I'm interested in hearing your voice and your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.

80. tjoff ◴[] No.46209613{5}[source]
Reddit would be even worse if the translations were better; as it is, you don't waste much time, because the bad translation hits you right in the face. Never translate something without asking first.

When I search for something in my native tongue, it is almost always because I want the perspective of people living in my country who have experience with X. Now the results are riddled with Reddit posts from all over the world with crappy translations instead.

81. TheAdamist ◴[] No.46209615[source]
Same acronym still works; just swap Gemini in place of Google.
82. estebarb ◴[] No.46209658[source]
I have found that prompting "translate my text to English, do not change anything else" works fine.

However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I make as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.

83. VBprogrammer ◴[] No.46209699{5}[source]
Yeah, the German speakers I work with often say "Can you do this until [some deadline]?" when they mean "Can you complete this by [some deadline]?"

It's common enough that it must be a literal translation difference between German and English.

84. dogleash ◴[] No.46209708{3}[source]
I am amused by the defeatism in your response that expecting anyone to actually try anymore is a lost cause.
replies(3): >>46209756 #>>46210293 #>>46210631 #
85. MetaWhirledPeas ◴[] No.46209756{4}[source]
> expecting anyone to actually try anymore is a lost cause

Well now you're putting words in my mouth.

If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.

86. Kim_Bruning ◴[] No.46209822{3}[source]
Google Translate used to be the best, but it's essentially outdated technology now, surpassed by even small open-weight multilingual LLMs.

Caveat: the remaining thing to watch out for is that some LLMs are not, by default, prompted to translate accurately, due to (indeed) their hallucination and summarization tendencies.

* Check a given LLM with language-pairs you are familiar with before you commit to using one in situations you are less familiar with.

* always proof-read if you are at all able to!

Ultimately you should be responsible for your own posts.

replies(2): >>46211117 #>>46211966 #
87. zacmps ◴[] No.46209828[source]
LLM summaries of papers often make overly broad claims [1].

I don't think this is a good example personally.

[1] https://arxiv.org/abs/2504.00025

replies(1): >>46210325 #
88. dogleash ◴[] No.46209959{3}[source]
> can actually be pretty useful. It indicates somebody already minimally investigated a thing

Every time this happens to me at work one of two things happens:

1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.

2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.

89. swampangel ◴[] No.46209991{7}[source]
> Well, the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details

> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?

Because the AI will happily argue either side of a debate; in both cases, the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.

Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?

replies(1): >>46211773 #
90. dinkleberg ◴[] No.46209996[source]
Some will blindly dismiss anything using them as AI-generated, but realistically the em dash is only one sign among many. Way more obvious is the actual style of the writing. I use Claude all the time, and I can instantly tell if a blog post I'm reading was written with Claude. It is so distinctive. People use some of the patterns it uses some of the time; it uses all of them all of the time.
replies(1): >>46211654 #
91. Teever ◴[] No.46210006{5}[source]
Every single time it mistranslates something, it is hallucinating.
92. dogleash ◴[] No.46210008{3}[source]
I'm downvoting exclusively for your comment about downvotes.
93. lazide ◴[] No.46210018{5}[source]
Agreed, and I use G Translate daily to handle living in a country where 95% of the population doesn't speak any language I do.

It occasionally messes up, but not by hallucinating; usually it's grammar salad because what I put into it was somewhat ambiguous. It's also terrible with genders in Romance languages, but then that is a nightmare for humans too.

Palmada palmada bot.

94. dinkleberg ◴[] No.46210021{3}[source]
Why do you suppose we come to HN if not for actual insight? There are other sites much better for getting an endless stream of weighted amalgamations of human content.
replies(2): >>46210320 #>>46211304 #
95. Rarebox ◴[] No.46210076[source]
That's a pretty good example. The summary is actually useful, yet it still annoys me.

But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.

One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.

replies(1): >>46210463 #
96. MLgulabio ◴[] No.46210153{3}[source]
You are joking, right?

I mean, we're probably not talking about someone who doesn't know English at all; that wouldn't make sense. But I'm German, and I'd probably write in German.

I would often enough tell some LLM to clean up my writing (not on HN; sorry, I'm too lazy for HN).

97. chatmasta ◴[] No.46210293{4}[source]
If someone is asking a technical question along the lines of “how does this work” or “can I do this,” then I’d expect them to Google it first. Nowadays I’d also expect them to ask ChatGPT. So I’d appreciate their preamble explaining that they already did that, and giving me the chance to say “yep, ChatGPT is basically right, but there’s some nuance about X, Y, and Z…”
98. ergonaught ◴[] No.46210320{4}[source]
Coming here for insight does not in any way demonstrate that genuine insight is actually widely available here.
99. crazygringo ◴[] No.46210325{3}[source]
When there's nothing else to go on, it's still more useful than nothing.

The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.

I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried if its summary isn't 100% faithful.

100. crazygringo ◴[] No.46210463{3}[source]
I definitely read the comments to learn. I love when there's a post about something I didn't know about, and I love when HN'ers can explain details that the post left confusing.

If I'm looking for entertainment, HN is not exactly my first stop... :P

101. mikkupikku ◴[] No.46210608[source]
These days, most people who try googling for answers end up reading an article which was generated by AI anyway. At least if you go right to the bot, you know what you're getting.
102. mikkupikku ◴[] No.46210631{4}[source]
Expecting people to stop asking casual questions to LLMs is definitely a lost cause. This tech isn't going anywhere, no matter how much you dislike it.
103. pc86 ◴[] No.46210738[source]
If an LLM writes better than you do, you need to take a long look in the mirror and figure out what you can do to fix that, because it's not a good thing.
104. KaiserPro ◴[] No.46210862{3}[source]
You are adding your own comments and translating them; that's fine.

If it were just a translation, then it would add no value.

105. KaiserPro ◴[] No.46210884[source]
"lets ask the dipshit" is how my colleague phrases it
106. ◴[] No.46211117{4}[source]
107. manmal ◴[] No.46211304{4}[source]
It’s obviously an amalgamation that’s weighted in favor of your interests.
108. carsoon ◴[] No.46211403[source]
I think even when this is used, they should include "(translated by LLM)" for transparency. When you use an intermediate layer, there is always bias.

I've written blog articles in HTML and asked LLMs to change certain HTML structure, and they ALSO tried to change the wording.

If a user doesn't speak a language well, they won't know whether their meaning was altered.

109. Semiapies ◴[] No.46211449[source]
It's at least a factor in why I value HN commentary so much less than I used to.
110. Kim_Bruning ◴[] No.46211654{3}[source]
You're absolutely right. No wonder you can recognize it so easily. Let me just sit with that.

edit 1: The sincerest form of flattery

edit 2: To be fair, Claude Opus 4.5 seems to encourage people to be nicer to each other if you let them.

111. 12_throw_away ◴[] No.46211723{5}[source]
> post valuable thoughts

This is neither the mechanism nor the goal of human communication, not even on the internet.

112. Kim_Bruning ◴[] No.46211773{8}[source]
You could ask Kimi K2 to demolish your point instead, and you may have to hold it back from insulting your mom in the PS.

Generally, if your point holds up to polishing under Kimi pressure, by all means post it on HN, I'd say.

Other LLMs do tend to be gentler with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.

Try this: ask an LLM to read the view of the person you're answering, and ask it to steelman their arguments. Now consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.

113. ferngodfather ◴[] No.46211831[source]
Yeah, like if I wanted to know what a particular AI says, I'd have asked it.
114. Wowfunhappy ◴[] No.46211903{4}[source]
...I'm not sure I agree. I sometimes have a lot of trouble understanding what non-native English speakers are trying to say. I appreciate that they're doing their best, and as someone who can only speak English, I have the utmost respect for anyone who knows multiple languages—but I just find it really hard.

Some AI translation is so good now that I do think it might be a better option. If they try to write in English and mess up, the information is just lost; there's nothing I can do to recover the real meaning.

115. stephen_g ◴[] No.46211929[source]
But if I wanted to ask an AI I would put that into ChatGPT, not ask HN. I would only ask that on HN if I wanted other people's opinions!

You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.

116. Kim_Bruning ◴[] No.46211930[source]
I actually use LLMs to help me dig up sources. It's quicker than Google, and you get them nicely formatted besides.

But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.

But yeah, using LLMs to help with actually doing the research? Totally a thing.

117. gertlex ◴[] No.46211966{4}[source]
I haven't had a reason to use Google Translate in years, so will ask: Have they opted to not use/roll out modern LLM translation capabilities in the Google Translate product?
replies(1): >>46212394 #
118. deaux ◴[] No.46212394{5}[source]
As of right now, correct.
119. Terr_ ◴[] No.46212401{5}[source]
I think we should distinguish which of these features is good and which is hated:

1. An automatic translation feature.

2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.

The only problem I see with #2 involves malicious usage, where the author is out to deliberately sow confusion/outrage or trying to evade moderation by presenting fundamentally different messages.

120. fouc ◴[] No.46213296{4}[source]
Google Translate hasn't moved to LLM-style translation yet, unfortunately
121. jon-wood ◴[] No.46215683{5}[source]
Yeah, but if you're having to turn to a machine to compose your thoughts on a subject, they're probably not that valuable. In an online community like this, the interesting (not necessarily valuable) thoughts are the ones that come from personal experience and raise the non-obvious points that an LLM is never going to come up with.