
    882 points by embedding-shape | 43 comments

    As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

    While the guidelines were written (and iterated on) in a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

    Some examples:

    - https://news.ycombinator.com/item?id=46164360

    - https://news.ycombinator.com/item?id=46200460

    - https://news.ycombinator.com/item?id=46080064

    Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

    What do you think? Should comments that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

    gortok ◴[] No.46206694[source]
    While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying when folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.

    At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

    replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
    1. sbrother ◴[] No.46206849[source]
    I strongly agree with this sentiment and I feel the same way.

    The one exception for me, though, is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they impose that "AI style" on their output. I'm not sure what the solution here is, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

    replies(11): >>46206883 #>>46206949 #>>46206957 #>>46206964 #>>46207130 #>>46207590 #>>46208069 #>>46208723 #>>46209062 #>>46209658 #>>46211403 #
    2. justin66 ◴[] No.46206883[source]
    As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.
    3. tensegrist ◴[] No.46206949[source]
    one solution that appeals to me (and which i have myself used in online spaces where i don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish

    i don't think it is likely to catch on, though, outside of culturally multilingual environments

    replies(1): >>46207031 #
    4. AnimalMuppet ◴[] No.46206957[source]
    Maybe they should say "AI used for translation only". And maybe us English speakers who don't care what AI "thinks" should still be tolerant of it for translations.
    5. kps ◴[] No.46206964[source]
    When I occasionally use MTL into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.
    replies(1): >>46207027 #
    6. sejje ◴[] No.46207027[source]
    I think multi-language forums with AI translators are a cool idea.

    You post in your own language, and the site builds a translation for everyone, but they can also see your original, etc.

    I think building it as a forum feature rather than a browser feature is maybe worthwhile.

    replies(2): >>46207070 #>>46208595 #
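A minimal sketch of what such a forum feature could look like: every post stores the author's original verbatim, translations are generated (and cached) per reader, and the UI always exposes the original alongside a "this was translated" disclosure. All names here (`Post`, `render_for`, the `translate` stub) are illustrative, not any real forum's API.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    lang: str            # language the author wrote in, e.g. "pt"
    original: str        # always stored verbatim
    translations: dict = field(default_factory=dict)  # reader lang -> cached text

def translate(text: str, src: str, dst: str) -> str:
    """Stub standing in for a real machine-translation backend."""
    return f"[{src}->{dst}] {text}"

def render_for(post: Post, reader_lang: str) -> dict:
    """Render a post for a reader: translate if needed, keep the original one click away."""
    if reader_lang == post.lang:
        return {"text": post.original, "translated": False}
    if reader_lang not in post.translations:
        post.translations[reader_lang] = translate(post.original, post.lang, reader_lang)
    return {
        "text": post.translations[reader_lang],
        "translated": True,          # lets the UI show a disclosure badge
        "original": post.original,   # "see original" affordance
    }
```

The key design choice is that translation is a rendering concern, not a storage concern: the original is never overwritten, so bilingual readers can always check it.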
    7. internetter ◴[] No.46207031[source]
    > i don't think it is likely to catch on, though, outside of culturally multilingual environments

    It can if the platform has built-in translation with an appropriate disclosure! For instance, on Twitter or Mastodon.

    https://blog.thms.uk/2023/02/mastodon-translation-options

    8. pjerem ◴[] No.46207070{3}[source]
    You know that this is the most hated feature of Reddit? (Because the translations are shitty, so maybe that could be improved.)
    replies(5): >>46207124 #>>46208037 #>>46209135 #>>46209613 #>>46212401 #
    9. sejje ◴[] No.46207124{4}[source]
    I didn't, but I don't think it would work well on an established English-only forum.

    It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.

    I'm also open to the thought that it's a terrible idea.

    replies(1): >>46208796 #
    10. emaro ◴[] No.46207130[source]
    Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
    replies(1): >>46208752 #
    11. guizadillas ◴[] No.46207590[source]
    Non-native English speaker here:

    Just use a spell checker and that's it, you don't need LLMs to translate for you if your target is learning the language

    replies(1): >>46208210 #
    12. ◴[] No.46208037{4}[source]
    13. parliament32 ◴[] No.46208069[source]
    > I'm not sure what the solution here is

    The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!

    replies(4): >>46208718 #>>46208816 #>>46209455 #>>46209822 #
    14. coffeefirst ◴[] No.46208210[source]
    Better yet: I'd rather read some unusual word choices from someone who's clearly put a lot of work into learning English than polished output from a robot.
    replies(3): >>46208397 #>>46208579 #>>46209330 #
    15. buildbot ◴[] No.46208397{3}[source]
    Yep, it’s a two-way learning street: you can learn new things from non-native speakers, and they can learn from you as well. Any kind of auto-translation removes this. (It’s still important to have for non-fluent people, though!)
    16. dhosek ◴[] No.46208579{3}[source]
    Indeed, this sort of “writing with an accent” can illuminate interesting aspects of both English and the speakers’ native language that I find fascinating.
    replies(1): >>46209699 #
    17. debugnik ◴[] No.46208595{3}[source]
    That's Twitter currently, in a way. I've seen and had short conversations in which each person speaks their own language and trusts the other to use the built-in translation feature.
    18. akavi ◴[] No.46208718[source]
    You are aware that insofar as AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?

    (while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)

    replies(3): >>46208784 #>>46209225 #>>46213296 #
    19. SAI_Peregrinus ◴[] No.46208723[source]
    If I want to participate in a conversation in a language I don't understand I use machine translation. I include a disclaimer that I've used machine translation & hope that gets translated. I also include the input to the machine translator, so that if someone who understands both languages happens to read it they might notice any problems.
    replies(2): >>46210153 #>>46210862 #
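The convention described above (disclaimer plus the untranslated input, so bilingual readers can catch errors) can be captured in a tiny formatting helper. The function name and message layout are my own illustration, not anything from the thread.

```python
def format_mt_post(translated: str, original: str, src_lang: str) -> str:
    """Wrap a machine-translated message with a disclaimer and the original text,
    so readers of either language can spot translation errors."""
    disclaimer = (
        f"(Machine-translated from {src_lang}; "
        f"the original follows so bilingual readers can catch errors.)"
    )
    return (
        f"{disclaimer}\n\n"
        f"{translated}\n\n"
        f"--- original ({src_lang}) ---\n"
        f"{original}"
    )
```

Posting both halves costs a few lines but makes every translation auditable by anyone who reads the source language.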
    20. SoftTalker ◴[] No.46208752[source]
    I honestly think that very few people here are completely non-conversant in English. For better or worse, it's the dominant language. Almost everyone who doesn't speak English natively learns it in school.

    I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.

    replies(1): >>46211903 #
    21. swiftcoder ◴[] No.46208784{3}[source]
    > it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT

    The objective of that model, however, is quite different to that of an LLM.

    22. monerozcash ◴[] No.46208796{5}[source]
    I think the audience that would be interested in this is vanishingly small; relatively few conversations online would be meaningfully improved by it.

    I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any English to have, on average, a far lower skill level than those who know at least basic English.

    23. smallerfish ◴[] No.46208816[source]
    Google Translate doesn't hold a candle to LLMs at translating between even common languages.
    24. jampa ◴[] No.46209062[source]
    I wrote about this recently. You need to prompt better if you don't want AI to flatten your original tone into corporate speak:

    https://jampauchoa.substack.com/p/writing-with-ai-without-th...

    TL;DR: Ask for a line edit: "Line edit this Slack message / HN comment." It goes beyond fixing grammar (it also improves flow) without killing your meaning or adding AI-isms.
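The "line edit" pattern amounts to a prompt template: name the edit type and the venue, then constrain the model to keep the author's voice. A sketch of such a template follows; the exact instruction wording is my assumption, not a quote from the linked post.

```python
def line_edit_prompt(text: str, venue: str = "HN comment") -> str:
    """Build a prompt asking for a line edit rather than a rewrite,
    so the model improves grammar and flow without flattening tone."""
    return (
        f"Line edit this {venue}. Fix grammar and improve flow, "
        f"but keep my wording, tone, and meaning. Do not add anything.\n\n"
        f"{text}"
    )
```

The resulting string would be sent to whichever chat model you use; the template itself is model-agnostic.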

    25. subscribed ◴[] No.46209135{4}[source]
    OTOH I am participating in a wonderful discord server community, primarily Italians and Brazilians, with other nationalities sprinkled in.

    We heavily use connected translating apps and it feels really great. It would be such a massive pita to copy every message somewhere outside, translate it, and then paste it back.

    Now, discussions usually follow the sun, and when someone not speaking, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch), and just join.

    We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.

    I also used a Telegram channel with a similar setup pretty heavily, and it was even better, with transparent auto-translation.

    26. parliament32 ◴[] No.46209225{3}[source]
    I have seen Google Translate hallucinate exactly zero times over thousands of queries over the years. Meanwhile, LLMs emit garbage roughly 1/3 of the time, in my experience. Can you provide an example of Translate hallucinating something?
    replies(2): >>46210006 #>>46210018 #
    27. RankingMember ◴[] No.46209330{3}[source]
    100%! I will always give the benefit of the doubt when I see odd syntax/grammar (and do my best to provide helpful correction if it's off-base to the extent that it muddies your point), but hit me with a wordy, em-dash battered pile of gobbledygook and you might as well be spitting in my face.
    28. lurking_swe ◴[] No.46209455[source]
    IMO ChatGPT is a much better translator, especially if you're using one of their normal models like 5.1. I've used it many times with an obscure and difficult Slavic language that I'm fluent in, for example, and ChatGPT nailed it whereas Google Translate sounded less natural.

    The big difference? I could easily prompt the LLM with “I’d like to translate the following into language X. For context, this is a reply to their email on topic Y, and Z is a female.”

    Doing even a tiny bit of prompting will easily get you better results than google translate. Some languages have words with multiple meanings and the context of the sentence/topic is crucial. So is gender in many languages! You can’t provide any hints like that to google translate, especially if you are starting with an un-gendered language like English.

    I do still use Google Translate, though, when my phone is offline or when translating very long text. LLMs perform poorly with larger context windows.

    29. tjoff ◴[] No.46209613{4}[source]
    Reddit would be even worse if the translations were better, now you don't have to waste much time because it hits you right in the face. Never ever translate something without asking about it first.

    When I search for something in my native tongue it is almost always because I want the perspective of people living in my country having experience with X. Now the results are riddled with reddit posts that are from all over the world with crappy translation instead.

    30. estebarb ◴[] No.46209658[source]
    I have found that prompting "translate my text to English, do not change anything else" works fine.

    However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I make as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.

    31. VBprogrammer ◴[] No.46209699{4}[source]
    Yeah, the German speakers I work with often say "Can you do this until [some deadline]?" when they mean "Can you complete this by [some deadline]?"

    It's common enough that it must be a literal translation carried over from German. (German "bis" covers both senses.)

    32. Kim_Bruning ◴[] No.46209822[source]
    Google Translate used to be the best, but it's essentially outdated technology now, surpassed by even small open-weight multilingual LLMs.

    Caveat: the remaining thing to watch out for is that some LLMs are not, by default, prompted to translate accurately, due to (indeed) hallucination and summarization tendencies.

    * Check a given LLM with language pairs you are familiar with before you commit to using one in situations you are less familiar with.

    * Always proofread if you are at all able to!

    Ultimately you should be responsible for your own posts.

    replies(2): >>46211117 #>>46211966 #
    33. Teever ◴[] No.46210006{4}[source]
    Every single time it mistranslates something, that is a hallucination.
    34. lazide ◴[] No.46210018{4}[source]
    Agreed, and I use G translate daily to handle living in a country where 95% of the population doesn’t speak any language I do.

    It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.

    Palmada palmada bot.

    35. MLgulabio ◴[] No.46210153[source]
    You are joking, right?

    I mean, we're probably not talking about someone who doesn't know English at all, that wouldn't make sense, but I'm German and would probably write in German.

    I would often enough tell some LLM to clean up my writing (not on HN, sorry, I'm too lazy for HN).

    36. KaiserPro ◴[] No.46210862[source]
    You are adding your own comments and translating them; that's fine.

    If it were just a translation, it would add no value.

    37. ◴[] No.46211117{3}[source]
    38. carsoon ◴[] No.46211403[source]
    I think that even when this is used, people should include "(translated by LLM)" for transparency. When you use an intermediate layer, there is always bias.

    I've written blog articles using HTML and asked LLMs to change certain HTML structure, and they ALSO tried to change the wording.

    If a user doesn't speak a language well, they won't know whether their meaning was altered.

    39. Wowfunhappy ◴[] No.46211903{3}[source]
    ...I'm not sure I agree. I sometimes have a lot of trouble understanding what non-native English speakers are trying to say. I appreciate that they're doing their best, and as someone who can only speak English, I have the utmost respect for anyone who knows multiple languages, but I just find it really hard.

    Some AI translation is so good now that I do think it might be the better option. If they try to write in English and mess up, the information is just lost; there's nothing I can do to recover the real meaning.

    40. gertlex ◴[] No.46211966{3}[source]
    I haven't had a reason to use Google Translate in years, so will ask: Have they opted to not use/roll out modern LLM translation capabilities in the Google Translate product?
    replies(1): >>46212394 #
    41. deaux ◴[] No.46212394{4}[source]
    As of right now, correct.
    42. Terr_ ◴[] No.46212401{4}[source]
    I think we should distinguish between two features when judging what's good or hated:

    1. An automatic translation feature.

    2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.

    The only problem I see with #2 involves malicious usage, where the author is out to deliberately sow confusion/outrage, or is trying to evade moderation by presenting fundamentally different messages in the two versions.

    43. fouc ◴[] No.46213296{3}[source]
    Google Translate hasn't moved to LLM-style translation yet, unfortunately