    881 points by embedding-shape | 13 comments

    As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

    While the guidelines were written (and iterated on) during a different time, it seems like it might be time to discuss whether those sorts of comments should be welcome on HN.

    Some examples:

    - https://news.ycombinator.com/item?id=46164360

    - https://news.ycombinator.com/item?id=46200460

    - https://news.ycombinator.com/item?id=46080064

    Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).

    What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?

    gortok ◴[] No.46206694[source]
    While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.

    At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

    replies(19): >>46206849 #>>46206977 #>>46207007 #>>46207266 #>>46207964 #>>46207981 #>>46208275 #>>46208494 #>>46208639 #>>46208676 #>>46208750 #>>46208883 #>>46209129 #>>46209200 #>>46209329 #>>46209332 #>>46209416 #>>46211449 #>>46211831 #
    sbrother ◴[] No.46206849[source]
    I strongly agree with this sentiment and I feel the same way.

    The one exception for me, though, is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they imbue their output with that "AI style". I'm not sure what the solution here is, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

    replies(11): >>46206883 #>>46206949 #>>46206957 #>>46206964 #>>46207130 #>>46207590 #>>46208069 #>>46208723 #>>46209062 #>>46209658 #>>46211403 #
    1. parliament32 ◴[] No.46208069[source]
    > I'm not sure what the solution here is

    The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!
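
    If you want the same thing scriptable rather than through the web UI, here is a minimal sketch using the google-cloud-translate Python client (the credentials setup and the language pair are assumptions on my part, not something from this thread):

        from google.cloud import translate_v2 as translate

        # Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
        client = translate.Client()

        result = client.translate(
            "Putting a whole paragraph through at once keeps the phrasing natural.",
            source_language="en",
            target_language="de",
            format_="text",  # plain text in/out, no HTML entity escaping
        )
        print(result["translatedText"])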

    replies(4): >>46208718 #>>46208816 #>>46209455 #>>46209822 #
    2. akavi ◴[] No.46208718[source]
    You are aware that, insofar as AI chat apps are "hallucinatory text generator(s)", so is Google Translate, right?

    (while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)

    replies(3): >>46208784 #>>46209225 #>>46213296 #
    3. swiftcoder ◴[] No.46208784[source]
    > it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT

    The objective of that model, however, is quite different from that of a chat LLM: it is trained to reproduce the source text in the target language as faithfully as possible, not to generate an open-ended continuation.

    4. smallerfish ◴[] No.46208816[source]
    Google Translate doesn't hold a candle to LLMs at translating between even common languages.
    5. parliament32 ◴[] No.46209225[source]
    I have seen Google Translate hallucinate exactly zero times over thousands of queries over the years. Meanwhile, LLMs emit garbage roughly 1/3 of the time, in my experience. Can you provide an example of Translate hallucinating something?
    replies(2): >>46210006 #>>46210018 #
    6. lurking_swe ◴[] No.46209455[source]
    IMO ChatGPT is a much better translator, especially if you're using one of their normal models like 5.1. I've used it many times with an obscure and difficult Slavic language that I'm fluent in, for example, and ChatGPT nailed it whereas Google Translate sounded less natural.

    The big difference? I could easily prompt the LLM with “I’d like to translate the following into language X. For context, this is a reply to their email on topic Y, and Z is female.”

    Doing even a tiny bit of prompting will easily get you better results than Google Translate. Some languages have words with multiple meanings, and the context of the sentence/topic is crucial. So is gender in many languages! You can't provide any hints like that to Google Translate, especially if you are starting from an ungendered language like English.
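
    Concretely, the kind of call I mean looks something like this (a sketch against OpenAI's Python SDK; the model name and prompt wording are assumptions, any capable model and phrasing should work):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def translate_with_context(text: str, target: str, context: str) -> str:
            """Translate text, letting the model use extra hints (topic, gender, register)."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name; substitute whatever you use
                messages=[
                    {"role": "system",
                     "content": f"Translate the user's message into {target}. "
                                f"Context: {context}. Preserve the tone and "
                                "output only the translation."},
                    {"role": "user", "content": text},
                ],
            )
            return resp.choices[0].message.content

        reply = translate_with_context(
            "Thanks, see you on Friday!",
            target="Polish",
            context="a reply to Z's email about topic Y; Z is female",
        )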

    I do still use Google Translate, though, when my phone is offline or when I'm translating very long text. LLMs perform poorly with larger context windows.

    7. Kim_Bruning ◴[] No.46209822[source]
    Google Translate used to be the best, but it's essentially outdated technology now, surpassed by even small open-weight multilingual LLMs.

    Caveat: the remaining thing to watch out for is that some LLMs are not, by default, prompted to translate faithfully, due to (indeed) their hallucination and summarization tendencies.

    * Check a given LLM with language pairs you are familiar with before you commit to using one in situations you are less familiar with (a round-trip sketch of this kind of check follows below).

    * Always proofread if you are at all able to!

    Ultimately you should be responsible for your own posts.
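
    One cheap version of that check is a round trip: translate out of a language you know and back again, then eyeball the drift. A self-contained sketch against OpenAI's Python SDK (the model name and the language pair are assumptions):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set

        def llm_translate(text: str, target: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model; use whichever one you're vetting
                messages=[
                    {"role": "system",
                     "content": f"Translate the user's message into {target}. "
                                "Output only the translation, nothing else."},
                    {"role": "user", "content": text},
                ],
            )
            return resp.choices[0].message.content

        # English -> Dutch -> English; big drift suggests the model is
        # summarizing or hallucinating rather than translating.
        original = "The invoice was already paid last Tuesday, not this week."
        back = llm_translate(llm_translate(original, "Dutch"), "English")
        print(original)
        print(back)

    A clean round trip isn't proof of a faithful translation (errors can cancel out), but a mangled one is a cheap red flag before you trust a model in a language you can't read.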

    replies(2): >>46211117 #>>46211966 #
    8. Teever ◴[] No.46210006{3}[source]
    Every single time it mistranslates something, it is hallucinating.
    9. lazide ◴[] No.46210018{3}[source]
    Agreed, and I use G translate daily to handle living in a country where 95% of the population doesn’t speak any language I do.

    It occasionally messes up, but not by hallucinating; it's usually grammar salad because what I put into it was somewhat ambiguous. It's also terrible with genders in Romance languages, but then that is a nightmare for humans too.

    Pat pat, bot.

    11. gertlex ◴[] No.46211966[source]
    I haven't had a reason to use Google Translate in years, so I'll ask: have they opted not to use or roll out modern LLM translation capabilities in the Google Translate product?
    replies(1): >>46212394 #
    12. deaux ◴[] No.46212394{3}[source]
    As of right now, correct.
    13. fouc ◴[] No.46213296[source]
    Google Translate hasn't moved to LLM-style translation yet, unfortunately.