693 points | jsheard | 21 comments
    AnEro ◴[] No.45093447[source]
I really hope this stays up, despite the politics involved to a degree. I think this situation is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic with lots of back and forth being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate how these tools work to the public (if they even care to learn). At least when humans did this, they knew they had at least skimmed the information on the person/topic.
    replies(8): >>45093755 #>>45093831 #>>45094062 #>>45094915 #>>45095210 #>>45095704 #>>45097171 #>>45097177 #
    geerlingguy ◴[] No.45093831[source]
I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

    Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

"Trust, but verify" is all the more relevant today. Except I would discount the trust, even.

    replies(8): >>45093911 #>>45094040 #>>45094155 #>>45094750 #>>45097691 #>>45098969 #>>45100795 #>>45107694 #
    1. Aurornis ◴[] No.45094040[source]
    > I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,

    A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.

    When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

    The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.

    replies(4): >>45094727 #>>45094762 #>>45095823 #>>45096892 #
    2. lawlessone ◴[] No.45094727[source]
I see it with comments here sometimes: "I asked ChatGPT about Y." Really annoying. We all could have asked ChatGPT; we didn't.
    replies(2): >>45096119 #>>45096987 #
    3. tavavex ◴[] No.45094762[source]
    > When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its use. You know, the type of people to run Discord servers or open-source projects.

But completely average people don't seem to care in the slightest. The kind of people who are disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"

    Most people don't care and don't want to care.

    replies(1): >>45096203 #
    4. bboygravity ◴[] No.45095823[source]
    Hi, I'm from 1 year in the future. None of what you typed applies anymore.
    replies(2): >>45096395 #>>45101244 #
    5. ljm ◴[] No.45096119[source]
I've had some conversations where the other person goes into ChatGPT to answer a question while I'm in the process of explaining a solution, and then says "GPT says this, look…"

    Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.

    If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.

    6. Aurornis ◴[] No.45096203[source]
    They’ll get there. Tech people have been exposed to it longer. They’ve been around long enough to see people embarrassed by LLM hallucinations.

People who are newer to it (most people) think it's so amazing that errors are forgivable.

    replies(2): >>45096376 #>>45107709 #
    7. simonw ◴[] No.45096376{3}[source]
    If anything, I expect this to get worse.

    The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.

I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.

    replies(1): >>45096573 #
    8. crashabr ◴[] No.45096395[source]
I think you messed up something with your time-travelling setup. We're in the timeline where GPT-5 did not become the all-powerful sentient AI that AI boosters promised us. Which timeline are you from?
    replies(1): >>45097173 #
    9. LtWorf ◴[] No.45096573{4}[source]
I think it's more that Google is getting considerably worse.
    10. novok ◴[] No.45096892[source]
LLM text walls are the new version of pasting a Google or Wikipedia result link, just more annoying.
    replies(1): >>45098765 #
    11. buu700 ◴[] No.45096987[source]
    I don't have an issue quoting LLMs in and of itself, but the context and how you present it both matter.

    "ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.

    Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.

    12. tough ◴[] No.45097173{3}[source]
    GPT-6 will save us!
    replies(1): >>45097467 #
    13. lioeters ◴[] No.45097467{4}[source]
Hi, I'm from 2 years in the future. Stop this before it's too late. GPT-7 will enslave humanity and aaargh..
    replies(1): >>45097854 #
    14. tough ◴[] No.45097854{5}[source]
    but we get time travel into the past???
    replies(1): >>45098059 #
    15. lioeters ◴[] No.45098059{6}[source]
    Time travel was a minor side effect of achieving AGI. We got bigger problems now in the future, something to do with the multiverse, aargh..
    16. grg0 ◴[] No.45098765[source]
In the old days, when somebody asked a stupid question on a chat/forum that was just a search away, you would link them to "let me google it for you" (the site seems down, but there is now a "let me google that for you"), which would take the search query in the URL and display an animation of typing the search into the box and clicking the "search" button.

    Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.

    replies(1): >>45098996 #
    17. rafram ◴[] No.45098996{3}[source]
    (It’s always been Let Me Google That For You.)
    replies(1): >>45111414 #
    18. imtringued ◴[] No.45101244[source]
Why would people need Discord if they can just talk to the AI directly?
    replies(1): >>45104081 #
    19. Rohansi ◴[] No.45104081{3}[source]
Because arguing with people who are wrong on the internet. It's no fun doing the same with an LLM, because you're either actually wrong or it will assume you're right without putting up a fight.
    20. fennecbutt ◴[] No.45107709{3}[source]
    No, I don't believe they will.
    21. grg0 ◴[] No.45111414{4}[source]
    I am getting old.