    378 points todsacerdoti | 23 comments
    xnorswap No.44984684
    I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

    It didn't help that the LLM was confidently incorrect.

    The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

    In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
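
    To make that concrete, here's a made-up sketch (names are hypothetical, not from any real codebase):

        # Legacy config key is called "foo", but tribal knowledge says it
        # actually toggles the Frob subsystem. Nothing named Foo exists.
        legacy_config = {"foo": True}

        class Frob:
            def __init__(self, enabled: bool) -> None:
                self.enabled = enabled

        # Humans who "just know" wire it up like this:
        frob = Frob(enabled=legacy_config["foo"])

        # An LLM reading only the code tends to go hunting for a "frob"
        # setting, or to invent a Foo class to match the key name.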

    I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

    With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

    replies(14): >>44984808 #>>44984938 #>>44984944 #>>44984959 #>>44985002 #>>44985018 #>>44985019 #>>44985160 #>>44985639 #>>44985759 #>>44986197 #>>44986656 #>>44987830 #>>44989514 #
    1. BitwiseFool No.44984808
    >"It didn't help that the LLM was confidently incorrect."

    Has anyone else ever dealt with a somewhat charismatic know-it-all who knows just enough to give authoritative answers? LLM output often reminds me of such people.

    replies(7): >>44984914 #>>44985008 #>>44985013 #>>44985034 #>>44985093 #>>44985184 #>>44985564 #
    2. bigfishrunning No.44984914
    If those people are wrong enough times, they are either removed from the organization or they scare anyone competent away from the organization, which then dies. LLMs seem to be getting a managerial pass (because the cost, subsidized by mountains of VC money, is very low for now), so only the latter outcome is likely.
    3. XxiXx No.44985008
    There's even a name for such a person: Manager
    4. SoftTalker No.44985013
    Yes, they have been around forever; they are known as bullshitters.

    The bullshitter doesn't care whether what he says is correct or not, as long as it's convincing.

    https://en.wikipedia.org/wiki/On_Bullshit

    5. pmarreck No.44985034
    Sounds like every product manager I've ever had, lol (sorry, PMs!)
    6. DamnInteresting No.44985093
    Colloquially known as "bullshitters."[1]

    [1] https://dictionary.cambridge.org/us/dictionary/english/bulls...

    7. SamBam No.44985184
    That’s a great question — and one that highlights a subtle misconception about how LLMs actually work.

    At first glance, it’s easy to compare them to a charismatic “know-it-all” who sounds confident while being only half-right. After all, both can produce fluent, authoritative-sounding answers that sometimes miss the mark. But here’s where the comparison falls short — and where LLMs really shine:

    (...ok ok, I can't go on.)

    replies(3): >>44985537 #>>44986329 #>>44988471 #
    8. ryandrake No.44985537
    Most of the most charismatic, confident know-it-alls I have ever met have been in the tech industry. And not just the usual suspects (founders, managers, thought leaders, architects) but regular rank-and-file engineers. The whole industry is infested with know-it-alls. Hell, HN is infested with know-it-alls. So it's no surprise that one of the biggest products of the decade is an Automated Know-It-All machine.
    replies(1): >>44986836 #
    9. fluoridation No.44985564
    I'm pretty sure I'm that guy on some topics.
    replies(1): >>44989187 #
    10. mwigdahl No.44986329
    Perfect! You really got to the core of the matter! The only thing I noticed is that your use of the em-dash needs to not be bracketed with spaces on either end. LLMs—as recommended by most common style guides—stick to the integrated style that treats the em-dash as part of the surrounding words.
    replies(1): >>44986805 #
    11. matt_kantor No.44986805
    It bums me out that LLMs are ruining em dashes. I like em dashes and have used them for decades, but now I worry that when I do people will assume my writing is LLM output.

    What's next—the interrobang‽

    replies(2): >>44987878 #>>44995400 #
    12. flatb No.44986836
    Thereby self-correcting, perhaps.
    replies(1): >>44995598 #
    13. lcnPylGDnU4H9OF No.44987878
    I'm hoping it's not the semi-colon; I use that a lot.
    14. mvdtnz No.44988471
    This isn't funny or clever. Stop it.
    replies(1): >>44993895 #
    15. BitwiseFool No.44989187
    >"I'm pretty sure I'm that guy on some topics."

    The use of 'pretty sure' disqualifies you. I appreciate your humility.

    replies(1): >>44990157 #
    16. fluoridation No.44990157
    I don't know, man. I really don't know. I can't tell whether I'm really good at making inferences from tidbits of information, or really good at speaking confidently.
    replies(1): >>44992112 #
    17. jrs235 No.44992112
    I think I'm good at making inferences from tidbits of information (or so I think) but I don't think I'm good at speaking confidently, other than speaking confidently that I don't know everything.
    18. CursedSilicon No.44993895
    You're absolutely right! It's totally unfair to tease LLMs like that—they're just trying to do the best with how they're programmed. We should treat them with the same respect we give each other so that we can create a better world for everyone.
    replies(1): >>44995523 #
    19. HocusLocus No.44995400
    LLMs are not 'ruining' em dashes. The em dash is just a convenient device to unmask people who make critical judgements based on ridiculous and flimsy evidence.

    It is good they are being unmasked. You must avoid those people and warn your children about them. They are not safe to be around.

    replies(2): >>44995697 #>>45023547 #
    20. HocusLocus No.44995523
    Difficult to see people anthropomorphize LLMs undeservedly; it's an extension of the childhood trauma inflicted by The Brave Little Toaster. Inanimate objects projected into personhood and subjected to a cruel and indifferent world.

    Dangerous actually, the effect it had on children. Of course they loved it because it had a happy ending, but at what price?

    21. sigotirandolas No.44995598
    I'd say the opposite: LLMs are know-it-nothing machines that perfectly suit know-it-alls. Unlike with a human, it isn't that hard to get the machine to say what you want and then generate enough crap to 'defeat' any human challenger.
    22. matt_kantor No.44995697
    "Ruining" in the sense that "I worry that when I [use em dashes] people will assume my writing is LLM output".

    I'd feel the same if I were someone who naturally and frequently used phrases like "you're absolutely right", or, for a much more extreme analogy, if I were a Hindu living in Europe in the 1920s and then the Nazis came along and "ruined" the swastika for me.

    23. xnorswap No.45023547
    People always crawl out of the woodwork swearing blind that they've always used em dashes, but the truth is that actual em dash usage has exploded by a factor of thousands and is therefore, along with other markers, a strong indicator.

    It's not proof, but it is evidence.