
    LLM Inevitabilism

    (tomrenner.com)
    1612 points SwoopsFromAbove | 15 comments
    delichon ◴[] No.44567913[source]
    If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
    replies(17): >>44567949 #>>44567951 #>>44567961 #>>44567992 #>>44568002 #>>44568006 #>>44568029 #>>44568031 #>>44568040 #>>44568057 #>>44568062 #>>44568090 #>>44568323 #>>44568376 #>>44568565 #>>44569900 #>>44574150 #
    rafaelmn ◴[] No.44568029[source]
    If you claimed that AI was inevitable in the 80s and invested, or claimed people would be inevitably moving to VR 10 years ago - you would be shit out of luck. Zuck is still burning billions on it with nothing to show for it and a bad outlook. Even Apple tried it and hilariously missed the demand estimate. The only potential bailout for this tech is AR, but that's still years away from consumer market and widespread adoption, and will probably have very little to do with the stuff being built for VR, because it's a completely different experience. But I am sure some of the tech/UX will carry over.

    Tesla stock has been riding on the self-driving robo-taxi meme for a decade now? How many Teslas are earning passive income while the owner is at work?

    Cherry-picking the stuff that worked in retrospect is stupid: plenty of people swore by the inevitability of some tech with billions in investment behind it, leaving industry bubbles that look mistimed in hindsight.

    replies(6): >>44568330 #>>44568622 #>>44568907 #>>44574172 #>>44580115 #>>44580141 #
    1. gbalduzzi ◴[] No.44568330[source]
    None of the "failed" innovations you cited were even near the adoption rate of current LLMs.

    As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widely spread technology. They can become even better, but even if they don't there are plenty of use cases for them.

    VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but they weren't yet at all.

    That's a big difference

    replies(5): >>44568501 #>>44568566 #>>44568888 #>>44570634 #>>44573465 #
    2. weatherlite ◴[] No.44568501[source]
    > They can become even better, but even if they don't there are plenty of use cases for them.

    If they don't become better, we are left with a big but not huge change: productivity gains of around 10 to 20 percent in most knowledge work. That's huge for sure, but in my eyes the internet, and the PC revolution before it, were more transformative. If LLMs do become better, get so good they replace huge chunks of knowledge workers and then go out into the physical world, then yeah... that would be the fastest transformation of the economy in history imo.

    replies(2): >>44569341 #>>44579489 #
    3. alternatex ◴[] No.44568566[source]
    The other inventions would have quite the adoption rate if they were similarly subsidized as current AI offerings. It's hard to compare a business attempting to be financially stable and a business attempting hyper-growth through freebies.
    replies(4): >>44568631 #>>44569806 #>>44570375 #>>44576561 #
    4. ascorbic ◴[] No.44568631[source]
    The lack of adoption for those wasn't (just) the price. They just weren't very useful.
    5. fzeroracer ◴[] No.44568888[source]
    > None of the "failed" innovations you cited were even near the adoption rate of current LLMs.

    The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment in attempting to get people addicted so that they can siphon money off of them with subscription plans or by forcing them to pay for each use. The worst people you can think of on every C-suite team force-push it down our throats because they use it to write an email every now and then.

    The places LLMs have achieved widespread adoption are environments that abuse the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals, at massive societal cost; true believers who are the worst coders you can imagine, shoveling shit into codebases by the truckful; and scammers realizing this is the new gold rush.

    replies(1): >>44569865 #
    6. TeMPOraL ◴[] No.44569341[source]
    FWIW, LLMs have been getting better so fast that we've only barely begun figuring out more advanced ways of applying them. Even if they were to plateau right now, there'd still be years of improvements coming from different ways of tuning, tweaking, combining, chaining and applying them - which we don't invest much into today, because so far it's been cheaper to wait a couple of months for the next batch of models that can handle what the previous ones could not.
    7. a_wild_dandan ◴[] No.44569806[source]
    > The other inventions would have quite the adoption rate if they were similarly subsidized as current AI offerings.

    No, they wouldn't. The '80s saw obscene investment in AI (then "expert systems") and yet nobody's mom was using it.

    > It's hard to compare a business attempting to be financially stable and a business attempting hyper-growth through freebies.

    It's especially hard to compare since it's often those financially stable businesses doing said investments (Microsoft, Google, etc).

    ---

    Aside: you know "the customer is always right [in matters of taste]"? It's been weirdly difficult getting bosses to understand the brackets part, and HN folks the first part.

    replies(2): >>44574479 #>>44575992 #
    8. Applejinx ◴[] No.44569865[source]
    Oh, it gets worse. The next stage is a sort of dual mode of personhood: AI is a 'person' when the issue is impeding the constant use of LLMs for all things, so it becomes anathema to deny the basic superhumanness of the AI.

    But it's NOT a person when it's time to 'tell the AI' that you have its puppy in a box filled with spikes and for every mistake it makes you will stab it with the spikes a little more and tell it the reactions of the puppy. That becomes normal, if it elicits a slightly more desperate 'person' out of the AI for producing work.

    At which point the meat-people who've taught themselves to normalize this workflow can decide that opponents of AI are clearly so broken in the head as to constitute non-player characters (see: useful memes to that effect) and therefore are NOT people: and so, it would be good to get rid of the non-people muddying up the system (see: human history)

    Told you it gets worse. And all the while, the language models are sort of blameless, because there's nobody there. Torturing an LLM to elicit responses is harming a person, but it's the person constructing the prompts, not a hypothetical victim somewhere in the clouds of nobody.

    All that happens is a human trains themselves to dehumanize, and the LLM thing is a recipe for doing that AT SCALE.

    Great going, guys.

    9. Nebasuke ◴[] No.44570375[source]
    They really wouldn't. Even people who BOUGHT VR are barely using it. Giving everyone free VR headsets won't make people suddenly spend a lot of time in VR-land without there actually being applications that are useful to most people.

    ChatGPT is so useful, people without any technology background WANT to use it. People who are just about comfortable with the internet see the applications and use it to ask questions (about recipes, home design, solving small house problems, etc.).

    10. techpineapple ◴[] No.44570634[source]
    I don’t see this as that big a difference. Of course AI/LLMs are here to stay, but the hundreds of billions in bets on LLMs don’t assume linear growth.
    11. rafaelmn ◴[] No.44573465[source]
    OK, but what does adoption rate vs. real-world impact tell us here?

    With all the insane exposure and downloads, how many people can't even be convinced to pay $20/month for it? The value proposition to most people is that low. So you are basically betting on LLMs making a leap in performance to pay for the investments.

    12. dmbche ◴[] No.44574479{3}[source]
    I don't think you understand the relative amounts of capital invested in LLMs compared to expert systems in the 80s.

    And those systems were never "commodified" - your average mom is forcefully exposed to LLMs with every Google search and can interact with LLMs for free instantly anywhere in the world - and we're comparing that to what was basically a luxury product for nerds?

    Not to forget that those massive companies are also very heavy on advertising - I don't think your average mom in the 80s heard of those systems multiple times a day, from multiple acquaintances AND social media and news outlets.

    13. ben_w ◴[] No.44575992{3}[source]
    > Aside: you know "the customer is always right [in matters of taste]"? It's been weirdly difficult getting bosses to understand the brackets part, and HN folks the first part.

    Something I struggle to internalise, even though I know it in theory.

    Customers can't be told they're wrong, and the parenthetical I've internalised, but for non-taste matters they can often be so very wrong, so often… I know I need to hold my tongue even then owing to having merely nerd-level charisma, but I struggle to… also owing to having merely nerd-level charisma.

    (And that's one of three reasons why I'm not doing contract work right now).

    14. elevatortrim ◴[] No.44576561[source]
    Most people are using LLMs because they fear it will be the future and that they will miss out if they do not learn it now - even though they are aware they are not more productive, they can't say that in a business environment.
    15. guappa ◴[] No.44579489[source]
    > Productivity gains of around 10 to 20 percent in most knowledge work.

    Wasn't there a recent study that showed people perceived a 20% increase while the clock showed a 20% decrease?