
LLM Inevitabilism

(tomrenner.com)
1613 points | SwoopsFromAbove
delichon ◴[] No.44567913[source]
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
replies(17): >>44567949 #>>44567951 #>>44567961 #>>44567992 #>>44568002 #>>44568006 #>>44568029 #>>44568031 #>>44568040 #>>44568057 #>>44568062 #>>44568090 #>>44568323 #>>44568376 #>>44568565 #>>44569900 #>>44574150 #
rafaelmn ◴[] No.44568029[source]
If you had claimed that AI was inevitable in the 80s and invested, or claimed ten years ago that people would inevitably move to VR, you would be shit out of luck. Zuck is still burning billions on it with nothing to show for it and a bad outlook. Even Apple tried it and hilariously missed the demand estimate. The only potential bailout for this tech is AR, but that's still years away from the consumer market and widespread adoption, and it will probably have very little to do with the shit that's getting built for VR, because it's a completely different experience. But I'm sure some of the tech/UX will carry over.

Tesla stock has been riding on the self-driving robo-taxi meme for a decade now? How many Teslas are earning passive income while the owner is at work?

Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech that drew billions in investment, and plenty of industry bubbles only look mistimed in hindsight.

replies(6): >>44568330 #>>44568622 #>>44568907 #>>44574172 #>>44580115 #>>44580141 #
gbalduzzi ◴[] No.44568330[source]
None of the "failed" innovations you cited were anywhere near the adoption rate of current LLMs.

As much as I don't like it, this is the actual difference: LLMs are already good enough to be a very useful and widespread technology. They can become even better, but even if they don't, there are plenty of use cases for them.

VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but they weren't yet at all.

That's a big difference.

replies(5): >>44568501 #>>44568566 #>>44568888 #>>44570634 #>>44573465 #
fzeroracer ◴[] No.44568888[source]
> None of the "failed" innovations you cited were even near the adoption rate of current LLMs.

The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment aimed at getting people addicted so that money can be siphoned off them through subscription plans or pay-per-use. The worst people you can think of on every C-suite team shove it down our throats because they use it to write an email every now and then.

The places where LLMs have achieved widespread adoption are environments that abuse the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals, at massive societal cost; true believers who are the worst coders you can imagine, shoveling shit into codebases by the truckful; and scammers who have realized this is the new gold rush.

replies(1): >>44569865 #
Applejinx ◴[] No.44569865[source]
Oh, it gets worse. The next stage is a sort of dual mode of personhood: the AI is a 'person' whenever that framing blocks objections to the constant use of LLMs for all things, so it becomes anathema to deny the AI's basic superhumanness.

But it's NOT a person when it's time to 'tell the AI' that you have its puppy in a box filled with spikes, and that for every mistake it makes you will stab the puppy with the spikes a little more and tell it the puppy's reactions. That becomes normal if it elicits a slightly more desperate 'person' out of the AI while producing work.

At which point the meat-people who've taught themselves to normalize this workflow can decide that opponents of AI are so broken in the head as to constitute non-player characters (see: the useful memes to that effect) and are therefore NOT people, and so it would be good to get rid of the non-people muddying up the system (see: human history).

Told you it gets worse. And all the while, the language models are sort of blameless, because there's nobody there. Torturing an LLM to elicit responses does harm a person, but that person is the one constructing the prompts, not a hypothetical victim somewhere in the clouds of nobody.

All that happens is a human trains themselves to dehumanize, and the LLM thing is a recipe for doing that AT SCALE.

Great going, guys.