Tesla stock has been riding the self-driving robo-taxi meme for a decade now. How many Teslas are earning passive income while the owner is at work?
Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech backed by billions in investment, and plenty of industry bubbles look merely mistimed in hindsight.
As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widespread technology. They can become even better, but even if they don't, there are plenty of use cases for them.
VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but they weren't yet at all.

That's a big difference.
The 'adoption rate' of LLMs is largely artificial, bolstered by billions of dollars of investment aimed at getting people hooked so that money can be siphoned off them through subscription plans or per-use fees. The worst people you can think of on every C-suite team force-push it down our throats because they use it to write an email every now and then.
The places where LLMs have achieved widespread adoption are: environments that abuse the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals, at massive societal cost; true believers who are the worst coders you can imagine, shoveling shit into codebases by the truckful; and scammers realizing this is the new gold rush.
But it's NOT a person when it's time to 'tell the AI' that you have its puppy in a box filled with spikes, and that for every mistake it makes you will stab the puppy a little more and describe its reactions. That becomes normal, if it elicits a slightly more desperate 'person' out of the AI when producing work.
At which point the meat-people who've taught themselves to normalize this workflow can decide that opponents of AI are so broken in the head as to constitute non-player characters (see: useful memes to that effect) and therefore are NOT people: and so, it would be good to get rid of the non-people muddying up the system (see: human history).
Told you it gets worse. And all the while, the language models themselves are more or less blameless, because there's nobody there. Torturing an LLM to elicit responses does harm a person, but that person is the one constructing the prompts, not some hypothetical victim up in the clouds.
All that happens is a human trains themselves to dehumanize, and the LLM thing is a recipe for doing that AT SCALE.
Great going, guys.