I think there is a bubble, but it's shaped more like the web bubble and less like the crypto bubble.
I don't think LLM capabilities have to reach human-equivalence for their uses to multiply for years to come.
I don't think LLM technology as it exists can reach AGI through the simple addition of more compute, and moreover, I don't think adding compute will necessarily provide a proportionate benefit (indeed, someone pointed out that the current talent race is an acknowledgment that brute force has likely had its day and some other "magic" is needed. Unlike brute force, technical advances can't be summoned at will).
Regarding LLMs, there are two concerns. First, current products don't have any killer feature to lock in customers, so people can easily jump ship. Second, diminishing returns: if there isn't clear progress with models, then free/small, maybe even local, models will fill most people's needs.
People are speculating that even OAI is burning more money than they make; it's hard to say what will happen if customer churn increases. Take me, for example: I never paid for LLMs specifically and didn't use them in any major way, but I used free Claude to test how it works, maybe to incorporate it into my workflow. I might have transitioned to the paid tier in the future. But recently someone noted that Google cloud storage includes "free" Gemini Pro, and I've switched to it, because why not, I'm already paying for the storage part. And there was nothing keeping me with Anthropic. Actually, that name alone is revolting imo.

I wrote this as an example of what happens when monsters like Google or Microsoft or Apple start bundling their solutions (and advertising them properly, unlike Google): specialized companies, OAI included, will feel very, very bad, given their insane expenses and investments.
I think overstating their broadness is core to the ongoing hype cycle. Everyone wants to believe (or wants a buyer to believe) that a machine which can generate documents about X is just as good (and reliable) as actually creating X.
If that's a genuine question: Facebook sells ads, information, and influence (e.g. to political parties). It's a very profitable enterprise. In 2024 Meta made $164B in revenue [0], and they're still growing at ~16% year-over-year.
[0] https://investor.atmeta.com/investor-news/press-release-deta...
There are still massive gains to be had from scaling up - but frontier training runs have converged on "about the largest model that we can fit into our existing hardware for training and inference". Going bigger than that comes with non-linear cost increases. The next generations of AI hardware are expected to push that envelope.
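To make "non-linear cost increases" concrete, here's a rough back-of-envelope sketch (my own illustration, not from this thread) using the common C ≈ 6·N·D approximation for training FLOPs, with tokens scaled in proportion to parameters, Chinchilla-style. All numbers are illustrative:

```python
# Back-of-envelope training-cost sketch using the common C ~ 6 * N * D
# approximation (N = parameters, D = training tokens). Assumes a
# Chinchilla-style ratio of ~20 tokens per parameter; these are rules of
# thumb, not figures from any real training run.

def training_flops(params: float, tokens_per_param: float = 20.0) -> float:
    tokens = params * tokens_per_param
    return 6.0 * params * tokens  # grows quadratically in params

for params in (7e9, 70e9, 700e9):
    print(f"{params/1e9:>5.0f}B params -> ~{training_flops(params):.2e} FLOPs")

# 10x the parameters (with compute-optimal token counts) means ~100x the
# training compute -- which is why "just go bigger" stops being cheap.
```

Under these assumptions, each 10x jump in model size costs roughly 100x the training compute, before even counting the inference-side costs of serving the bigger model.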
The reason major AI companies prioritize things like reasoning modes and RLVR (reinforcement learning from verifiable rewards) over scaling up the base models is that reasoning and RLVR deliver real-world performance gains cheaper and faster. Once scaling up becomes cheaper, or once the gains you can squeeze out of RLVR are depleted, they'll get back to scaling up.
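For anyone unfamiliar with RLVR, the core idea is just that the reward comes from an automatic checker rather than a human rater or a learned reward model. A minimal toy sketch, where `sample_answer` is a hypothetical stand-in for sampling from an actual LLM:

```python
# Toy RLVR loop: sample candidate answers, score each with a programmatic
# verifier, and report how many pass. `sample_answer` is a hypothetical
# stand-in for a real model; the verifier here just checks arithmetic.
import random

def sample_answer(prompt: str) -> str:
    # Placeholder policy: a real setup would sample from a language model.
    return str(random.randint(0, 20))

def verifiable_reward(answer: str) -> float:
    # "Verifiable" means correctness can be checked mechanically,
    # with no human judgment in the loop.
    return 1.0 if answer.strip() == str(7 + 5) else 0.0

prompt = "What is 7 + 5?"
rollouts = []
for _ in range(16):
    answer = sample_answer(prompt)
    rollouts.append((answer, verifiable_reward(answer)))

passed = sum(reward for _, reward in rollouts)
# A real trainer (e.g. PPO or GRPO) would use these rewards to update the
# policy; this sketch only measures the pass rate.
print(f"{int(passed)}/{len(rollouts)} samples passed the verifier")
```

The appeal is obvious: anywhere you have a cheap mechanical checker (math, code tests, formal proofs), you can mint training signal without paying for more pretraining compute.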
A machine which can define a valid CAD document can get the actual product built (even if the building requires manual assembly).
It's Meta now, and they own a lot of "brands" besides Facebook: Instagram, WhatsApp, Oculus, Giphy, etc.