I think that there is a bubble but it's shaped more like the web bubble and less like the crypto bubble.
I don't think LLM capabilities have to reach human-equivalent for their uses to multiply for years to come.
I don't think LLM technology as it exists can reach AGI by the simple addition of more compute power, and moreover, I don't think adding compute will necessarily provide a proportionate benefit (indeed, someone pointed out that the current talent race acknowledges that brute force has likely had its day and some other "magic" is needed. Unlike brute force, technical advances can't be summoned at will).
I think overstating their breadth is core to the ongoing hype cycle. Everyone wants to believe, or wants a buyer to believe, that a machine which can generate documents about X is just as good (and reliable) as actually creating X.
There are still massive gains to be had from scaling up - but frontier training runs have converged on "about the largest model that we can fit into our existing hardware for training and inference". Going bigger than that comes with non-linear cost increases. The next generations of AI hardware are expected to push that envelope.
The reason major AI companies prioritize things like reasoning modes and RLVR over scaling up the base models is that reasoning and RLVR deliver real-world performance gains more cheaply and quickly. Once scaling up becomes cheaper, or once the gains you can squeeze out of RLVR are depleted, they'll return to scaling up.
A machine that can produce a valid CAD document can get the actual product built (even if the building requires manual assembly).