LLMs get results. None of Yann LeCun's pet projects do. He has had ample time to prove that his approach is promising, and he hasn't.
Frontier models are all profitable on inference: it's sold at a damn good margin, and the amount of inference AI companies sell keeps rising. That growth necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, which demands even more spending.
A mistake I see people make over and over again is tracking the spending while overlooking the revenue altogether. Which sure is weird: you don't go from $0B to $12B in revenue in a few years without having a product people want to buy.
And I find all the talk of the "non-deterministic, hallucinatory nature" of LLMs overblown, because humans suffer from all of that too, just less severely, on top of a number of other issues current AIs don't have.
Nonetheless, we still use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.