https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation in which money is systematically siphoned away from workers and toward billionaires via e.g. hedge funds, bailouts, dividend payouts, underpayment, wage theft, etc. And the more they inflate this bubble, the more money they can extract from workers. As such it is not really a bet, but rather the cost of doing business. Profits are guaranteed as long as workers are willing to keep working for them.
They want to be the Google in this scenario.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
I found there was more than just couches on the WeWork private jets:
https://www.inverse.com/input/tech/weworks-adam-neumann-got-...
On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, around the time the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
The AI, theoretically having the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's kept guarded, every other company becomes instantly worthless in the long term; if not, anyone with a bootstrap level of compute will also be able to do anything, given a long enough time frame.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
There’s no guarantee that the singularity makes economic sense for humans.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that follows from the second disagreement, which is about how useful this tech is and how much potential it has. It clearly has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody, and at the opposite extreme is the view that it's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really disputes that it's a bubble.
Currently, the question is not whether one technology will outpace another in the "AI" hype cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but the hype does create a perceived asymmetry with skilled-labor pools. That alone is valuable leverage for a corporation, and people are getting fired or ripped off in anticipation of the rise of real "AI".
https://www.youtube.com/watch?v=_zfN9wnPvU0
One day real "AI" may exist, but a LLM or current reasoning model is unlikely going to make that happen. It is absolutely hilarious there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3
Monopoly in this field is impossible; your product won't ever be so good that the competition stops making sense.
Add to this that AGI is impossible with LLMs...
Fascinating! I unearthed the TL;DR for anyone else interested:
* WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.
* The G650ER was customized with two bedrooms and a conference table.
* Neumann used the jet extensively for global travel, meetings, and family trips.
* The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.
Sources:
https://www.vanityfair.com/hollywood/2022/03/adam-neumann-re...
https://nypost.com/2021/07/17/the-shocking-ways-weworks-ex-c...
------
What's crazy is that with the changes to IRC § 174 that took effect for tax years beginning after 2021, most software R&D spending is treated as a capital investment and can't be immediately expensed. It has to be amortized over 5 years.
I don't know how that $11.5B number was derived, but I would wager that the net loss on the income statement is a lot lower than the net negative cash flow on the cash flow statement.
If that $11.5B is net profit/loss, then the portion of the expenses that is software R&D could be up to 5x larger if it weren't for the new amortization rule.
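For concreteness, here's a minimal sketch (in Python) of how that amortization schedule works. It assumes domestic software R&D amortized straight-line over 5 years with a half-year convention in the first year; the $10B spend is purely hypothetical, and it ignores every other wrinkle of the tax code (this is an illustration, not tax advice).

    # Simplified illustration of 5-year amortization under IRC § 174,
    # assuming a half-year convention in the year the costs are incurred.
    def amortization_schedule(spend, years=5):
        """Yearly deductions for a single year of domestic software R&D spend."""
        annual = spend / years
        # Half of the annual amount in year 1, the other half in year `years + 1`.
        return [annual / 2] + [annual] * (years - 1) + [annual / 2]

    spend = 10_000_000_000  # hypothetical $10B of software R&D in one year
    print([f"${d / 1e9:.1f}B" for d in amortization_schedule(spend)])
    # -> ['$1.0B', '$2.0B', '$2.0B', '$2.0B', '$2.0B', '$1.0B']

Under that schedule only $1B of a hypothetical $10B year-one spend is deductible in year one, which is the mechanism the comment above is pointing at: the reported expense can be a small fraction of the cash actually going out the door.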
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
1. Performance of AI tools is improving, but only marginally so in practice.
2. If human labor were replaced, it would be the start of global societal collapse, so any winnings would be moot.
Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?
ChatGPT was mind-blowing when you first used it. WeWork is a real estate play fronted by a self-aggrandizing, self-dealing CEO.
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI can build a smart AI, it would mean that the stupid AI is actually smart; otherwise it wouldn't have been able to.
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (i.e. increase its internal order and complexity) on its own, provided you constantly supply it with energy. Evolution (or "life") is an example of such a system. It is conceivable that there is a point at which an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, and so on. Every time you run inference to edit the training data and then train on it, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (the increase in internal model complexity and intelligence) can come from.
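A toy sketch of that loop, with every function standing in as a hypothetical placeholder for a real data-curation or training pipeline (nothing here is any lab's actual method), might look like this:

    # Toy sketch of the self-improvement loop described above.
    def train(data):
        # Hypothetical stand-in: "fit" a model on the current corpus.
        return {"skill": sum(data) / len(data)}

    def refine_data(model, data):
        # Hypothetical stand-in: use the current model to filter/augment
        # the corpus, nudging its quality upward (inference cost each round).
        return [x + model["skill"] * 0.1 for x in data]

    data = [1.0, 2.0, 3.0]       # stand-in for an initial training corpus
    model = train(data)
    for round_ in range(5):       # each round: inference pass + retraining
        data = refine_data(model, data)
        model = train(data)
        print(f"round {round_ + 1}: skill = {model['skill']:.2f}")

Each pass through the loop pays for its improvement with energy spent on inference (refining the data) and on retraining, which is where the claimed entropy decrease would have to come from.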
There was a point where, because of Tesla's enormous profits, it was seen as OK for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right, though, that they've been criticized for it and have paid the (stock) price for it.
In a similar vein, LLMs/AI are clearly impressive technologies that can be deployed profitably. Spending billions on a model, however, may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.