https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
On paper, whoever gets there first, with enough compute to hand over to the AI, wins the race.
The AI, theoretically having the capacity to do anything better than anyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; if not, anyone with a bootstrap level of compute will also be able to do anything, given a long enough time frame.
It's not a race for ROI; it's a race to have your name go in the book as one of the guys who first obsoleted the relationship between effort, willpower, intelligence, etc., and the ability to bring arbitrary change to the world.
There’s no guarantee that the singularity makes economic sense for humans.
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI can build a smart AI, that would mean the stupid AI was actually smart all along; otherwise it wouldn't have been able to.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy on its own, provided you constantly supply it with energy (negative entropy). An evolutionary algorithm (or "life" itself) is an example of such a system. It is conceivable that there is a point where an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, and so on. Every time you run inference to edit the training data and then retrain, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (the increase in internal model complexity and intelligence) can come from.
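A minimal sketch of that loop, assuming hypothetical `train` and `verify` callables (neither is a real training API; the whole argument rests on the verifier admitting only data that is genuinely better than what the model already saw):

    from typing import Callable, List

    # Toy sketch of the bootstrapping loop described above. `train` and
    # `verify` are hypothetical stand-ins, not a real training stack;
    # the shape of the loop is the point: inference -> filter ->
    # retrain, with energy spent at every step.
    def self_improvement_loop(
        seed_corpus: List[str],
        train: Callable[[List[str]], Callable[[], str]],  # corpus -> generator "model"
        verify: Callable[[str], bool],                    # admits only better data
        generations: int = 5,
        samples_per_generation: int = 100,
    ) -> Callable[[], str]:
        corpus = list(seed_corpus)
        model = train(corpus)  # initial model, trained on human-made data
        for _ in range(generations):
            # Inference pass (energy in): sample candidate training data.
            candidates = [model() for _ in range(samples_per_generation)]
            # Filtering pass: keep only candidates the verifier accepts.
            improved = [c for c in candidates if verify(c)]
            if not improved:
                break  # the filter found nothing better: the loop stalls
            corpus.extend(improved)
            model = train(corpus)  # retraining pass (more energy in)
        return model

Note how this connects to the objection above: if `verify` can never certify anything the model didn't already know, `improved` stays empty and the loop stalls at the first generation, which is exactly the "outputs are derivatives of the training data" argument.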