
387 points | reaperducer | 1 comment
nova22033:
Related

https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter

guywithahat:
It's incredible how Tesla used to lose a few hundred million a year and analysts would freak out, claiming it would never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.

I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices on private jets.

randomNumber7:
The winner takes all, so it is reasonable to bet big to be that winner.
anonymousiam:
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
Workaccount2:
The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.

On paper, whoever gets there first, with enough compute to hand over to the AI, wins the race.

weregiraffe:
A self-improving LLM is about as probable as a perpetual motion machine.

Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.

Conceptually, if a stupid AI could build a smart AI, that would mean the stupid AI was actually smart; otherwise it wouldn't have been able to.

Marha01:
Your logic might make intuitive sense, but I don't think it is as ironclad as you make it out to be.

The fact is, there is no law of physics that prevents a system from decreasing its internal entropy (increasing its complexity) on its own, provided you constantly supply it with energy (a source of negative entropy). Evolution (or "life") is an example of such a system.

It is conceivable that at some point an LLM becomes smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can improve the data even more, and so on. Every time you run inference to edit the training data and then retrain, you are supplying a large amount of energy to the system (both inference and training consume a lot of it). That is where the decrease in entropy (the increase in the model's internal complexity and intelligence) can come from.
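The bootstrapping loop described above can be sketched as a toy simulation. Everything here is hypothetical: `improve_data`, `retrain`, and the numeric coefficients are made-up stand-ins for "the model edits its corpus" and "a new model is trained on it", chosen only to show the shape of the feedback loop, not to model any real training pipeline.

```python
def improve_data(data_quality: float, model_skill: float) -> float:
    """Hypothetical: the model filters/edits its own corpus;
    the quality gain scales with the model's current skill."""
    return data_quality + 0.1 * model_skill


def retrain(data_quality: float) -> float:
    """Hypothetical: a freshly trained model's skill tracks
    the quality of the data it was trained on."""
    return 0.9 * data_quality


skill = 0.5  # made-up initial model skill
data = 0.5   # made-up initial data quality

for step in range(10):
    data = improve_data(data, skill)  # inference pass: consumes energy
    skill = retrain(data)             # training pass: consumes energy
```

Under these assumptions the loop compounds (each round, data quality grows by a factor proportional to the model's skill), which is the optimistic scenario the comment gestures at; if the gain in `improve_data` were instead capped or shrinking, the loop would plateau, which is the pessimistic scenario the parent comment argues for.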