
387 points reaperducer | 7 comments
nova22033 ◴[] No.45772702[source]
Related

https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter

replies(4): >>45772839 #>>45772875 #>>45773834 #>>45774300 #
guywithahat ◴[] No.45772839[source]
It's incredible how Tesla used to lose a few hundred million a year and analysts would freak out, claiming they'd never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.

I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets

replies(12): >>45772879 #>>45773109 #>>45773135 #>>45773192 #>>45773193 #>>45773301 #>>45773367 #>>45773571 #>>45773718 #>>45773861 #>>45774054 #>>45774102 #
randomNumber7 ◴[] No.45772879[source]
The winner takes it all, so it is reasonable to bet big to be the one.
replies(5): >>45772975 #>>45772993 #>>45772998 #>>45773006 #>>45773409 #
anonymousiam ◴[] No.45772975[source]
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
replies(4): >>45773079 #>>45773151 #>>45773292 #>>45773477 #
1. Workaccount2 ◴[] No.45773151[source]
The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.

On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.

replies(3): >>45773287 #>>45773422 #>>45774116 #
2. kurisufag ◴[] No.45773287[source]
The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.

The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; if not, anyone with a bootstrap level of compute will also be able to do anything, given a long enough time frame.

It's not a race for ROI; it's to have your name go in the book as one of the guys who first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.

replies(1): >>45773402 #
3. forgetfulness ◴[] No.45773402[source]
The machine god would still need resources provided by humans, on their terms, to run; the AI wouldn't sweat spending, say, 5 years straight of its immortality just to figure out a 10-year plan to eventually run at 5% less power than it does now, but humans may not be willing to foot the bill for that.

There’s no guarantee that the singularity makes economic sense for humans.

replies(1): >>45773968 #
4. adastra22 ◴[] No.45773422[source]
Maybe on paper, but only on paper. There are so many half-baked assumptions in that self-improvement logic.
5. kurisufag ◴[] No.45773968{3}[source]
Presuming the kind of runaway superintelligence people usually discuss, the sort with agency, this just turns into a boxing problem.

Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?

6. weregiraffe ◴[] No.45774116[source]
A self-improving LLM is as probable as a perpetual motion machine.

Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.

Conceptually, if a stupid AI can build a smart AI, it would mean the stupid AI is actually smart; otherwise it wouldn't have been able to.

replies(1): >>45774858 #
7. Marha01 ◴[] No.45774858[source]
Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.

The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (i.e., increase its complexity) on its own, provided you constantly supply it with energy (negative entropy). An evolutionary algorithm (or "life") is an example of such a system.

It is conceivable that there is a point at which an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, and so on. Every time you run inference to edit the training data and then train on it, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (the increase in the model's internal complexity and intelligence) can come from.
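
(To make the shape of that loop concrete, here is a minimal toy sketch in Python. The train/generate_data/filter_by_quality functions are stand-ins I've invented for illustration, not any real training API; whether a quality filter can ever add information beyond the seed data is exactly the point weregiraffe disputes above.)

    # Toy sketch of bootstrapped self-improvement: each generation's
    # model expands the corpus that trains the next generation.
    # All three functions are placeholders for real machinery.

    def train(corpus):
        # "training" here just memorizes the corpus (energy in)
        return {"knows": frozenset(corpus)}

    def generate_data(model, n=10):
        # inference step: propose new candidate examples (energy in)
        return [f"derived-{i}-of-{len(model['knows'])}" for i in range(n)]

    def filter_by_quality(candidates, corpus):
        # the crux: without a filter that reliably keeps only genuinely
        # better data, the loop just recycles noise
        return [c for c in candidates if c not in corpus]

    corpus = {"seed-1", "seed-2"}
    model = train(corpus)
    for generation in range(5):
        corpus |= set(filter_by_quality(generate_data(model), corpus))
        model = train(corpus)  # retrain on the (hopefully) improved corpus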