387 points reaperducer | 18 comments
jacquesm ◴[] No.45772081[source]
These kinds of deals were very much à la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and they'd spend it again on advertising. The end result is immense revenues without profits.
replies(6): >>45772090 #>>45772213 #>>45772293 #>>45772318 #>>45772433 #>>45774073 #
zemvpferreira ◴[] No.45772318[source]
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.

OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

replies(13): >>45772378 #>>45772392 #>>45772490 #>>45772554 #>>45772661 #>>45772731 #>>45772738 #>>45772759 #>>45773088 #>>45773089 #>>45773096 #>>45773105 #>>45774229 #
1. Arkhaine_kupo ◴[] No.45772378[source]
> they’re using their equity to buy compute that is critical to improving their core technology

But we know that growth in the models is not exponential; it's much closer to logarithmic. So they spend the same equity each round to get smaller results.

The ad spend was a merry-go-round; this is a flywheel whose turning grinds down its own gears until it's a smooth burr. The math of the rising stock prices only begins to make sense if a possible breakthrough changes the flywheel into a rocket, but as it stands it's like running a lemonade stand where you reinvest profits into lemons that give out less and less juice.
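To make the shape of that argument concrete, here's a toy sketch (all numbers made up) of what logarithmic returns on compute look like: each equal slice of spend buys a smaller capability gain than the last.

```python
import math

def capability(compute):
    # Assumed toy scaling curve: capability grows with log of compute.
    return math.log10(compute)

spend_per_round = 1_000  # same "equity" spent every round (arbitrary units)
compute = 1_000

for round_num in range(1, 5):
    before = capability(compute)
    compute += spend_per_round
    gain = capability(compute) - before
    print(f"round {round_num}: capability gain = {gain:.3f}")

# Equal spend each round, but every round's gain is smaller than the last.
```

None of this proves the scaling regime is actually logarithmic; it just shows why, if it is, flat spending buys shrinking improvements.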

replies(4): >>45772556 #>>45772953 #>>45773865 #>>45775942 #
2. J_McQuade ◴[] No.45772556[source]
There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.
replies(1): >>45772683 #
3. powerhouse007 ◴[] No.45772683[source]
As much as I dislike metaphors, this sounded reasonable to me. Just don't go poking holes in the metaphor instead of the real argument.
replies(1): >>45773054 #
4. DenisM ◴[] No.45772953[source]
OpenAI invests heavily into integration with other products. If model development stalls they just need to be not worse than other stalled models while taking advantage of brand recognition and momentum to stay ahead in other areas.

In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns - it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.

Investors know that, too. A lot of the startup business is a popularity contest: number one is more attractive for the sheer fact of being number one. If you're a very rational investor and don't believe in the product, you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.

replies(2): >>45773037 #>>45773189 #
5. otherjason ◴[] No.45773037[source]
But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?
replies(3): >>45773432 #>>45773992 #>>45774565 #
6. gilleain ◴[] No.45773054{3}[source]
Indeed, poking holes in the metaphor is like putting a pin in a balloon, rather than knocking it out of the park by addressing the real argument.
7. chii ◴[] No.45773189[source]
The bigger threat is if their models "stall" while a new upstart discovers an even better model/training method.

What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or to make sure their data is no longer "safe" or "easy" to train on.

replies(1): >>45774313 #
8. accrual ◴[] No.45773432{3}[source]
I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.

The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock-in.

9. brokencode ◴[] No.45773865[source]
Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.

Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.

The models are already useful for many applications, and they are being integrated into more business and consumer products every day.

Adoption is what will turn the flywheel into a rocket.

replies(1): >>45774862 #
10. camdenreslink ◴[] No.45773992{3}[source]
If model development stalls, then the open weight free models will eventually totally catch up. The model itself will become a complete commodity.
replies(1): >>45774937 #
11. DenisM ◴[] No.45774313{3}[source]
They can also buy out the startup or match the development by hiring more people. Their comp packages are very competitive.
12. vineyardmike ◴[] No.45774565{3}[source]
Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.
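The arithmetic behind that claim is worth spelling out. With purely hypothetical numbers (each generation assumed ~3x as expensive to train as the last), every model can individually earn back its own training cost while the company as a whole keeps burning cash:

```python
# All figures hypothetical, in $B, purely to illustrate the structure.
training_costs = [1, 3, 9]   # each training run ~3x the previous (assumed)
revenues       = [2, 5, 10]  # lifetime revenue of each model (assumed)

per_model_profit = [r - c for r, c in zip(revenues, training_costs)]
for gen, (cost, rev, p) in enumerate(zip(training_costs, revenues, per_model_profit), 1):
    print(f"model {gen}: revenue {rev} vs training cost {cost} -> profit {p}")

# Each model is individually "profitable"...
print(all(p > 0 for p in per_model_profit))
# ...yet each model's revenue falls short of the NEXT, larger training bill.
print([r < nxt for r, nxt in zip(revenues, training_costs[1:])])
```

Under these assumed numbers the per-model ledger is always positive, but the cash-flow picture stays negative as long as the next bill keeps growing faster than revenue.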
replies(1): >>45774864 #
13. mentalgear ◴[] No.45774862[source]
Well, the thing is that this kind of hardware (chips) quickly decreases in value. It's not like the billions spent in past bubbles, like the 2000s when internet infrastructure (copper, fibre) was built, or even the 1950s when transport infrastructure (roads) was built.
replies(1): >>45775940 #
14. alangibson ◴[] No.45774864{4}[source]
> the income from selling access to the model has overcome the costs of training it.

Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.

replies(1): >>45777843 #
15. DenisM ◴[] No.45774937{4}[source]
It very well might. The ones with the smoothest integrations and applications will win.

This can go either way. For databases, open-source tools prevailed and the commercial activity shifted to hosting those tools.

But enterprise software integration might end up mostly proprietary.

16. brokencode ◴[] No.45775940{3}[source]
Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.

Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.

All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.

They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.

17. sidewndr46 ◴[] No.45775942[source]
There's at least one contributor here on HN that believes growth in models is strictly exponential: https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
18. JohnnyMarcone ◴[] No.45777843{5}[source]
I've also seen OpenAI and Anthropic say it's pretty close, at least. I'll try to follow up with a source.