507 points | martinald | 7 comments
JCM9 (No.45051717)
These articles (of which there are many) all make the same basic accounting mistakes. You have to include all the costs associated with the model, not just inference compute.

This article is like saying an apartment complex isn’t “losing money” because the monthly rents cover operating costs but ignoring the cost of the building. Most real estate developments go bust because the developers can’t pay the mortgage payment, not because they’re negative on operating costs.

If the cash flow were truly healthy, these companies wouldn't need to raise money. A business with healthy positive cash flow has much better mechanisms for funding capital investment than selling shares at increasingly inflated valuations, e.g. issuing a bond against that cash flow.

The fact remains that when all costs are considered these companies are losing money, and as long as the lifespan of a model is limited it's going to stay ugly. In the apartment-building analogy, it's like having to knock down and rebuild the building every 6 months to stay relevant, but saying all is well because the rents cover the cost of garbage collection and the water bill. That's simply not a viable business model.

Update Edit: A lot of the commentary below concerns R&D and training costs, and whether it's fair to exclude them from inference costs or "unit economics." I'd simply say inference is just selling compute, which should be high margin, and the article concludes it is. The growing concern about a giant AI bubble is whether that margin is sufficient to cover the cost of everything else. I'd also say that excluding the cost of the model from "unit economics" calculations doesn't make business/math/economic sense, since the model is literally the thing being sold. It's not some piece of fungible equipment or a long-term capital expense when models become obsolete after a few months. Take away the model and you're just selling compute, so it's really not a great metric for saying these companies are OK.
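The arithmetic behind this argument can be sketched directly. All figures below are invented for illustration, not OpenAI's actual numbers:

```python
def viable(monthly_inference_revenue, inference_margin,
           training_cost, model_lifespan_months, other_monthly_costs):
    """True if monthly inference gross profit covers training cost
    amortized over the model's useful life, plus everything else.
    All inputs are hypothetical illustration values."""
    gross_profit = monthly_inference_revenue * inference_margin
    amortized_training = training_cost / model_lifespan_months
    return gross_profit >= amortized_training + other_monthly_costs

# A healthy 50% inference margin on $100M/month revenue, with a $1B
# model and $20M/month of other costs:
print(viable(100e6, 0.5, 1e9, 6, 20e6))   # 6-month lifespan -> False
print(viable(100e6, 0.5, 1e9, 36, 20e6))  # 36-month lifespan -> True
```

The point being made above falls straight out of the amortization term: the same inference margin that looks "profitable" against operating costs is swamped when a billion-dollar model has to be rebuilt every few months.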

1. lkjdsklf (No.45052133)
It's fun to work backwards, but I was listening to a podcast where the journalists were talking about a dinner that Sam Altman had.

This question came up, and Sam said they were profitable if you exclude training, and the COO corrected him.

So at least for OpenAI, the answer is "no."

They did say it was close.

And that's excluding training costs, which is kind of absurd because it's not like you can stop training.

2. nixgeek (No.45052612)
Excluding training, two of their biggest costs will be payroll and inference for all the free users.

It's therefore interesting that they claimed it was close: that supports the theory that inference for paid users is a (big) money maker, if it comes close to covering all the free usage plus their payroll costs.

3. JimDabell (No.45052684)
There’s no mention of that in this article about it:

https://archive.is/wZslL

They quote him as saying inference is profitable and leave it at that.

Are you saying that the COO corrected him at the dinner, or on the podcast? Which podcast was it?

4. topaz0 (No.45052749)
Worth noting that the post only claims they should be profitable on inference for their paying customers, on a guesstimated typical workload. Free users and users with atypical usage patterns will obviously skew the whole picture, so the argument in the post is at least compatible with them still losing money on inference overall.
5. Barbing (No.45053133)
From a journalist at the dinner:

“I think that tends to end poorly because as demand for your service grows, you lose more and more money. Sam Altman actually addressed this at dinner. He was asked basically, are you guys losing money every time someone uses ChatGPT?

And it was funny. At first, he answered, no, we would be profitable if not for training new models. Essentially, if you take away all the stuff, all the money we're spending on building new models and just look at the cost of serving the existing models, we are sort of profitable on that basis.

And then he looked at Brad Lightcap, who is the COO, and he sort of said, right? And Brad kind of like squirmed in his seat a little bit and was like, well, we're pretty close.

We're pretty close. We're pretty close.

So to me, that suggests that there is still some, maybe small negative unit economics on the usage of ChatGPT. Now, I don't know whether that's true for other AI companies, but I think at some point, you do have to fix that because as we've seen for companies like Uber, like MoviePass, like all these other sort of classic examples of companies that were artificially subsidizing the cost of the thing that they were providing to consumers, that is not a recipe for long-term success.”

From Hard Fork: Is This an A.I. Bubble? + Meta’s Missing Morals + TikTok Shock Slop, Aug 22, 2025

6. JimDabell (No.45053197)
Thanks!
7. est31 (No.45053388)
GPT-5 was, I suppose, their attempt to make a product that delivers metrics as good as their earlier products'.

Uber doesn't really compare, since they had existing competition from taxi companies that they first had to (and still have to) destroy. And cars and fuel didn't get 10x cheaper over Uber's lifetime, though I'm sure they can still optimize a lot for efficiency.

I'm more worried about OpenAI's ability to build a good moat. Right now it seems that each success is quickly replicated by the competing companies; each month there is a new leader in the benchmarks. Maybe the moat will be the data in the end, i.e. there are barriers nowadays to crawling many websites that have lots of text. Meanwhile they might make agreements with the established players, and maybe some of those agreements will be exclusive, not just for training but also for staying up to date on world news.