
507 points martinald | 3 comments
JCM9 No.45051717
These articles (of which there are many) all make the same basic accounting mistake: you have to include all the costs associated with the model, not just inference compute.

This article is like saying an apartment complex isn’t “losing money” because the monthly rents cover operating costs, while ignoring the cost of the building itself. Most real estate developments go bust because the developers can’t make the mortgage payments, not because they’re underwater on operating costs.

If the cash flow were truly healthy, these companies wouldn’t need to raise money. With healthy positive cash flow you have much better mechanisms for funding capital investment than selling shares at increasingly inflated valuations, e.g. issuing a bond against that cash flow.

The fact remains that when all costs are considered these companies are losing money, and so long as the lifespan of a model is limited, it’s going to stay ugly. In the apartment-building analogy, it’s like having to knock down and rebuild the building every six months to stay relevant, but declaring all is well because the rents cover garbage collection and the water bill. That’s simply not a viable business model.

Update/Edit: A lot of commentary below concerns the R&D and training costs and whether it’s fair to exclude them from inference costs or “unit economics.” I’d simply say inference is just selling compute, and that should be high margin, which the article concludes it is. The issue behind the growing concern about a giant AI bubble is whether that margin is sufficient to cover the cost of everything else. I’d also say that excluding the cost of the model from “unit economics” calculations doesn’t make business/math/economic sense, since the model is literally the thing being sold. It’s not some bit of fungible equipment or long-term capital expense when models become obsolete after a few months. Take away the model and you’re just selling compute, so inference margin alone is really not a great metric for saying these companies are OK.
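
To put rough numbers on the point, here's a toy sketch of the full-cost accounting. Every figure is invented for illustration and is not any company's actual financials:

    # Toy P&L for a model provider. All numbers are hypothetical.
    inference_revenue = 4.0e9  # $/yr from selling inference
    inference_compute = 2.0e9  # $/yr serving cost
    training_cost     = 3.0e9  # $/yr to train replacement models
    rnd_and_staff     = 1.5e9  # $/yr research, salaries, etc.

    inference_margin = inference_revenue - inference_compute
    all_in_profit = inference_margin - training_cost - rnd_and_staff
    print(f"inference margin: {inference_margin / 1e9:+.1f}B")  # +2.0B, "healthy"
    print(f"all-in profit:    {all_in_profit / 1e9:+.1f}B")     # -2.5B, losing money

Same business, opposite sign, depending on whether the model itself is counted.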

1. jsnell No.45052319
For the top few providers, the training is getting amortized over an absurd amount of inference. E.g. Google recently mentioned that they processed 980T tokens across all surfaces in June 2025.

The leaked OpenAI financial projections for 2024 showed roughly equal amounts of money spent on training and inference.

Amortizing the training per-query really doesn't meaningfully change the unit economics.
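
As a back-of-envelope check (the 980T/month figure is the one cited above; the six-month lifetime and the $1B training cost are pure assumptions):

    # Amortizing an assumed training cost over tokens served at Google scale.
    tokens_per_month = 980e12   # Google's cited figure for June 2025
    lifetime_months  = 6        # assumed useful life before replacement
    training_cost    = 1.0e9    # assumed $1B training run

    tokens_served = tokens_per_month * lifetime_months
    per_million = training_cost / tokens_served * 1e6
    print(f"amortized training: ${per_million:.2f} per million tokens")
    # ~$0.17 per million tokens under these assumptions -- small next to
    # serving prices that are typically measured in dollars per million tokens.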

> The fact remains that when all costs are considered these companies are losing money, and so long as the lifespan of a model is limited, it’s going to stay ugly. In the apartment-building analogy, it’s like having to knock down and rebuild the building every six months to stay relevant. That’s simply not a viable business model.

To the extent they're losing money, it's because they're giving free service with no monetization to a billion users. But since the unit costs are so low, monetizing those free users with ads will be very lucrative the moment they decide to do so.
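
For scale, a hedged sketch of the ads math. The per-query cost and the ad rate are assumptions loosely in the search-ads ballpark, not anything any provider has disclosed:

    # Toy free-tier monetization math. All inputs are assumptions.
    queries_per_user_day   = 10
    serving_cost_per_query = 0.003   # assumed $ per free query
    ad_rpm                 = 30.0    # assumed $ per 1,000 ad-served queries

    daily_cost    = queries_per_user_day * serving_cost_per_query
    daily_revenue = queries_per_user_day / 1000 * ad_rpm
    print(f"per free user/day: cost ${daily_cost:.3f}, ads ${daily_revenue:.3f}")
    # $0.030 cost vs $0.300 revenue under these assumptions: roughly 10x headroom.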

2. overgard No.45054761
Assuming users accept those ads. Like, would they make it clear with a "sponsored section", or would they just try to worm it into the output? I could see a lot of ways users might reject the ads, especially if they're seen to compromise the utility or correctness of the output.
3. jsnell No.45058111
Billions of people use Google, YouTube, Facebook, TikTok, Instagram, etc., and accept the ads. Getting similar ad rates would make OpenAI fabulously profitable. They have no need to start with ad formats that might be rejected by users. Even if that were the intended endgame, you'd want to boil the frog for years.