507 points martinald | 24 comments
simonw ◴[] No.45054022[source]
https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... quotes Sam Altman saying:

> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

replies(6): >>45054061 #>>45054069 #>>45054101 #>>45054102 #>>45054593 #>>45054858 #
1. drob518 ◴[] No.45054101[source]
Which is like saying, “If all we did is charge people money and didn’t have any COGS, we’d be a very profitable company.” That’s a truism of every business and therefore basically meaningless.
replies(3): >>45054218 #>>45054231 #>>45054405 #
3. dcre ◴[] No.45054231[source]
The Amodei quote in my other reply explains why this is wrong. The point is not to compare the training of the current model to inference on the current model. The thing that makes them lose so much money is that they are training the next model while making back their training cost on the current model. So it's not COGS at all.
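A back-of-envelope sketch of that dynamic, with entirely invented numbers (quarterly figures assumed, nothing here reflects OpenAI's actual financials):

    # All numbers invented for illustration only.
    inference_revenue = 400  # $M/quarter from serving the current model
    inference_cost    = 250  # $M/quarter of serving compute
    inference_profit  = inference_revenue - inference_cost  # +150: "profitable on inference"

    next_model_training = 500  # $M/quarter spent training the successor, concurrently

    net = inference_profit - next_model_training
    print(net)  # -350: a loss overall, despite profitable inference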
replies(3): >>45054361 #>>45054385 #>>45055034 #
4. prasadjoglekar ◴[] No.45054361[source]
Well, only if that one trained model continued to function as a going business. Their amortization window for the training cost is 2 months or so. They can't just train once and keep collecting $.

They have to build the next model, or else people will go to someone else.

replies(1): >>45055004 #
5. ToucanLoucan ◴[] No.45054385[source]
So is OpenAI capable of not making a new model at some point? They've been training the next model continuously as long as they've existed AFAIK.

Our software house spends a lot on R&D, sure, but we're still incredibly profitable all the same. If OpenAI is in a position where it effectively has to stop iterating on the product to be profitable, I wouldn't call that a very good place to be when you're on the verge of taking on several hundred billion in debt.

replies(2): >>45055021 #>>45055515 #
6. gomox ◴[] No.45054405[source]
I can't imagine the hoops an accountant would have to go through to argue training cost is COGS. In the most obvious stick-figures-for-beginners interpretation, as in, "If I had to explain how a P&L statement works to an AI engineer", training is R&D cost and inference cost is COGS.
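A stick-figures-for-beginners P&L along those lines, with invented numbers (illustrative only):

    # Toy P&L: inference compute classified as COGS, training as R&D.
    revenue      = 1000  # $M
    cogs         = 600   # inference / serving compute
    gross_profit = revenue - cogs  # 400: "profitable on inference"

    rnd              = 700  # training the next model
    operating_income = gross_profit - rnd  # -300: unprofitable overall

    print(gross_profit, operating_income)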
replies(2): >>45054450 #>>45055088 #
7. jgalt212 ◴[] No.45054450[source]
There's not a bright line there, though.
8. dcre ◴[] No.45055004{3}[source]
Why two months? It was almost a year between Claude 3.5 and 4. (Not sure how much it costs to go from 3.5 to 3.7.)
replies(2): >>45055861 #>>45055934 #
9. dcre ◴[] No.45055021{3}[source]
I think at that point there is strong financial pressure to figure out how to continuously evolve models instead of training new ones from scratch, for example by building models out of smaller modules that can be trained individually and swapped out. Jeff Dean and Noam Shazeer talked about that a bit in their interview with Dwarkesh: https://www.dwarkesh.com/p/jeff-dean-and-noam-shazeer
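A minimal sketch of that idea, assuming a hypothetical model assembled from independently versioned modules (all names invented, not any real API):

    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        version: int

    class ModularModel:
        """Hypothetical model built from independently trained parts."""
        def __init__(self):
            self.modules = {}

        def swap_in(self, module):
            # Retraining replaces only this component; the rest of the
            # model keeps its existing, already-paid-for weights.
            current = self.modules.get(module.name)
            if current is None or module.version > current.version:
                self.modules[module.name] = module

    model = ModularModel()
    model.swap_in(Module("retrieval", 1))
    model.swap_in(Module("code", 1))
    model.swap_in(Module("code", 2))  # retrain and swap just one module
    print({name: m.version for name, m in model.modules.items()})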
10. drob518 ◴[] No.45055034[source]
So, if they stopped training, they’d be profitable? Only in some incremental sense, ignoring all sunk costs.
11. drob518 ◴[] No.45055088[source]
I wasn’t using COGS in a GAAP sense, but rather as a synonym for unspecified “costs.” My bad. I suppose you would classify training as development and ongoing datacenter and GPU costs as actual GAAP COGS. My point was, if all you focus on is revenue and ignore the costs of creating your business and keeping it running, it’s pretty easy for any business to be “profitable.”
replies(2): >>45055468 #>>45067956 #
12. DenisM ◴[] No.45055468{3}[source]
It’s generally useful to consider unit economics separately from the whole company. If your unit economics are negative, things are very bleak. If they’re positive, your chances go up by a lot: scaling the business amortizes fixed (non-unit) costs, such as admin and R&D, and slightly improves unit margins as well.

However this does not work as well if your fixed (non-unit) cost is growing exponentially. You can’t get out of this unless your user base grows exponentially or the customer value (and price) per user grows exponentially.

I think this is what Altman is saying: this is an unusual situation where unit economics are positive but fixed costs are exploding faster than economies of scale can absorb them.

You can say it’s splitting hairs, but an insightful perspective often requires teasing things apart.
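A toy projection of that situation, with invented growth rates: the unit margin stays positive throughout, but exponentially growing fixed costs outrun it unless the user base grows just as fast:

    # All numbers invented for illustration only.
    users        = 10_000_000
    unit_margin  = 2.0      # $/user/month: positive unit economics
    fixed_costs  = 50e6     # $/month of training/R&D today
    fixed_growth = 1.10     # fixed costs grow 10% per month
    user_growth  = 1.02     # users grow only 2% per month

    for month in range(24):
        profit = users * unit_margin - fixed_costs
        users *= user_growth
        fixed_costs *= fixed_growth

    print(round(profit / 1e6))  # deeply negative after two years, and worsening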

replies(1): >>45055996 #
13. DenisM ◴[] No.45055515{3}[source]
There’s still untapped value in deeper integrations. They might hit the jackpot of exponentially increasing value from network effects caused by tight integration with, e.g., disjoint business processes.

We know that businesses with tight network effects can grow to about $2 trillion in valuation.

replies(1): >>45056034 #
14. Jalad ◴[] No.45055861{4}[source]
Even being generous, and saying it's a year, most capital expenditures depreciate over a period of 5-7 years. To state the obvious, training one model a year is not a saving grace.
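Concretely, with an invented $1B training run, the annual burden the depreciation window implies:

    training_cost = 1000      # $M, invented figure
    print(training_cost / 5)  # 200 $M/yr if amortized like typical 5-year capex
    print(training_cost / 1)  # 1000 $M/yr if the model is stale within a year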
replies(1): >>45055983 #
15. oblio ◴[] No.45055934{4}[source]
Don't they need to accelerate that, though? Having a 1-year-old model isn't really great; it's just tolerable.
replies(1): >>45056015 #
16. dcre ◴[] No.45055983{5}[source]
I don't understand why the absolute time period matters — all that matters is that you get enough time making money on inference to make up for the cost of training.
17. drob518 ◴[] No.45055996{4}[source]
It’s splitting a hair, but a pretty important hair. Does anyone think that models won’t need continuous retraining? Does anyone think labs won’t keep trying to scale them up? Personally, I think we’re reaching diminishing returns with scaling, which is probably good because we’ve basically run out of content to train on, so perhaps that does stop or at least slow down drastically. But I don’t see a scenario where constant retraining isn’t the norm, even if the amount of content we’re using for it grows only slightly.
replies(1): >>45067944 #
18. dcre ◴[] No.45056015{5}[source]
I think this is debatable as more models become good enough for more tasks. Maybe a smaller proportion of tasks will require SOTA models. On the other hand, the set of tasks people want to use LLMs for will expand along with the capabilities of SOTA models.
19. oblio ◴[] No.45056034{4}[source]
How would that look with at least 3 US companies, probably 2 Chinese ones, and at least 1 European company all developing state-of-the-art LLMs?
replies(2): >>45056164 #>>45067589 #
20. drob518 ◴[] No.45056164{5}[source]
Like a very over-served market, I think. I see perhaps three survivors long term, or at most one gorilla, two chimps, and perhaps a few very small niche-focused monkeys.
21. DenisM ◴[] No.45067589{5}[source]
Network effects usually destroy or marginalize competition until they themselves start stagnating and decaying. Sometimes they produce partially overlapping duopolies that still retain monopoly-like power.

Facebook marginalized LinkedIn and sent Twitter into a niche.

Internet Explorer and Windows destroyed competition, for a long while.

Google Search marginalized everyone for over 20 years.

These are multi-trillion-dollar businesses. If OpenAI creates a network effect of some sort, they can join the league.

22. gomox ◴[] No.45067944{5}[source]
Well, models are definitely good enough for some things in their current state, without needing to be retrained (machine translation, for example, was a solved problem with GPT3).
replies(1): >>45070859 #
23. gomox ◴[] No.45067956{3}[source]
Got it, it's just an awfully specific term to use as a generic replacement for "cost" when the whole concept of COGS is essentially "not any cost, but specifically this kind" :)
24. drob518 ◴[] No.45070859{6}[source]
That’s true but irrelevant. No AI company is stopping training and further model development. OpenAI didn’t stop with GPT3, and they won’t stop with GPT5. No company, AI company or not, stops innovating in their market segment. You need to keep innovating to stay competitive.