507 points by martinald | 14 comments
simonw ◴[] No.45054022[source]
https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... quotes Sam Altman saying:

> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

replies(6): >>45054061 #>>45054069 #>>45054101 #>>45054102 #>>45054593 #>>45054858 #
drob518 ◴[] No.45054101[source]
Which is like saying, “If all we did is charge people money and didn’t have any COGS, we’d be a very profitable company.” That’s a truism of every business and therefore basically meaningless.
replies(3): >>45054218 #>>45054231 #>>45054405 #
1. dcre ◴[] No.45054231[source]
The Amodei quote in my other reply explains why this is wrong. The point is not to compare the training of the current model to inference on the current model. The thing that makes them lose so much money is that they are training the next model while making back their training cost on the current model. So it's not COGS at all.
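
A toy cash-flow sketch of that dynamic, in Python, with entirely made-up numbers (none of these are actual OpenAI or Anthropic figures):

    # Profit from serving the current model, minus spend on training the next one.
    def annual_cash_flow(inference_revenue, inference_cost, next_training_cost):
        return (inference_revenue - inference_cost) - next_training_cost

    # Serving model N alone clears a healthy margin ("profitable on inference")...
    print(annual_cash_flow(4.0, 2.5, next_training_cost=0.0))   # 1.5
    # ...but training the bigger model N+1 at the same time swamps that margin.
    print(annual_cash_flow(4.0, 2.5, next_training_cost=3.0))   # -1.5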
replies(3): >>45054361 #>>45054385 #>>45055034 #
2. prasadjoglekar ◴[] No.45054361[source]
Well, only if that one trained model continued to function as a going business. Their amortization window for the training cost is about two months. They can't just keep serving that model and collecting $.

They have to build the next model, or else people will go to someone else.

replies(1): >>45055004 #
3. ToucanLoucan ◴[] No.45054385[source]
So is OpenAI capable of not making a new model at some point? They've been training the next model continuously for as long as they've existed, AFAIK.

Our software house spends a lot on R&D, sure, but we're still incredibly profitable all the same. If OpenAI is in a position where they effectively have to stop iterating the product to be profitable, I wouldn't call that a very good place to be when you're on the verge of having several hundred billion dollars in debt.

replies(2): >>45055021 #>>45055515 #
4. dcre ◴[] No.45055004[source]
Why two months? It was almost a year between Claude 3.5 and 4. (Not sure how much it costs to go from 3.5 to 3.7.)
replies(2): >>45055861 #>>45055934 #
5. dcre ◴[] No.45055021[source]
I think at that point there is strong financial pressure to figure out how to continuously evolve models instead of training new ones from scratch, for example by building models out of smaller modules that can be trained individually and swapped out. Jeff Dean and Noam Shazeer talked about that a bit in their interview with Dwarkesh: https://www.dwarkesh.com/p/jeff-dean-and-noam-shazeer
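
A rough sketch of what "swappable modules" could look like; the structure and names below are hypothetical, not anything described in the interview:

    # A model assembled from independently trained components, where one piece
    # can be retrained and swapped without redoing the whole training run.
    class Module:
        def __init__(self, name, version):
            self.name = name
            self.version = version

        def forward(self, x):
            return x + self.version  # stand-in for a real sub-network's computation

    class ModularModel:
        def __init__(self, modules):
            self.modules = modules  # name -> independently trained Module

        def swap(self, name, new_module):
            self.modules[name] = new_module  # replace one component, keep the rest

        def forward(self, x):
            for m in self.modules.values():
                x = m.forward(x)
            return x

    model = ModularModel({"retrieval": Module("retrieval", 1),
                          "reasoning": Module("reasoning", 1)})
    model.swap("reasoning", Module("reasoning", 2))  # only the retrained piece changes
    print(model.forward(0))  # 3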
6. drob518 ◴[] No.45055034[source]
So, if they stopped training, they’d be profitable? Only in some incremental sense, ignoring all sunk costs.
7. DenisM ◴[] No.45055515[source]
There’s still untapped value in deeper integrations. They might hit a jackpot of exponentially increasing value from network effects created by tight integration with, e.g., disjoint business processes.

We know that businesses with tight network effects can grow to about $2 trillion in valuation.

replies(1): >>45056034 #
8. Jalad ◴[] No.45055861{3}[source]
Even being generous and saying it's a year, most capital expenditures depreciate over a period of 5-7 years. To state the obvious, training one model a year is not a saving grace.
replies(1): >>45055983 #
9. oblio ◴[] No.45055934{3}[source]
Don't they need to accelerate that, though? Having a one-year-old model isn't really great; it's just tolerable.
replies(1): >>45056015 #
10. dcre ◴[] No.45055983{4}[source]
I don't understand why the absolute time period matters — all that matters is that you get enough time making money on inference to make up for the cost of training.
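
The break-even condition here is simple arithmetic; a sketch with purely hypothetical numbers:

    # True if the inference margin earned over the model's useful life covers its training cost.
    def recoups_training(training_cost, monthly_inference_margin, useful_life_months):
        return monthly_inference_margin * useful_life_months >= training_cost

    # A short useful life fails to recoup the spend...
    print(recoups_training(1.0, monthly_inference_margin=0.05, useful_life_months=12))  # False
    # ...while a longer life (or a fatter margin) covers it.
    print(recoups_training(1.0, monthly_inference_margin=0.05, useful_life_months=24))  # True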
11. dcre ◴[] No.45056015{4}[source]
I think this is debatable as more models become good enough for more tasks. Maybe a smaller proportion of tasks will require SOTA models. On the other hand, the set of tasks people want to use LLMs for will expand along with the capabilities of SOTA models.
12. oblio ◴[] No.45056034{3}[source]
How would that look with at least 3 US companies, probably 2 Chinese ones, and at least 1 European company developing state-of-the-art LLMs?
replies(2): >>45056164 #>>45067589 #
13. drob518 ◴[] No.45056164{4}[source]
Like a very over-served market, I think. I see perhaps three survivors long term, or at most one gorilla, two chimps, and perhaps a few very small niche-focused monkeys.
14. DenisM ◴[] No.45067589{4}[source]
Network effects usually destroy or marginalize competition until they themselves start stagnating and decaying. Sometimes they produce partially overlapping duopolies, but those still maintain their monopoly-like power.

Facebook marginalized LinkedIn and sent Twitter into a niche.

Internet Explorer and Windows destroyed the competition for a long while.

Google Search marginalized everyone for over 20 years.

These are multi-trillion-dollar businesses. If OpenAI creates a network effect of some sort, they can join that league.