
507 points | martinald | 1 comment
simonw | No.45054022
https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... quotes Sam Altman saying:

> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

drob518 | No.45054101
Which is like saying, “If all we did was charge people money and didn’t have any COGS, we’d be a very profitable company.” That’s a truism of every business and therefore basically meaningless.
dcre | No.45054231
The Amodei quote in my other reply explains why this is wrong. The point is not to compare the cost of training the current model against inference revenue from the current model. What makes them lose so much money is that they are training the next model at the same time as they are earning back the current model's training cost. So it isn't COGS at all.
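To make that concrete, here is a toy calculation in Python. Every figure is hypothetical and invented purely for illustration; none of them come from OpenAI's (or Anthropic's) actual financials:

```python
# Toy model of the "profitable on inference, unprofitable overall" argument.
# All dollar figures below are hypothetical, chosen only to show the mechanic.

revenue = 10.0          # $B/yr earned serving the current model
inference_cost = 6.0    # $B/yr compute spent serving it
current_training = 3.0  # $B one-time cost already sunk into the current model
next_training = 8.0     # $B being spent right now training the successor

inference_margin = revenue - inference_cost            # +4.0: "profitable on inference"
vs_current_model = inference_margin - current_training # +1.0: current model pays for itself
overall = inference_margin - next_training             # -4.0: loss while training the next one

print(f"inference margin:                   {inference_margin:+.1f}B")
print(f"current model, all-in:              {vs_current_model:+.1f}B")
print(f"overall, while training next model: {overall:+.1f}B")
```

Under these made-up numbers, each individual model is profitable on its own ledger, yet the company books a loss every year, because the successor's (larger) training bill always lands while the current model is still paying back its own.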
drob518 | No.45055034
So, if they stopped training, they'd be profitable? Only in some incremental sense, ignoring all sunk costs.