
507 points by martinald | 3 comments
simonw ◴[] No.45054022[source]
https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... quotes Sam Altman saying:

> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

replies(6): >>45054061 #>>45054069 #>>45054101 #>>45054102 #>>45054593 #>>45054858 #
dcre ◴[] No.45054061[source]
ICYMI, Amodei said the same in much greater detail:

"If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume, in this cartoonish cartoon example, that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model, in this example, is actually profitable.

What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so the way that it's going to shake out is this will keep going up until the numbers go very large and the models can't get larger, and then it'll be a large, very profitable business, or, at some point, the models will stop getting better, right? The march to AGI will be halted for some reason, and then perhaps it'll be some overhang. So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.' And then the business returns to whatever scale it was at."

https://cheekypint.substack.com/p/a-cheeky-pint-with-anthrop...
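
A minimal Python sketch of that framing: the $100M training cost and 2x revenue multiple come from the quote, while the 3x generation-over-generation cost growth and the inference share are assumptions. It shows how each model "company" can be profitable on its own while the lab's yearly cash flow stays negative, because the next training run is always bigger.

    # Illustrative only: the $100M training cost and 2x revenue come from the quote;
    # the growth multiple and inference share below are assumptions.
    TRAIN_GROWTH = 3.0      # assumed: each generation costs ~3x more to train
    REV_MULTIPLE = 2.0      # from the quote: revenue ~2x the training cost
    INFERENCE_SHARE = 0.25  # assumed: inference cost as a fraction of revenue

    def model_pnl(train_cost):
        """Per-model 'company' P&L: revenue minus inference and training."""
        revenue = REV_MULTIPLE * train_cost
        return revenue - INFERENCE_SHARE * revenue - train_cost

    def yearly_cash_flow(train_cost):
        """Company-level view: reap this model while training the next one."""
        return model_pnl(train_cost) - TRAIN_GROWTH * train_cost

    cost = 100e6  # the 2023 model: $100M to train
    for year in range(2023, 2027):
        print(year,
              f"per-model P&L {model_pnl(cost) / 1e6:+.0f}M,",
              f"cash flow while training the next model {yearly_cash_flow(cost) / 1e6:+.0f}M")
        cost *= TRAIN_GROWTH

Every row is profitable per model, but company-level cash flow stays negative as long as training costs keep growing, which matches the quote's point that it only becomes "a large, very profitable business" once models stop getting larger.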

replies(9): >>45054612 #>>45054646 #>>45054678 #>>45054731 #>>45054753 #>>45054819 #>>45055347 #>>45055378 #>>45055855 #
827a ◴[] No.45054678[source]
OpenAI and Anthropic have very different customer bases and usage profiles. I'd estimate a significantly higher percentage of Anthropic's tokens are paid for by the customer than OpenAI's. The ChatGPT free tier is orders of magnitude more popular than Claude's free tier, and Anthropic in all likelihood does a higher share of API business relative to consumer business than OpenAI does.

In other words, it's possible this story is true for Anthropic but not for OpenAI.
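
A toy illustration of that mix effect (every number here is made up): inference can carry a healthy margin on paid tokens while the blended figure goes negative once free traffic dominates.

    # Assumed figures, purely for illustration.
    PRICE_PER_MTOK = 10.0   # revenue per million paid tokens, $
    COST_PER_MTOK = 4.0     # inference cost per million tokens, paid or free, $

    def blended_margin(paid_share):
        """Net $ per million tokens served, given the fraction that is paid."""
        return paid_share * PRICE_PER_MTOK - COST_PER_MTOK

    for paid_share in (0.9, 0.6, 0.3):   # API-heavy mix vs. free-tier-heavy mix
        print(f"paid share {paid_share:.0%}: {blended_margin(paid_share):+.1f} $/Mtok")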

replies(1): >>45055126 #
dcre ◴[] No.45055126{3}[source]
Good point. It's very possible that Altman is excluding the free tier as a marketing cost, even if it loses more money than they make on paid customers. On the other hand, they may be able to cut free-tier costs a lot by having the model router send queries to gpt-5-mini that previously went to 4o.
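
A hypothetical sketch of that routing idea, not OpenAI's actual router; the model names, per-token costs, and the "looks hard" heuristic are placeholders.

    # Send free-tier traffic to a cheap model unless a heuristic flags the query as hard.
    COST_PER_MTOK = {"gpt-5": 10.0, "gpt-5-mini": 0.5}   # assumed serving cost, $/Mtok

    def route(query: str, is_free_tier: bool) -> str:
        looks_hard = len(query) > 2000 or "step by step" in query.lower()
        if is_free_tier and not looks_hard:
            return "gpt-5-mini"   # cheap path for most free-tier traffic
        return "gpt-5"

    # Back-of-envelope saving if 90% of free-tier tokens take the cheap path:
    saving = 0.9 * (COST_PER_MTOK["gpt-5"] - COST_PER_MTOK["gpt-5-mini"])
    print(f"~${saving:.1f} saved per million free-tier tokens rerouted")
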
replies(2): >>45055307 #>>45057591 #
1. DenisM ◴[] No.45055307{4}[source]
Free tier provides a lot of training material. Every time you correct ChatGPT on its mistakes, you're giving them knowledge that's not in any book or website.

That's a moat, albeit one that is slow to build.

replies(1): >>45056049 #
2. dcre ◴[] No.45056049[source]
That's interesting, though you have to imagine the data set is very low quality on average, and distilling high-quality training pairs out of it is very costly.
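
A rough sketch of what that distillation step might look like (the correction cues and length filter are assumed heuristics): most logs get filtered out, and whatever survives still needs expensive review before it becomes a training pair.

    CORRECTION_CUES = ("that's wrong", "actually,", "no, it should be")

    def extract_correction_pairs(conversations):
        """Yield (model_answer, user_correction) pairs worth sending to review."""
        for turns in conversations:
            for prev, curr in zip(turns, turns[1:]):
                if prev["role"] == "assistant" and curr["role"] == "user":
                    text = curr["text"].lower()
                    if any(cue in text for cue in CORRECTION_CUES) and len(curr["text"]) > 40:
                        yield prev["text"], curr["text"]

    convo = [
        {"role": "user", "text": "When did the bridge open?"},
        {"role": "assistant", "text": "It opened in 1935."},
        {"role": "user", "text": "Actually, it opened in 1937 according to the city archive records."},
    ]
    print(list(extract_correction_pairs([convo])))
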
replies(1): >>45067464 #
3. DenisM ◴[] No.45067464[source]
Hence the exponential increase in model training costs, and the hallucinations in the long tail of knowledge.