> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
"If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume, in this cartoonish cartoon example, that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model, in this example, is actually profitable.
What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so the way that it's going to shake out is this will keep going up until the numbers go very large and the models can't get larger, and then it'll be a large, very profitable business, or, at some point, the models will stop getting better, right? The march to AGI will be halted for some reason, and then perhaps it'll be some overhang. So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.' And then the business returns to whatever scale it was at."
https://cheekypint.substack.com/p/a-cheeky-pint-with-anthrop...
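To make the cartoon concrete, here's a minimal sketch of that accounting. The $100M training cost and the 2x revenue multiple come from the quote; the 10x cost growth per generation and the 25% inference cost are made-up illustrative numbers, not anything Anthropic has disclosed.

```python
# Sketch of the "each model is a company" cartoon above.
# From the quote: $100M training -> $200M lifetime revenue.
# Assumed for illustration: training costs grow 10x per generation,
# inference costs consume 25% of a model's revenue.
train_costs = [100, 1_000, 10_000]  # $M per model generation

for i, cost in enumerate(train_costs):
    revenue = 2 * cost                 # lifetime revenue of this model
    margin = revenue - 0.25 * revenue  # revenue net of inference costs
    model_pnl = margin - cost          # each model "company" is profitable

    # But the lab pays for the next, much bigger run at the same time
    # as it reaps this model's margin.
    next_train = train_costs[i + 1] if i + 1 < len(train_costs) else 0
    company_cash = margin - next_train

    print(f"gen {i}: model P&L {model_pnl:+.0f} $M, "
          f"company cash flow {company_cash:+.0f} $M")
```

In this toy run every generation ends its life in the black, yet the company's cash flow in any given year stays negative for as long as the next training run keeps getting 10x more expensive, which is exactly the dynamic the quote describes.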
In other words, it's possible this story is true for Anthropic, but not for OpenAI.
However, at the same time, I found myself preferring Claude's answers most of the time, yet using it much less because I was constantly hitting its limits. So guess what I did: I cancelled my OpenAI subscription and moved to Anthropic. On top of that, I get Claude Code, which OpenAI really has no serious competitor for.
I still use both models, but I never run into problems with OpenAI, so I see no reason to pay for it.