> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
"If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume, in this cartoonish cartoon example, that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model, in this example, is actually profitable.
What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so the way that it's going to shake out is this will keep going up until the numbers go very large and the models can't get larger, and then it'll be a large, very profitable business, or, at some point, the models will stop getting better, right? The march to AGI will be halted for some reason, and then perhaps it'll be some overhang. So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.' And then the business returns to whatever scale it was at."
https://cheekypint.substack.com/p/a-cheeky-pint-with-anthrop...
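A toy version of Amodei's framing, with the $100M training cost and 2x revenue multiple taken from the quote above and the per-generation cost growth assumed purely for illustration:

    # Each model "company" is profitable on its own, but the firm keeps
    # founding a much more expensive successor before the profits land.
    train_cost = 100e6       # from the quote: $100M to train the 2023 model
    revenue_multiple = 2.0   # from the quote: ~2x training cost in revenue
    cost_growth = 10.0       # assumed: each generation costs ~10x more

    for year in range(2023, 2027):
        model_profit = revenue_multiple * train_cost - train_cost
        next_train_cost = train_cost * cost_growth
        cash_flow = model_profit - next_train_cost
        print(f"{year}: model profit ${model_profit/1e6:,.0f}M, "
              f"next model ${next_train_cost/1e6:,.0f}M, "
              f"company cash flow ${cash_flow/1e6:,.0f}M")
        train_cost = next_train_cost

Every model in the loop is individually profitable, yet company-level cash flow stays negative until the training bill stops growing, which is exactly the dynamic the quote describes.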
I.e., OpenAI invests in Cursor/Windsurf/startups that give away credits to users and make heavy use of the inference API. Money flows back to OpenAI, and then OpenAI sends it back to those companies via credits/investment dollars.
It's even more circular in this case because Nvidia is also funding companies that generate significant inference.
It'll be quite difficult to figure out whether it's actually profitable until the new investment dollars start to dry up.
All of these alternatives mean different things when you say it takes 20+ seconds for a full response.
They have to build the next model, or else people will go to someone else.
Our software house spends a lot on R&D, sure, but we're still incredibly profitable all the same. If OpenAI is in a position where they effectively have to stop iterating on the product to be profitable, I wouldn't call that a very good place to be when you're on the verge of taking on several hundred billion in debt.
They aren't yet profitable even just on inference, and it's possible Sam didn't know that until very recently.
[1] https://www.nytimes.com/2025/08/22/podcasts/is-this-an-ai-bu...
OpenAI's fund is ~$250-300mm, and Nvidia reportedly invested $1b last year: still way less than OpenAI's revenue.
In other words, it's possible this story is correct and true for Anthropic, but not true for OpenAI.
https://www.reuters.com/legal/government/anthropics-surprise...
Also, in Nike's case, as they grow they get better at making more shoes for cheaper. LLM model providers tell us that every new model (shoe) costs multiples more than the last one to develop. If they make 2x revenue on training cost, as he said, then to stay profitable they have to either double prices or double users every year, or stop making new models (see the sketch after the next paragraph).
A better metaphor would be oil and gas production, where existing oil and gas fields are either already finished (i.e. model is no longer SOTA -- no longer making a return on investment) or currently producing (SOTA inference -- making a return on investment). The key similarity with AI is new oil and gas fields are increasingly expensive to bring online because they are harder to make economical than the first ones we stumbled across bubbling up in the desert, and that's even with technological innovation. That is to say, the low hanging fruit is long gone.
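To make the Nike arithmetic above concrete (a back-of-envelope sketch; all the numbers here are assumed):

    # If each generation costs 2x the last to train, and revenue must stay
    # at ~2x training cost, required revenue doubles every generation too:
    # double prices, double users, or stop making new models.
    price_per_user_year = 240.0   # assumed: $20/month subscription
    train_cost = 1e9              # assumed: $1B for the current generation

    for gen in range(1, 5):
        required_revenue = 2 * train_cost
        users_needed = required_revenue / price_per_user_year
        print(f"gen {gen}: training ${train_cost/1e9:.0f}B -> "
              f"${required_revenue/1e9:.0f}B revenue needed, "
              f"~{users_needed/1e6:.0f}M subscribers at flat prices")
        train_cost *= 2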
That is an OpenAI skeptic. His research, if correct, says that not only is OpenAI unprofitable, it likely never will be. It can't be: its various finance ratios make early Uber, Amazon, etc. look downright fiscally frugal.
He is not a tech person, for whatever that means to you.
You'd think maybe the CEO might be able to give a ballpark figure for the profit made off that 2023 model.
ETA: "You paid $100 million... There's some cost to inference with the model, but let's just assume ... that even if you add those two up, you're kind of in a good state."
You see this, right? He literally says that if you assume revenue exceeds costs, then it's profitable. He doesn't actually say that it does, though.
Basically each new company puts competitive pressure on the previous company, and together they compress margins.
They are racing themselves to the bottom. I imagine they know this and bet on AGI primacy.
Model as a product is the reality, but each model competes with previous models and is only successful if it's both more cost-effective and more effective in general at its tasks. By the time you get to model Z, you'll never use model A for any task; the model lineage cannibalizes its own sales.
However, this does not work as well if your fixed (non-unit) cost is growing exponentially. You can't get out of this unless your user base grows exponentially or the customer value (and price) per user grows exponentially.
I think this is what Altman is saying: this is an unusual situation where unit economics are positive, but fixed costs are exploding faster than economies of scale can absorb them.
You can say it's splitting hairs, but insightful perspective often requires teasing things apart.
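A minimal sketch of that dynamic, with every number assumed: each unit is sold at a profit, yet the company slides into the red because the fixed bill grows faster than the user base.

    users = 1e6
    margin_per_user = 100.0   # assumed: revenue minus inference cost, $/user/yr
    fixed_cost = 50e6         # assumed: this year's training/R&D spend
    user_growth, fixed_growth = 2.0, 3.0   # assumed annual growth rates

    for year in range(5):
        net = users * margin_per_user - fixed_cost
        print(f"year {year}: gross margin ${users*margin_per_user/1e6:,.0f}M, "
              f"fixed ${fixed_cost/1e6:,.0f}M, net ${net/1e6:,.0f}M")
        users *= user_growth
        fixed_cost *= fixed_growth

Positive unit economics buy time, but they never catch an exponentially faster fixed cost unless one of the growth rates changes.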
We know that businesses with tight network effects can grow to about $2 trillion in valuation.
This is clearly the case for models as well. Training and serving inference for GPT4 level models is probably >100x cheaper than it used to be. Nike has been making Jordan 1's for 40+ years! OpenAI would be incredibly profitable if they could live off the profit from improved inference efficiency on a GPT4 level model!
If you don't like "model as company," how about "model as making a movie?" Any given movie could be profitable or not. It's not necessarily the case that movie budgets always get bigger or that an increased budget is what you need to attract an audience.
>>OpenAI would be incredibly profitable if they could live off the profit from improved inference efficiency on a GPT4 level model!
If GPT4 was basically free money at this point, it's really weird that their first instinct was to cut it off after GPT5.
Each node is much more expensive to design for, but when you finally have it, you basically print money.
And of course you always have to develop the next, more powerful and power-efficient CPU to stay competitive.
> ICYMI, Amodei said the same
No. He says that, even paying for training, a model is profitable. It makes more revenue than it costs, all things considered. A much stronger claim.
Uber burnt through a lot of money, and even now I'm not sure their lifetime net income is positive (it's possible that since their founding they've lost more money than they've made).
This was largely the case for software from the '80s through the '10s (when versioned releases largely disappeared) and still is the case in hardware. The iPhone 17 will certainly cost far more to develop than the iPhone X or 5 did; the iPhone 5 cost far more than the 3G, etc.
You can see here: https://www.reddit.com/r/dataisbeautiful/comments/16dr1kb/oc...
New ones are generally cheaper if adjusted for inflation. Those are sale prices, but assuming margins stay the same, they should reflect manufacturing cost. And from what I remember of Apple's earnings, their margins have increased over time, which means the new phones are even cheaper to make. Which kind of makes sense.
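The inflation adjustment being referenced, sketched with assumed figures (the launch prices and CPI factor here are illustrative, not taken from the linked chart):

    iphone5_2012 = 649.0   # assumed: iPhone 5 launch price, 2012
    iphone_now = 799.0     # assumed: recent base-model launch price
    cpi_factor = 1.37      # assumed cumulative US inflation, 2012 -> today

    print(f"iPhone 5 in today's dollars: ~${iphone5_2012 * cpi_factor:.0f} "
          f"vs ${iphone_now:.0f} now")
    # ~$889 vs $799: the newer phone is cheaper in real terms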
Recent iPhones use Apple's own custom silicon for a number of components, and are generally vastly more complex. The estimates I have seen for iPhone 1 development range from $150 million to $2.5 billion. Even adjusting for inflation, a current iPhone generation costs more to develop than the older versions did.
And it absolutely makes sense for Apple to spend more in total to develop successive generations, because they have less overall product risk and larger scale to recoup.
People find the UX of choosing a model very confusing, the idea with 5 is that it would route things appropriately and so eliminate this confusion. That was the motivation for removing 4. But people were upset enough that they decided to bring it back for a while, at least.
However, at the same time, I was using Claude much less than I wanted: I really preferred its answers most of the time, but was constantly being hit with limits. So guess what I did: I cancelled my OpenAI subscription and moved to Anthropic. On top of that, I get Claude Code, which OpenAI really has no serious competitor for.
I still use both models, but I never run into problems with OpenAI, so I see no reason to pay for it.
He is not wrong about everything. For example, after Sam Altman said in January that OpenAI would introduce a model picker, Zitron was able to predict in March that OpenAI would introduce a model picker. And he was right about that.
It's not responsive at all to Zitron's point. Zitron's broader contention is that AI tools are not profitable because the cost of AI use is too high for users to justify spending money on the output, given the quality of that output. And furthermore, he argues that this basic fact is being obscured by lots of shell games around numbers to hide the basic cash flow issue: for example, focusing on cost per token rather than cost per task. And finally, there's an implicit assumption that the AI just isn't getting tremendously better, as might be exemplified by... burning twice as many tokens on the task in the hopes the quality goes up.
And in that context, the response is "Aha, he admits that there is a knob to trade off cost and quality! Entire argument debunked!" The existence of a cost-quality tradeoff doesn't speak to whether or not that line will intersect the quality-value tradeoff. I grant that a lot turns on how good you think AI is and/or will shortly be, and Zitron is definitely a pessimist there.
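Zitron's per-token vs. per-task framing, as a toy calculation (all numbers assumed): the headline token price can fall while the cost of getting a task done rises, if newer models burn far more tokens per task.

    models = {
        "older model": {"usd_per_mtok": 10.0, "tokens_per_task": 2_000},
        "newer model": {"usd_per_mtok": 5.0,  "tokens_per_task": 20_000},
    }
    for name, m in models.items():
        cost_per_task = m["usd_per_mtok"] * m["tokens_per_task"] / 1e6
        print(f"{name}: ${m['usd_per_mtok']}/Mtok, "
              f"{m['tokens_per_task']:,} tok/task -> ${cost_per_task:.2f}/task")
    # token price halved, but cost per task went up 5x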
Ed doesn’t really make that argument anymore. The more recent form of the point is: yes, clearly people are willing to pay for it, but only because the providers are burning VC money to sell it below cost. If sold at a profit, customers would no longer find it worth it. But that’s completely different from what you’re saying. And I also think that’s not true, for a few reasons: mostly that selling near cost is the simplest explanation for the similarity of prices between providers. And now recently we have both Altman and Amodei saying their companies are selling inference at a profit.
> At first, he answered, no, we would be profitable, if not for training new models. Essentially, if you take away all the stuff, all the money we’re spending on building new models and just look at the cost of serving the existing models, we are profitable on that basis. And then he looked at Brad Lightcap, who is the COO. And he sort of said, right? And Brad kind of squirmed in his seat a little bit and was like, well — He’s like, we’re pretty close.
I don't think you can square that with what he stated in the Axios article:
> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
Except, that is, if the NYT dinner happened after the Axios interview (which is possible, given when each was published) and he was actually, literally unaware of the company's financials.
Personally: it feels like it should reflect very poorly on OpenAI that their CEO has been, charitably, entirely unaware of how close they are to profitability (and uncharitably, actively lying about it). But I'm not sure the broader news cycle caught it; the only place I've heard this mentioned is literally this NYT Hard Fork podcast, which is hosted by the people who were at the dinner.
I'm not an AI hater. I genuinely hope it takes over every single white-collar job that exists. I'm not being sarcastic or hyperbolic. Only then will we be able to re-discuss what society is in a more humane way.
Facebook marginalized LinkedIn and sent Twitter into a niche.
Internet Explorer and Windows destroyed competition, for a long while.
Google Search marginalized everyone for over 20 years.
These are multi-trillion-dollar businesses. If OpenAI creates a network effect of some sort they can join the league.