> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
They have to build the next model, or else people will go to someone else.
Our software house spends a lot on R&D too, but we're still incredibly profitable all the same. If OpenAI is in a position where they effectively have to stop iterating the product to be profitable, I wouldn't call that a very good place to be when you're on the verge of taking on several hundred billion in debt.
However, this does not work as well if your fixed (non-unit) cost is growing exponentially. You can't get out of that unless your user base grows exponentially, or the customer value (and price) per user does.

I think this is what Altman is saying, and it's an unusual situation: unit economics are positive, but fixed costs are exploding faster than economies of scale can absorb them.
You can say it’s splitting hairs, but an insightful perspective often requires teasing things apart.
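The dynamic above is easy to see with a toy model: profit is users times gross margin per user, minus a fixed training cost that compounds faster than the user base. All the numbers here are made-up assumptions for illustration, not OpenAI figures.

```python
# Toy model: positive unit economics overwhelmed by exponentially
# growing fixed (training) costs. All numbers are illustrative.

def annual_profit(users, price, unit_cost, fixed_cost):
    """Profit = gross margin on inference minus fixed training spend."""
    return users * (price - unit_cost) - fixed_cost

users = 10_000_000      # paying users (assumed)
price = 240.0           # revenue per user per year ($20/mo, assumed)
unit_cost = 120.0       # inference cost per user per year (assumed)
fixed = 1e9             # training spend in year 0 (assumed)

for year in range(4):
    p = annual_profit(users, price, unit_cost, fixed)
    print(f"year {year}: profit ${p / 1e9:+.2f}B")
    users = int(users * 1.5)   # users grow 1.5x per year
    fixed *= 2.5               # training costs grow 2.5x per year
```

With these assumed growth rates, inference margin is positive every year, yet total profit flips negative almost immediately, which is the "profitable on inference, unprofitable overall" shape being described.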
We know that businesses with strong network effects can grow to roughly $2 trillion in valuation.
Facebook marginalized LinkedIn and pushed Twitter into a niche.
Internet Explorer and Windows shut out competition for a long while.
Google Search marginalized everyone for over 20 years.
These are multi-trillion-dollar businesses. If OpenAI creates a network effect of some sort, it can join that league.