> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
I.e., OpenAI invests in Cursor, Windsurf, and other startups that give away credits to users and make heavy use of the inference API. Money flows back to OpenAI, then OpenAI sends it back to those companies as credits and investment dollars.
It's even more circular in this case because Nvidia is also funding companies that generate significant inference.
It'll be quite difficult to figure out whether it's actually profitable until the new investment dollars start to dry up.
OpenAI's fund is ~$250-300mm; Nvidia reportedly invested $1b last year. Still way less than OpenAI's revenue.
That is an OpenAI skeptic. His research, if correct, says not only that OpenAI is unprofitable but that it likely never will be. It can't be: its various finance ratios make early Uber, Amazon, etc. look downright fiscally frugal.
He is not a tech person, for whatever that means to you.
Uber burnt through a lot of money, and even now I'm not sure their cumulative profit is positive (it's possible that since their founding they've lost more money than they've made).
He is not wrong about everything. For example, after Sam Altman said in January that OpenAI would introduce a model picker, Zitron was able to predict in March that OpenAI would introduce a model picker. And he was right about that.
It's not responsive at all to Zitron's point. Zitron's broader contention is that AI tools are not profitable because the cost of AI use is too high for users to justify spending money on the output, given the quality of that output. And furthermore, he argues that this basic fact is being obscured by lots of shell games around numbers to hide the basic cash flow issue. For example, focusing on cost in terms of cost per token rather than cost per task. And finally, there's an implicit assumption that the AI just isn't getting tremendously better, as might be exemplified by... burning twice as many tokens on the task in the hopes the quality goes up.
And in that context, the response is "Aha, he admits that there is a knob to trade off cost and quality! Entire argument debunked!" The existence of a cost-quality tradeoff doesn't tell you whether that curve ever reaches a point where the quality delivered is worth the cost. I grant that a lot turns on how good you think AI is and/or will shortly be, and Zitron is definitely a pessimist there.
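To make the per-token vs. per-task distinction concrete, here's a toy calculation (all numbers are hypothetical, chosen only for illustration): per-token prices can fall while the cost per task rises, if the model burns more tokens per task to get its quality up.

```python
# Illustrative sketch (hypothetical numbers): falling cost per token
# does not automatically mean falling cost per task.

def cost_per_task(tokens_per_task: int, usd_per_million_tokens: float) -> float:
    """Total inference cost, in USD, for completing one task."""
    return tokens_per_task * usd_per_million_tokens / 1_000_000

# Suppose per-token prices halve, but a reasoning-heavy mode burns
# 4x the tokens per task in the hope of better output quality.
old = cost_per_task(tokens_per_task=50_000, usd_per_million_tokens=10.0)
new = cost_per_task(tokens_per_task=200_000, usd_per_million_tokens=5.0)

print(f"old cost per task: ${old:.2f}")  # $0.50
print(f"new cost per task: ${new:.2f}")  # $1.00 -- doubled, despite cheaper tokens
```

So a headline like "token prices dropped 50%" can coexist with tasks getting more expensive, which is exactly why quoting cost per token rather than cost per task can obscure the cash flow picture.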
Ed doesn’t really make that argument anymore. The more recent form of the point is: yes, clearly people are willing to pay for it, but only because the providers are burning VC money to sell it below cost. If sold at a profit, customers would no longer find it worth it. But that’s completely different from what you’re saying. And I also think that’s not true, for a few reasons: mostly that selling near cost is the simplest explanation for the similarity of prices between providers. And now recently we have both Altman and Amodei saying their companies are selling inference at a profit.
I'm not an AI hater. I genuinely hope it takes over every single white-collar job that exists. I'm not being sarcastic or hyperbolic. Only then will we be able to re-discuss what society is in a more humane way.