
507 points martinald | 2 comments
simonw ◴[] No.45054022[source]
https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... quotes Sam Altman saying:

> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

replies(6): >>45054061 #>>45054069 #>>45054101 #>>45054102 #>>45054593 #>>45054858 #
aeternum ◴[] No.45054102[source]
This can be technically true without being actually true.

I.e., OpenAI invests in Cursor/Windsurf/startups that give away credits to users and make heavy use of the inference API. Money flows back to OpenAI, and OpenAI then sends it back to those companies as credits and investment dollars.

It's even more circular in this case because Nvidia is also funding companies that generate significant inference demand.

It'll be quite difficult to figure out whether it's actually profitable until the new investment dollars start to dry up.

replies(3): >>45054665 #>>45054677 #>>45055299 #
citizenpaul ◴[] No.45055299[source]
There's a journalist, Ed Zitron:

https://www.wheresyoured.at/

who is an OpenAI skeptic. His research, if correct, says that not only is OpenAI unprofitable, it likely never will be. It can't be: its various finance ratios make early Uber, Amazon, etc. look downright fiscally frugal.

He is not a tech person, for whatever that means to you.

replies(2): >>45056072 #>>45056292 #
dcre ◴[] No.45056292[source]
Zitron is not a serious analyst.

https://bsky.app/profile/davidcrespo.bsky.social/post/3lxale...

https://bsky.app/profile/davidcrespo.bsky.social/post/3lo22k...

https://bsky.app/profile/davidcrespo.bsky.social/post/3lwhhz...

https://bsky.app/profile/davidcrespo.bsky.social/post/3lv2dx...

replies(3): >>45057396 #>>45057999 #>>45060687 #
1. jcranmer ◴[] No.45057999[source]
Since only the first one responds to any of Zitron's content that I've actually read, I'll respond only to that one:

It's not responsive at all to Zitron's point. Zitron's broader contention is that AI tools are not profitable because the cost of AI use is too high for users to justify spending money on the output, given the quality of that output. Furthermore, he argues that this basic fact is being obscured by lots of shell games around the numbers to hide the basic cash flow issue: for example, focusing on cost per token rather than cost per task. And finally, there's an implicit assumption that the AI just isn't getting tremendously better, as might be exemplified by... burning twice as many tokens on a task in the hope the quality goes up.

And in that context, the response is "Aha, he admits that there is a knob to trade off cost and quality! Entire argument debunked!" The existence of a cost-quality tradeoff doesn't speak to whether or not that line will intersect the quality-value tradeoff. I grant that a lot turns on how good you think AI is and/or will shortly be, and Zitron is definitely a pessimist there.
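The cost-per-token vs cost-per-task distinction above can be sketched with purely hypothetical numbers (not real provider pricing): a falling per-token price can still mean a rising per-task cost if the model burns far more tokens per task, e.g. via "thinking" tokens.

```python
# Hypothetical numbers, for illustration only -- not real pricing.
def cost_per_task(price_per_mtok, tokens_per_task):
    """Cost of one task given a price per million tokens."""
    return price_per_mtok * tokens_per_task / 1_000_000

# Older model: higher per-token price, few tokens per task.
old = cost_per_task(price_per_mtok=10.0, tokens_per_task=2_000)   # $0.02
# Newer model: per-token price halved, but 10x the tokens per task.
new = cost_per_task(price_per_mtok=5.0, tokens_per_task=20_000)   # $0.10

print(f"old: ${old:.2f}/task, new: ${new:.2f}/task")
# Per-token price fell 50%, yet per-task cost rose 5x.
```

This is the sense in which a per-token framing can obscure what a task actually costs.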

replies(1): >>45058802 #
2. dcre ◴[] No.45058802[source]
Already in your first point you are mixing up two claims Ed also likes to mix up. The funny thing is that these claims are in direct conflict with each other. There is the question of whether people find AI worth paying for given what they get. You seem to think this is in some doubt; meanwhile, there are tons of people paying for it, some even begging to be allowed to pay more in order to get more. The labs have revenue growing 20% per month. So I think that version of the point is absurd on its face. (And that's exactly why my point about the cost-quality tradeoff being real is relevant. At least we agree on the relationship between these points.)
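For scale, 20% month-over-month compounds to roughly a 9x revenue multiple over a year; a quick check, with the growth rate as the only input:

```python
# 20% month-over-month growth, compounded over 12 months.
monthly_growth = 0.20
annual_multiple = (1 + monthly_growth) ** 12
print(f"{annual_multiple:.1f}x revenue after one year")  # ~8.9x
```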

Ed doesn't really make that argument anymore. The more recent form of the point is: yes, clearly people are willing to pay for it, but only because the providers are burning VC money to sell it below cost; if it were sold at a profit, customers would no longer find it worth it. But that's a completely different claim from the one you're making. And I also think it's not true, for a few reasons: mainly that selling near cost is the simplest explanation for the similarity of prices between providers. And now, recently, we have both Altman and Amodei saying their companies sell inference at a profit.