> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
They aren't yet profitable even just on inference, and it's possible Sam didn't know that until very recently.
[1] https://www.nytimes.com/2025/08/22/podcasts/is-this-an-ai-bu...
> At first, he answered, no, we would be profitable, if not for training new models. Essentially, if you take away all the stuff, all the money we’re spending on building new models and just look at the cost of serving the existing models, we are profitable on that basis. And then he looked at Brad Lightcap, who is the COO. And he sort of said, right? And Brad kind of squirmed in his seat a little bit and was like, well — He’s like, we’re pretty close.
I don't think you can square that with what he stated in the Axios article:
> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
The only way to square them is if the NYT dinner happened after the Axios interview, which is possible given when each was published, and he was genuinely unaware of the company's financials at the time.
Personally: it feels like it should reflect very poorly on OpenAI that their CEO has been, charitably, entirely unaware of how close they are to profitability (or, uncharitably, that he actively lies about it). But I'm not sure the broader news cycle caught it; the only place I've heard this mentioned is this NYT Hard Fork podcast, which is hosted by the people who were at the dinner.