> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
"If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume, in this cartoonish cartoon example, that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model, in this example, is actually profitable.
What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so the way that it's going to shake out is this will keep going up until the numbers go very large and the models can't get larger, and then it'll be a large, very profitable business, or, at some point, the models will stop getting better, right? The march to AGI will be halted for some reason, and then perhaps it'll be some overhang. So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.' And then the business returns to whatever scale it was at."
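A quick way to see the dynamic he's describing is to separate per-model P&L from company-level cash flow. Below is a minimal sketch with made-up numbers loosely following his cartoon example (each model returns about 2x its training cost, each successor costs roughly 3x more to train); none of these figures are real, only the shape matters:

```python
# Hypothetical illustration of the "each model is a company" framing.
# Every model is individually profitable, but the company keeps burning cash
# because the next model's (much larger) training bill lands in the same
# period as the current model's revenue.

train_cost = 100e6        # 2023-era model: $100M to train (his example)
growth = 3                # assume each new model costs ~3x the previous one
revenue_multiple = 2      # each model earns ~2x its own training cost
inference_margin = 0.75   # share of that revenue kept after inference costs

for year in range(2023, 2028):
    next_train_cost = train_cost * growth
    model_profit = train_cost * revenue_multiple * inference_margin - train_cost
    company_cash_flow = model_profit - next_train_cost   # funding the successor
    print(f"{year}: model P&L {model_profit / 1e6:+7.0f}M, "
          f"company cash flow {company_cash_flow / 1e6:+9.0f}M")
    train_cost = next_train_cost
```

Each row's model is in the black on its own, while the company as a whole shows a growing loss, right up until the day it stops training a bigger successor.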
https://cheekypint.substack.com/p/a-cheeky-pint-with-anthrop...
Also, in Nike's case, as they grow they get better at making more shoes for less. LLM providers tell us the opposite: every new model (shoe) costs multiples more than the last one to develop. If each model only returns about 2x its training cost, as he says, then to stay profitable they have to either double prices or double their user base every year, or stop making new models.
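Back-of-the-envelope, with invented numbers: if each generation's training bill doubles and each model still has to return roughly 2x its own training cost, then required revenue doubles every generation too, which is where the "double prices or double users" bind comes from.

```python
# Hypothetical: training cost doubles per generation; each model must return
# ~2x its training cost, so required revenue doubles per generation as well.
train_cost = 1.0                      # normalized to generation 1
for gen in range(1, 6):
    required_revenue = 2 * train_cost
    print(f"gen {gen}: training cost {train_cost:4.0f}x, "
          f"required revenue {required_revenue:4.0f}x gen-1 training cost")
    train_cost *= 2
```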
A better metaphor would be oil and gas production, where existing oil and gas fields are either already finished (i.e. model is no longer SOTA -- no longer making a return on investment) or currently producing (SOTA inference -- making a return on investment). The key similarity with AI is new oil and gas fields are increasingly expensive to bring online because they are harder to make economical than the first ones we stumbled across bubbling up in the desert, and that's even with technological innovation. That is to say, the low hanging fruit is long gone.
This was largely the case for software from the '80s through the '10s (until versioned releases mostly disappeared), and it is still the case for hardware. The iPhone 17 will almost certainly cost far more to develop than the iPhone X or the iPhone 5 did, and the iPhone 5 cost far more than the 3G, etc.
You can see here: https://www.reddit.com/r/dataisbeautiful/comments/16dr1kb/oc...
that newer iPhones are generally cheaper when adjusted for inflation. Those are sale prices, but assuming margins stay roughly the same, sale price should track manufacturing cost. And from what I remember of Apple's earnings, their margins increased over time, which means the newer phones are even cheaper to make. Which kind of makes sense.
Recent iPhones use Apple's own custom silicon for a number of components and are generally vastly more complex. The estimates I've seen for iPhone 1 development range from $150 million to $2.5 billion; even adjusting for inflation, a current iPhone generation costs more to develop than the older versions did.
And it absolutely makes sense for Apple to spend more in total to develop successive generations, because they have less overall product risk and larger scale to recoup.