507 points martinald | 1 comments

_sword ◴[] No.45055003[source]
I've done the modeling on this a few times and I always get to a place where inference can run at 50%+ gross margins, depending mostly on GPU depreciation and how good the host is at optimizing utilization. The challenge for the margins is whether or not you consider model training costs as part of the calculation. If model training isn't capitalized + amortized, margins are great. If they are amortized and need to be considered... yikes
replies(7): >>45055030 #>>45055275 #>>45055536 #>>45055820 #>>45055835 #>>45056242 #>>45056523 #
BlindEyeHalo ◴[] No.45055275[source]
Why wouldn't you factor in training? It is not like you can train once and then have the model run for years. You need to constantly improve to keep up with the competition. The lifespan of a model is just a few months at this point.
replies(7): >>45055303 #>>45055495 #>>45055624 #>>45055631 #>>45056110 #>>45056973 #>>45057517 #
jacurtis ◴[] No.45057517[source]
In a recent episode of the Hard Fork podcast, the hosts discussed an on-the-record conversation they had with Sam Altman of OpenAI. They asked him about profitability, and he claimed that they are losing money mostly because of the cost of training. But as the model advances, they will train less and less. Once you take training out of the equation, he claimed, they are profitable on the cost of serving the trained foundation models to users at current prices.

Now, when he said that, his CFO corrected him, saying they aren't profitable but that "it's close".

Take that with a grain of salt, but that's a conversation with one of the big AI companies from only a few weeks ago. I suspect it's broadly accurate that current pricing covers costs if you ignore training. But training is very expensive, and it's the reason most AI companies are losing money right now.

replies(4): >>45057639 #>>45057962 #>>45060581 #>>45061058 #
dgfitz ◴[] No.45057962[source]
> But as the model advances, they will train less and less.

They sure have a lot of training to do between now and whenever that happens. Rolling back from 5 to whatever came before it is their own admission of that.

replies(1): >>45058471 #
mindwok ◴[] No.45058471[source]
I think that actually proves the opposite. People wanted an old model, not a new one, indicating that for that user base they could have just... not trained a new model.
replies(3): >>45058933 #>>45060288 #>>45060618 #
PeterStuer ◴[] No.45060618{3}[source]
That is true for a very specific class of use cases. If they turned up the sycophancy on the new model, those people would not be calling for the old one.

The reasoning here is off. It is like saying new game development is nearly over because some people keep playing old games.

My feeling: we've barely scratched the surface of the mileage we can get out of even today's frontier models, and we are just at the beginning of a huge runway for improved models and architectures. Watch this space.