
507 points martinald | 2 comments
_sword ◴[] No.45055003[source]
I've done the modeling on this a few times and I always get to a place where inference can run at 50%+ gross margins, depending mostly on GPU depreciation and how good the host is at optimizing utilization. The challenge for the margins is whether or not you consider model training costs as part of the calculation. If model training isn't capitalized + amortized, margins are great. If they are amortized and need to be considered... yikes
replies(7): >>45055030 #>>45055275 #>>45055536 #>>45055820 #>>45055835 #>>45056242 #>>45056523 #
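The parent comment's point can be sketched as a back-of-envelope model. All figures below are illustrative assumptions (not numbers from the thread), and the `gross_margin` helper is hypothetical; the point is only how amortized training cost flips the sign:

```python
# Hypothetical back-of-envelope model of inference gross margin, with and
# without amortizing model training cost. All numbers are assumptions for
# illustration only.

def gross_margin(revenue, serving_cost, training_cost=0.0):
    """Gross margin as a fraction of revenue."""
    cost = serving_cost + training_cost
    return (revenue - cost) / revenue

# Assumed annual figures (USD millions) for a hypothetical provider.
revenue = 1000.0
gpu_depreciation = 300.0     # GPU capex amortized over its useful life
power_and_hosting = 150.0    # electricity, datacenter, networking
serving_cost = gpu_depreciation + power_and_hosting

# Assume a frontier-model training run amortized so that roughly
# $1B of training cost lands on this year's books.
amortized_training = 1000.0

print(f"margin excl. training: {gross_margin(revenue, serving_cost):.0%}")
print(f"margin incl. training: {gross_margin(revenue, serving_cost, amortized_training):.0%}")
```

Under these made-up inputs, inference alone clears 50%+ gross margin, while folding in amortized training pushes the margin deeply negative, which is the "yikes" case.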
BlindEyeHalo ◴[] No.45055275[source]
Why wouldn't you factor in training? It is not like you can train once and then have the model run for years. You need to constantly improve to keep up with the competition. The lifespan of a model is just a few months at this point.
replies(7): >>45055303 #>>45055495 #>>45055624 #>>45055631 #>>45056110 #>>45056973 #>>45057517 #
jacurtis ◴[] No.45057517[source]
In a recent episode of Hard Fork podcast, the hosts discussed an on-the-record conversation they had with Sam Altman from OpenAI. They asked him about profitability and he claimed that they are losing money mostly because of the cost of training. But as the model advances, they will train less and less. Once you take training out of the equation he claimed they were profitable based on the cost of serving the trained foundation models to users at current prices.

Now, when he said that, his CFO corrected him: they aren't profitable, but "it's close".

Take that with a grain of salt, but that's a conversation with one of the big AI companies from only a few weeks ago. I suspect it is pretty accurate that current pricing is reasonable if you ignore training. But training is very expensive, and it is the reason most AI companies are losing money right now.

replies(4): >>45057639 #>>45057962 #>>45060581 #>>45061058 #
1. anothernewdude ◴[] No.45060581[source]
Unfortunately for those companies, their APIs are a commodity, and are very fungible. So they'll need to keep training or be replaced with whichever competitor will. This is an exercise in attrition.
replies(1): >>45062777 #
2. LadyCailin ◴[] No.45062777[source]
I wonder if we’re reaching a point of diminishing returns with training, at least when it comes to just scaling the data set. There’s a finite amount of information that can reasonably be obtained to train on, and I think we’re already at a sizable chunk of it, not to mention the cost of naively scaling up. My guess is that the ultimate winner will be the one that figures out how to improve without massive training costs, through better algorithms, or maybe even just better hardware (e.g. neuristors). We know that, in the worst case, it is possible to build something with human-level intelligence that runs on about 20 watts, is about the size of a human head, and only needs to ingest a small slice of all available information. And training it should only take about 3.5 MWh total, on the same hardware that runs the model.