
507 points by martinald | 2 comments
_sword ◴[] No.45055003[source]
I've done the modeling on this a few times and I always get to a place where inference can run at 50%+ gross margins, depending mostly on GPU depreciation and how good the host is at optimizing utilization. The challenge for the margins is whether or not you consider model training costs as part of the calculation. If model training isn't capitalized + amortized, margins are great. If they are amortized and need to be considered... yikes
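A minimal sketch of what that sort of modeling might look like — the revenue per GPU-hour, hardware cost, utilization, fleet size, and training cost below are all made-up numbers, not anyone's real figures:

    # Minimal gross-margin sketch for LLM inference (all numbers hypothetical).
    # gross margin = (revenue - cost of revenue) / revenue

    def inference_gross_margin(revenue_per_gpu_hour, gpu_cost_per_hour,
                               utilization, amortized_training_per_hour=0.0):
        # Idle GPU-hours still cost money, so hardware cost scales with 1/utilization.
        cost = gpu_cost_per_hour / utilization + amortized_training_per_hour
        return (revenue_per_gpu_hour - cost) / revenue_per_gpu_hour

    # Inference only: margins look healthy.
    print(inference_gross_margin(8.00, 2.50, 0.70))   # ~0.55

    # Amortize a $100M training run over a 6-month serving life on a
    # 10,000-GPU fleet (~43.8M GPU-hours): about $2.28 extra per GPU-hour.
    training_per_hour = 100e6 / (10_000 * 24 * 365 / 2)
    print(inference_gross_margin(8.00, 2.50, 0.70, training_per_hour))  # ~0.27

With these invented numbers the margin drops from roughly 55% to under 30% once training is amortized, which is the "yikes" case.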
replies(7): >>45055030 #>>45055275 #>>45055536 #>>45055820 #>>45055835 #>>45056242 #>>45056523 #
BlindEyeHalo ◴[] No.45055275[source]
Why wouldn't you factor in training? It is not like you can train once and then have the model run for years. You need to constantly improve to keep up with the competition. The lifespan of a model is just a few months at this point.
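To put rough numbers on that: the shorter the model's serving life, the more training cost each GPU-hour of inference has to carry. A sketch, again with assumed fleet size and training cost:

    # Hypothetical: training cost carried by each GPU-hour of inference,
    # for a $100M training run served on a 10,000-GPU fleet.
    TRAINING_COST = 100e6                         # one-off training run, USD (assumed)
    FLEET_GPU_HOURS_PER_MONTH = 10_000 * 24 * 30

    for months in (24, 12, 6, 3):
        per_hour = TRAINING_COST / (FLEET_GPU_HOURS_PER_MONTH * months)
        print(f"{months:>2}-month lifespan -> ${per_hour:.2f} per GPU-hour")

A 24-month life works out to roughly $0.58 per GPU-hour, a 3-month life to about $4.63 — the amortization penalty scales inversely with how long the model stays in service.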
replies(7): >>45055303 #>>45055495 #>>45055624 #>>45055631 #>>45056110 #>>45056973 #>>45057517 #
vonneumannstan ◴[] No.45055624[source]
I suspect we've already reached the point, with models at the GPT-5 tier, where the average person no longer notices improvements; such a model can be improved incrementally at long intervals and indeed run for years. Meanwhile, research-grade models will still need to be trained at massive cost to improve performance on relatively short timescales.
replies(4): >>45055819 #>>45056941 #>>45059324 #>>45059712 #
1. felipeerias ◴[] No.45059324[source]
The "Pro" variant of GTP-5 is probably the best model around and most people are not even aware that it exists. One reason is that as models get more capable, they also get a lot more expensive to run so this "Pro" is only available at the $200/month pro plan.

At the same time, more capable models are also a lot more expensive to train.

The key point is that the relationships between all these quantities are not linear, so the economics of the whole thing start to look wobbly.

Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.
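A toy version of that squeeze, with entirely invented numbers for per-generation training cost, price, and paying users:

    # Toy illustration (all numbers invented): each generation is assumed to
    # cost ~10x more to train, while the higher price shrinks the paying market.
    generations = [
        # (name, training_cost_usd, monthly_price_usd, paying_users)
        ("gen N",   1e8,    20, 10_000_000),
        ("gen N+1", 1e9,    60,  3_000_000),
        ("gen N+2", 1e10,  200,    500_000),
    ]

    for name, train_cost, price, users in generations:
        yearly_revenue = price * users * 12
        print(f"{name}: ${train_cost:,.0f} to train vs ${yearly_revenue:,.0f}/yr revenue "
              f"({yearly_revenue / train_cost:.1f}x)")

Under these assumptions the revenue-to-training-cost ratio falls from 24x to about 0.1x across three generations, which is the point where the training run no longer pays for itself.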

replies(1): >>45064826 #
2. vonneumannstan ◴[] No.45064826[source]
>Soon we will probably arrive at a point where these huge training runs must stop, because the performance improvement does not match the huge cost increase, and because the resulting model would be so expensive to run that the market for it would be too small.

I think we're a lot more likely to hit the limits of power and compute available for training bigger models before we reach the point where improvement stops.