507 points by martinald | 1 comment
_sword ◴[] No.45055003[source]
I've done the modeling on this a few times and I always land in a place where inference can run at 50%+ gross margins, depending mostly on GPU depreciation and how good the host is at optimizing utilization. The challenge for the margins is whether you count model training costs as part of the calculation. If training isn't capitalized and amortized, margins are great. If it is, and those costs have to be absorbed... yikes
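
A back-of-the-envelope version of that model, in TypeScript (every constant below is a made-up assumption for illustration, not data from any provider):

    // Toy inference-margin model. All numbers are hypothetical assumptions.
    const gpuCostPerHour = 2.0;         // assumed all-in $/GPU-hour (depreciation, power, hosting)
    const tokensPerGpuHour = 1_000_000; // assumed throughput at well-optimized utilization
    const pricePerMillionTokens = 5.0;  // assumed blended revenue per million tokens

    const revenuePerGpuHour = (tokensPerGpuHour / 1_000_000) * pricePerMillionTokens;
    const servingMargin = (revenuePerGpuHour - gpuCostPerHour) / revenuePerGpuHour;

    // Same calculation with a training run amortized over the model's serving life.
    const trainingCost = 100_000_000;       // assumed cost of one training run
    const servingLifeGpuHours = 20_000_000; // assumed GPU-hours served before retirement
    const trainingPerGpuHour = trainingCost / servingLifeGpuHours;
    const fullMargin =
      (revenuePerGpuHour - gpuCostPerHour - trainingPerGpuHour) / revenuePerGpuHour;

    console.log(`serving-only gross margin: ${(servingMargin * 100).toFixed(0)}%`); // 60%
    console.log(`with training amortized:   ${(fullMargin * 100).toFixed(0)}%`);    // -40%

With these toy numbers, serving alone clears the 50%+ bar, but amortizing the training run over the model's serving life pushes the margin negative, which is exactly the "yikes" case.
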
replies(7): >>45055030 #>>45055275 #>>45055536 #>>45055820 #>>45055835 #>>45056242 #>>45056523 #
BlindEyeHalo ◴[] No.45055275[source]
Why wouldn't you factor in training? It is not like you can train once and then have the model run for years. You need to constantly improve to keep up with the competition. The lifespan of a model is just a few months at this point.
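
To put numbers on that (training cost and serving volume are hypothetical assumptions): the shorter the model's serving life, the larger the amortization hit per serving hour.

    // Amortization hit per GPU-hour as a function of model lifespan. Numbers are made up.
    const trainingCost = 100_000_000;          // assumed cost of one training run
    const servingGpuHoursPerMonth = 2_000_000; // assumed fleet-wide serving volume

    for (const months of [3, 6, 12, 24]) {
      const hit = trainingCost / (months * servingGpuHoursPerMonth);
      console.log(`${months} mo lifespan -> $${hit.toFixed(2)} of training cost per GPU-hour`);
    }

If revenue per GPU-hour is in the single digits of dollars, a lifespan of a few months means the amortized training cost alone can exceed serving revenue.
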
replies(7): >>45055303 #>>45055495 #>>45055624 #>>45055631 #>>45056110 #>>45056973 #>>45057517 #
vonneumannstan ◴[] No.45055624[source]
I suspect we've already reached the point, with models at the GPT-5 tier, where the average person will no longer notice improvements; a model like that can be lightly updated at long intervals and indeed run for years. Meanwhile research-grade models will still need to be trained at massive cost to improve performance on relatively short timescales.
replies(4): >>45055819 #>>45056941 #>>45059324 #>>45059712 #
ewoodrich ◴[] No.45059712[source]
I may not qualify as an "average user", but I shudder to imagine being stuck doing development on a model that's a year or more stale, given my experiences using a newer framework than what was available during training.

Passing in docs usually helps, but I've had some incredibly aggravating experiences where a model just absolutely cannot accept that its "mental model" is incorrect and that it needs to forget the tens of thousands of lines of out-of-date example code it ingested during training. IMO it's an under-discussed limitation on the current effectiveness of LLM-assisted development, thanks to the training arms race.
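
Concretely, the doc-passing workaround looks something like this (a minimal sketch; callModel and the docs path are hypothetical placeholders, not any specific SDK):

    // Pin current docs into the prompt so they outweigh stale training data.
    // `callModel` and the docs path are hypothetical placeholders.
    import { readFileSync } from "node:fs";

    type ModelCall = (prompt: string) => Promise<string>;

    async function askWithCurrentDocs(callModel: ModelCall, question: string): Promise<string> {
      const currentDocs = readFileSync("docs/v2-api-reference.md", "utf8"); // hypothetical path

      const prompt = [
        "The library below shipped a new major version; the old API is gone.",
        "Answer ONLY against the reference that follows. If your training data",
        "disagrees with this reference, the reference wins.",
        "--- CURRENT API REFERENCE ---",
        currentDocs,
        "--- QUESTION ---",
        question,
      ].join("\n");

      return callModel(prompt);
    }

Even then the model can regress mid-conversation to the pretrained version of the API, so treat it as a mitigation, not a fix.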

I recently had to fight Gemini to accept that a library (a Google-developed AI library for JS, somewhat ironically) had just released a major version update with a lot of API changes that invalidated 99% of the docs and example code online. And boy was there a lot of old code floating around, thanks to the vast amounts of SEO blog spam for anything AI-adjacent.

replies(1): >>45064786 #
vonneumannstan ◴[] No.45064786[source]
>Passing in docs usually helps, but I've had some incredibly aggravating experiences where a model just absolutely cannot accept that its "mental model" is incorrect and that it needs to forget the tens of thousands of lines of out-of-date example code it ingested during training. IMO it's an under-discussed limitation on the current effectiveness of LLM-assisted development, thanks to the training arms race.

I think you overestimate the amount of code turnover in 6-12 months...