
152 points by isoprophlex | 1 comment
daft_pink:
I think this is a minor speed bump. VCs believe the cost of inference will decrease over time, so this is a gold rush to grab market share while inference costs decline.

I don't think they got it exactly right: market share and usage grew faster than inference costs dropped. But inference costs will clearly fall, and these companies will eventually be very profitable.

The reality is that startups like this assume Moore's law will drop the cost over time, and they arrange their business around where they expect costs to be, not where costs currently are.

x0x0:
> inference costs will clearly drop

They haven't, though, on two fronts. First, the state-of-the-art (SOTA) models have been fairly constantly priced, and everyone wants the SOTA models. Likely the only way costs drop is that the models get so good that people say, "hey, I'm fine with a less useful answer (which is still good enough)," and right now that seems like a bad bet.

Second, we use a lot more tokens now. No more pasting Q&A into a site; now people upload chunks of their codebases and would love to push more. More context, more thinking, more everything.

infecto:
Anecdote of one: costs for OpenAI on a per-token basis have absolutely dropped, and that holds even accounting for new SOTA models over time. I think by now we can all agree that inference prices from providers are largely at or above breakeven. So more tokens is a good problem to have.
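
The disagreement above comes down to which grows faster: per-token price cuts or token volume. A minimal sketch of that arithmetic, with entirely hypothetical numbers chosen only to make the trade-off concrete (none of these figures come from the thread or from any provider's price list):

```python
# Sketch of the thread's pricing argument: even if the per-token price
# drops sharply, total spend can still grow if token volume grows faster.
# All numbers are illustrative assumptions, not real pricing data.

def total_spend(price_per_million_tokens: float, tokens_millions: float) -> float:
    """Total spend in dollars for a given per-token price and token volume."""
    return price_per_million_tokens * tokens_millions

# Hypothetical "early" period: $30 per 1M tokens, 10M tokens of usage.
early = total_spend(30.0, 10)   # $300

# Hypothetical "later" period: price falls 10x, but usage grows 50x
# (bigger contexts, codebase uploads, reasoning tokens).
later = total_spend(3.0, 500)   # $1500

print(f"early spend: ${early:.0f}, later spend: ${later:.0f}")
print(f"spend grew {later / early:.1f}x despite a 10x price cut")
```

Under these assumptions a 10x price cut still yields 5x more total spend, which is consistent with both positions: per-token prices fall (infecto's point) while aggregate token consumption, and thus total cost, keeps rising (x0x0's point).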