
197 points by baylearn | 2 comments
JunkDNA No.44473050
I keep seeing this charge that AI companies have an “Uber problem,” meaning the business is heavily subsidized by VC money. Is there any analysis that explains how this breaks down (training vs. inference, and how current pricing compares)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we?
replies(2): >>44473084 >>44474853
1. fragmede No.44474853
It depends on how far behind you believe the weights-available LLMs are. Say I can buy $10k worth of hardware and run a sufficiently equivalent LLM at home for the cost of that plus electricity. Amortize the hardware over, say, 5 years and that's $2k/yr; use it 40 hours a week for 50 weeks (2,000 hours/yr) and the hardware works out to $1/hr. The electrical cost will vary by location, but let's just handwave another $1/hr (which should be high). So roughly $2/hr at home vs. ChatGPT's ~$0.11/hr if you pay $20/month and use it 174 hours per month.
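
If you want to poke at the assumptions, here's the same arithmetic as a little Python script. Every number in it is a guess from the paragraph above, not a measurement:

    # Back-of-envelope home-rig vs. subscription cost; all numbers are guesses.
    HARDWARE_COST = 10_000        # USD, one-time purchase
    AMORTIZE_YEARS = 5
    HOURS_PER_WEEK = 40
    WEEKS_PER_YEAR = 50
    ELECTRICITY_PER_HOUR = 1.00   # USD/hr, deliberately on the high side

    hours_per_year = HOURS_PER_WEEK * WEEKS_PER_YEAR                     # 2000
    hardware_per_hour = HARDWARE_COST / AMORTIZE_YEARS / hours_per_year  # $1.00
    home_per_hour = hardware_per_hour + ELECTRICITY_PER_HOUR             # $2.00

    SUBSCRIPTION_PER_MONTH = 20   # USD, e.g. a ChatGPT Plus subscription
    HOURS_PER_MONTH = 174         # ~40 hr/wk averaged over a month
    subscription_per_hour = SUBSCRIPTION_PER_MONTH / HOURS_PER_MONTH     # ~$0.11

    print(f"home rig:     ${home_per_hour:.2f}/hr")
    print(f"subscription: ${subscription_per_hour:.2f}/hr")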

Feel free to challenge these numbers, but they're a starting place. What's not accounted for is the cost of training (compute time, but also salaries and everything else), which needs to be amortized over the length of time a model is in service, so ChatGPT's true costs rise significantly. On the other hand, they have the advantage that the same hardware is shared across many users.
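
To get a feel for how training amortization changes the picture, here's a toy version. The training cost, model lifetime, and user count are pure placeholders, not anyone's real figures:

    # Toy training-cost amortization -- every number here is made up.
    TRAINING_COST = 100_000_000    # USD to train one model (placeholder)
    MODEL_LIFETIME_MONTHS = 12     # how long the model is served (placeholder)
    PAYING_USERS = 10_000_000      # subscribers sharing that cost (placeholder)

    per_user_month = TRAINING_COST / MODEL_LIFETIME_MONTHS / PAYING_USERS
    print(f"training overhead: ${per_user_month:.2f}/user/month")
    # ~$0.83/user/month on these guesses: huge in absolute terms, small
    # next to a $20/month subscription because it's spread so widely.

The point is that a staggering one-time cost can still be a rounding error per user once it's shared across enough of them.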

replies(1): >>44475184
2. nbardy No.44475184
These estimates are way off. Concurrent requests are nearly free with the right serving infrastructure: batching many users onto one node means the cost per token on a fully saturated node is somewhere between 1/100 and 1/1000 of the single-user price.
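
For intuition, here's the shape of that batching argument in toy numbers. The throughput figures are illustrative, not benchmarks; real numbers depend on the model, hardware, and batch size:

    # Illustrative batching economics -- throughput numbers are not benchmarks.
    NODE_COST_PER_HOUR = 2.00    # USD/hr, the home-rig figure from upthread
    SOLO_TOK_PER_SEC = 30        # tokens/sec serving a single user
    BATCHED_TOK_PER_SEC = 3_000  # tokens/sec with the node saturated by many users

    def usd_per_million_tokens(tok_per_sec: float) -> float:
        tokens_per_hour = tok_per_sec * 3600
        return NODE_COST_PER_HOUR / tokens_per_hour * 1_000_000

    solo = usd_per_million_tokens(SOLO_TOK_PER_SEC)        # ~$18.50/Mtok
    batched = usd_per_million_tokens(BATCHED_TOK_PER_SEC)  # ~$0.19/Mtok
    print(f"solo:    ${solo:.2f} per million tokens")
    print(f"batched: ${batched:.2f} per million tokens ({solo/batched:.0f}x cheaper)")

A 100x throughput gain at roughly constant node cost is a 100x drop in cost per token, which is where the 1/100-1/1000 range comes from.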