
387 points reaperducer | 4 comments
jacquesm ◴[] No.45772081[source]
These kinds of deals were very much à la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and spend it again on advertising. The end result was immense revenues without profits.
replies(6): >>45772090 #>>45772213 #>>45772293 #>>45772318 #>>45772433 #>>45774073 #
zemvpferreira ◴[] No.45772318[source]
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.

OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

replies(13): >>45772378 #>>45772392 #>>45772490 #>>45772554 #>>45772661 #>>45772731 #>>45772738 #>>45772759 #>>45773088 #>>45773089 #>>45773096 #>>45773105 #>>45774229 #
api ◴[] No.45772554[source]
The assumption is that they have a large moat.

If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.

This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.

OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.

The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
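The claim that switching providers is "potentially quite easy" can be made concrete. A rough sketch below shows how little has to change when providers expose OpenAI-compatible chat endpoints: only the base URL and model name differ, while the prompt that makes the model "behave the same way" is reused verbatim. The base URLs and model names here are illustrative assumptions, not a statement about any provider's actual endpoints.

```python
# Sketch: swapping LLM providers behind an OpenAI-compatible API.
# Endpoints and model names below are illustrative placeholders.

PROVIDERS = {
    "openai":  {"base_url": "https://api.openai.com/v1",  "model": "gpt-5"},
    "mistral": {"base_url": "https://api.mistral.ai/v1",  "model": "mistral-large"},
    "local":   {"base_url": "http://localhost:8000/v1",   "model": "open-model-70b"},
}

def build_request(provider: str, system_prompt: str, user_msg: str) -> dict:
    """Build a chat-completions payload; only endpoint and model change
    per provider, while the behavior-setting system prompt is reused."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg},
            ],
        },
    }
```

The switching cost is one config entry, which is the point: without a network-effect moat, the integration layer does not lock anyone in.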

replies(2): >>45772632 #>>45772671 #
1. delis-thumbs-7e ◴[] No.45772671[source]
Apple's new M5 can run models over 10B parameters, and if they give next year's Studio enough juice, it could maybe run a 30B local model. How long until you can run a full GPT-5 on your laptop or home server with a few grand's worth of hardware? And what is going to happen to all these GPU farms, since as I understand it they are fairly useless for anything else?
replies(2): >>45773442 #>>45773729 #
2. treis ◴[] No.45773442[source]
Very few people own top-of-the-line Macs, and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.

Even if that weren't true, having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.

3. api ◴[] No.45773729[source]
Quantized, a top-end Mac can run models up to about 200B (with 128GiB of unified RAM). They'll run a little slow but they're usable.

This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
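A quick back-of-the-envelope check on why ~200B is about the ceiling for 128GiB of unified RAM: at 4-bit quantization, weights alone take params × 0.5 bytes, plus some working memory on top. The 20% overhead factor below is a rough assumption, not a measured figure.

```python
def quantized_model_gib(params_billions: float, bits_per_weight: float,
                        overhead: float = 1.2) -> float:
    """Rough GiB needed to hold a quantized model's weights in memory.
    The 1.2x overhead factor (KV cache, activations) is an assumption."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 200B model at 4-bit: ~112 GiB -- tight but plausible in 128 GiB.
# The same model at full 16-bit precision would need ~447 GiB.
```

This is why quantization, not raw RAM growth alone, is doing most of the work in making these models fit on a desktop.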

replies(1): >>45781528 #
4. delis-thumbs-7e ◴[] No.45781528[source]
They are pretty cheap compared to the _actual_ costs of GPU farms, or of buying an A100, though. Of course not everybody will buy these machines, but not everybody really needs high-powered LLMs either. Probably a 13B Mistral can be trained to do your homework and pretend to be your girlfriend.