
The AI Investment Boom

(www.apricitas.io)
271 points | m-hodges
hn_throwaway_99 ◴[] No.41896346[source]
Reading this makes me willing to bet that this capital-intensive investment boom will be similar to other enormous capital investment booms in US history, such as the laying of the railroads in the 1800s, the proliferation of car companies in the early 1900s, and the telecom fiber boom in the late 1900s. In all of these cases there was an enormous infrastructure (over)build-out, followed by a crash in which nearly all the companies in the industry ended up in bankruptcy, but then that original infrastructure build-out had huge benefits for the economy and society as the infrastructure was "soaked up" in the subsequent years. E.g. think of all the telecom investment and subsequent bankruptcies in the late 90s/early 00s, and then all the dark fiber that was laid eventually being lit up, enabling the explosion of high-quality multimedia (e.g. Netflix and the like).

I think that will happen here. I think your average investor who's currently paying for all these advanced chips, data centers and energy supplies will walk away sorely disappointed, but this investment will yield huge dividends down the road. Heck, I think the energy investment alone will end up accelerating the switch away from fossil fuels, despite AI often being portrayed as a giant climate warming energy hog (which I'm not really disputing, but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources).

replies(21): >>41896376 #>>41896426 #>>41896447 #>>41896726 #>>41898086 #>>41898206 #>>41898291 #>>41898436 #>>41898540 #>>41899659 #>>41900309 #>>41900633 #>>41903200 #>>41903363 #>>41903416 #>>41903838 #>>41903917 #>>41904566 #>>41905630 #>>41905809 #>>41906189 #
aurareturn ◴[] No.41896447[source]
I'm sure you are right. At some point, the bubble will crash.

The question is when. We could be in the 1995 equivalent of the dotcom boom rather than 1999. If so, we have 4 more years of high growth, and even after the crash the market will still be much bigger in 2029 than in 2024. Cisco was still 4x bigger in 2001 than in 1995.

One thing that is slightly different from past bubbles is that the more compute you have, the smarter and more capable the AI becomes.

One gauge I use to judge whether we are still at the beginning of the boom is this: does Slack sell an LLM chatbot that can give me reliable answers about the business/technical decisions made in chat over the last 2 years? We don't have this yet - most likely because it's still too expensive to run that much inference with such a large context window. We still need a lot more compute and better models.
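To make that gauge concrete, here's a rough sketch of the retrieval-plus-long-context pattern such a feature would need (toy Python; the message store, embed(), and ask_llm() are made-up stand-ins, not any real Slack or OpenAI API). The expense I'm talking about sits in the ask_llm() step once the retrieved history gets large:

    from dataclasses import dataclass

    @dataclass
    class Message:
        channel: str
        author: str
        text: str

    # Stand-in for two years of chat history.
    HISTORY = [
        Message("#platform", "alice", "Decision: we're standardizing on Postgres for new services."),
        Message("#platform", "bob", "Agreed, MySQL stays only for the legacy billing system."),
    ]

    def embed(text):
        # Toy "embedding": a bag of lowercase words. A real system would use a vector model.
        return set(text.lower().split())

    def retrieve(question, history, k=5):
        # Rank messages by word overlap with the question and keep the top k.
        q = embed(question)
        return sorted(history, key=lambda m: -len(q & embed(m.text)))[:k]

    def ask_llm(prompt):
        # Stand-in for the expensive long-context LLM call.
        return f"[model answer based on a {len(prompt)}-char prompt]"

    def answer(question):
        context = "\n".join(f"{m.channel} {m.author}: {m.text}" for m in retrieve(question, HISTORY))
        prompt = f"Chat history:\n{context}\n\nQuestion: {question}\nAnswer, citing the relevant messages."
        return ask_llm(prompt)

    print(answer("What database did we decide on for new services?"))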

Because of the above, I'm in the camp that believes we are actually closer to the beginning of the bubble than to the end.

Another thing I would watch closely is whether LLM scaling laws are breaking down, i.e. whether more compute stops yielding more intelligence at an economical cost. If that happens, I think the bubble will pop. All eyes are on GPT-5-class models for signs.

replies(8): >>41896552 #>>41896790 #>>41898712 #>>41899018 #>>41899201 #>>41903550 #>>41904788 #>>41905320 #
HarHarVeryFunny ◴[] No.41896790[source]
> the more compute you have, the smarter and more capable AI

Well, this is taken on faith by OpenAI et al., but obviously the curve has to flatten at some point, and it appears to already be doing so. OpenAI is now experimenting with scaling inference-time compute (o1), but has said that it takes exponential increases in compute to produce linear gains in performance, so it remains to be seen whether customers find that a worthwhile trade.
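To see what "exponential compute for linear gains" implies, here are some made-up numbers: if each extra point of benchmark score costs ~10x the compute of the previous one, score grows roughly as log(compute) and the marginal point quickly becomes absurdly expensive:

    # Purely illustrative: suppose each additional point of "score" needs 10x the
    # compute of the previous one ("exponential compute, linear gains").
    base_cost = 1.0  # compute units for the first extra point, arbitrary scale
    total = 0.0
    for point in range(1, 6):
        marginal = base_cost * 10 ** (point - 1)
        total += marginal
        print(f"point {point}: marginal compute {marginal:>10,.0f}, cumulative {total:>10,.0f}")

Whether customers will pay for those later points is exactly the open question.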

replies(1): >>41896900 #
aurareturn ◴[] No.41896900[source]
GPT-o1 does demonstrate my point: the more compute you have, the smarter the AI.

If you run chain of thought on an 8B model, it becomes a lot smarter too.
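By chain of thought I just mean prompting the model to write out its intermediate reasoning before answering, i.e. spending extra inference-time compute per question. A toy sketch - generate() is a stand-in for whatever local 8B model you run, not a real API:

    # Toy illustration of chain-of-thought prompting.
    def generate(prompt):
        # Stand-in for a local 8B model's completion call (llama.cpp, vLLM, ...).
        return f"[completion for: {prompt[:60]}...]"

    question = "A train leaves at 3pm and arrives at 6:30pm. How long is the trip?"

    # Direct answer: one short completion.
    direct = generate(f"Q: {question}\nA:")

    # Chain of thought: the model reasons first, which burns extra inference-time
    # compute but usually improves the final answer, especially on small models.
    cot = generate(
        f"Q: {question}\n"
        "Think step by step, then give the final answer on the last line.\nA:"
    )

    print(direct)
    print(cot)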

GPT-o1 isn't GPT-5, though. I think OpenAI will have a chain-of-thought model for GPT-5-class models as well. They're separate from the normal models.

replies(1): >>41896980 #
HarHarVeryFunny ◴[] No.41896980[source]
There is only so much that an approach like o1 can do, but anyway, in terms of AI boom/bust the relevant question is whether this is a viable product. All sorts of consumer products could be improved by making them a lot more expensive, but there are cost/benefit limits to everything.

GPT-5 and Claude 4 will be interesting, assuming these are both pure transformer models (not CoT), as they will be a measure of how much benefit remains to be had from scaling up the training set. I'd expect the gains to show up more on narrow benchmarks than in the overall feel of intelligence (LLM Arena score?) one gets from the model.

replies(1): >>41899221 #
aurareturn ◴[] No.41899221[source]
I think OpenAI has already proven that it's a viable product. Their gross margins must be decent - I doubt they're losing money on every token they serve.
replies(1): >>41899461 #
HarHarVeryFunny ◴[] No.41899461[source]
I don't think they've broken out o1 revenue, but it must be very small at the moment since o1 was only just introduced. Their o1-preview pricing doesn't seem to reflect the exponential compute cost, so perhaps it is not currently priced to be profitable. Overall, across all models and revenue streams, their revenue does exceed inference costs ($4B vs $2B), but they are still projected to lose $5B this year and $14B next year, and not to turn a profit until 2029 (and only then if they've increased revenue by 100x ...).
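Spelling out the arithmetic those figures imply (my own back-of-the-envelope, rough numbers in $B):

    # Reported: ~$4B revenue, ~$2B inference cost, ~$5B projected net loss.
    revenue, inference_cost, net_loss = 4.0, 2.0, 5.0
    gross_margin_on_inference = (revenue - inference_cost) / revenue   # ~50%
    implied_other_costs = revenue + net_loss - inference_cost          # training, research, staff, ...
    print(f"gross margin on inference: {gross_margin_on_inference:.0%}")
    print(f"implied non-inference costs: ~${implied_other_costs:.0f}B")

So serving tokens looks like roughly a 50% gross margin, and it's the implied ~$7B of training/research/staffing spend that produces the loss.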

Training costs are killing them, and it's obviously not sustainable to keep spending more on research and training than they generate in revenue. Training costs are expected to keep growing fast while revenue per token in/out is plummeting - they need massive inference volume to turn this into a profitable business, and they need to pray that it doesn't become a commodity business in which they are not the low-cost producer.

https://x.com/ayooshveda/status/1847352974831489321

https://x.com/Gloraaa_/status/1847872986260341224

replies(1): >>41900349 #
nl ◴[] No.41900349[source]
The thing is that OpenAI can choose to spend less on training at any time.

We've seen this before with Amazon, for example, which made a deliberate effort to avoid profitability by spending as much as possible on infrastructure until revenue grew so large they couldn't spend it all.

Being highly cash-flow positive, with strategic investment as the main cost, seems like a good position to be in.

replies(1): >>41903153 #
HarHarVeryFunny ◴[] No.41903153[source]
I don't think you can compare Amazon and OpenAI on the fundamentals of the two businesses. It's the difference in fundamentals that made Amazon a buy at absurd P/Es (along with some degree of luck in AWS becoming so profitable), while OpenAI IMO looks like a much dodgier value proposition.

Amazon were reinvesting and building scale, breadth and efficiency that has become an effective moat. How do you compete with Amazon Prime free delivery without your own delivery fleet, and how do you build that without the scale of operations?

OpenAI doesn't appear to have any moat, doesn't own its own datacenters, and the datacenters it uses run on expensive NVIDIA chips. Compare that to Google with its own datacenters and TPUs, Amazon with its own datacenters and chips (Graviton), and Meta with its own datacenters (which provide value to its core business) and chips - Meta is even giving the product away for free despite spending billions on it ... If this turns into the commodity business it appears it may (all frontier models converging in performance), then OpenAI would seem to be in trouble.

Of course OpenAI could stop training at any time, but to the extent that there is further performance to be had from more scaling and training, they will be left behind by the likes of Meta, who have a thriving core business to fund continued investment and are not dependent on revenue directly from AI.