
The AI Investment Boom

(www.apricitas.io)
271 points by m-hodges | 46 comments
hn_throwaway_99 ◴[] No.41896346[source]
Reading this makes me willing to bet that this capital intensive investment boom will be similar to other enormous capital investment booms in US history, such as the laying of the railroads in the 1800s, the proliferation of car companies in the early 1900s, and the telecom fiber boom in the late 1900s. In all of these cases there was an enormous infrastructure (over) build out, followed by a crash where nearly all the companies in the industry ended up in bankruptcy, but then that original infrastructure build out had huge benefits for the economy and society as that infrastructure was "soaked up" in the subsequent years. E.g. think of all the telecom investment and subsequent bankruptcies in the late 90s/early 00s, but then all that dark fiber that was laid was eventually lit up and allowed for the explosion of high quality multimedia growth (e.g. Netflix and the like).

I think that will happen here. I think your average investor who's currently paying for all these advanced chips, data centers and energy supplies will walk away sorely disappointed, but this investment will yield huge dividends down the road. Heck, I think the energy investment alone will end up accelerating the switch away from fossil fuels, despite AI often being portrayed as a giant climate warming energy hog (which I'm not really disputing, but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources).

replies(21): >>41896376 #>>41896426 #>>41896447 #>>41896726 #>>41898086 #>>41898206 #>>41898291 #>>41898436 #>>41898540 #>>41899659 #>>41900309 #>>41900633 #>>41903200 #>>41903363 #>>41903416 #>>41903838 #>>41903917 #>>41904566 #>>41905630 #>>41905809 #>>41906189 #
aurareturn ◴[] No.41896447[source]
I'm sure you are right. At some point, the bubble will crash.

The question is when the bubble will crash. We could be at the 1995 equivalent of the dotcom boom and not 1999. If so, we have 4 more years of high growth, and even after the crash, the market will still be much bigger in 2029 than in 2024. Cisco was still 4x bigger in 2001 than in 1995.

One thing that is slightly different from past bubbles is that the more compute you have, the smarter and more capable the AI becomes.

One gauge I use to determine whether we are still at the beginning of the boom is this: Does Slack sell an LLM chatbot solution that can give me reliable answers about business/technical decisions made over the last 2 years in chat? We don't have this yet - most likely because it's still too expensive to do this much inference with such a large context window. We still need a lot more compute and better models.

Because of the above, I'm in the camp that believes we are actually closer to the beginning of the bubble than to the end.

Another thing I would watch closely for signs of the bubble popping is whether LLM scaling laws are quickly breaking down, i.e. whether more compute no longer yields more intelligence in an economical way. If so, I think the bubble would pop. All eyes are on GPT-5-class models for signs.

replies(8): >>41896552 #>>41896790 #>>41898712 #>>41899018 #>>41899201 #>>41903550 #>>41904788 #>>41905320 #
1. vladgur ◴[] No.41896552[source]
Re: Slack chat:

Glean.com does it for the enterprise I work at: It consumes all of our knowledge sources including Slack, Google docs, wiki, source code and provides answers to complex specific questions in a way that’s downright magical.

I was converted into a believer when I described an issue to it, with pointers to a source file in our online git repo, and it pointed me to another repository, one my team did not own, that controlled DNS configs we were not aware of. Those configs were the reason our code did not behave as we expected.

replies(4): >>41896575 #>>41896658 #>>41899040 #>>41901466 #
2. aurareturn ◴[] No.41896575[source]
Thanks. I didn't know that existed. But does it scale? Would it still work for large companies with many millions of Slack messages?

I suppose one reason Slack doesn't have a solution yet is that they're having a hard time getting it to work for large companies.

replies(2): >>41896647 #>>41896714 #
3. ◴[] No.41896647[source]
4. _huayra_ ◴[] No.41896658[source]
This is the main "killer feature" I've personally experienced from GPT things: a much better contextual "search engine-ish" tool for combing through and correlating different internal data sources (slack, wiki, jira, github branches, etc).

AI code assistants have been a net neutral for me (they get enough idioms in C++ slightly incorrect that I have to spend a lot of time just reading the generated code thoroughly), but being able to say "tell me what the timeline for feature X is" and have it comb through a bunch of internal docs / tickets / git commit messages, etc, and give me a coherent answer with links is amazing.

replies(3): >>41896682 #>>41898324 #>>41905687 #
5. aurareturn ◴[] No.41896682[source]
This is partly why I believe OS makers, Apple, Microsoft, Google, have a huge advantage in the future when it comes to LLMs.

They control the OS so they can combine and feed all your digital information to an LLM in a seamless way. However, in the very long term, I think their advantage will go away because at some point, LLMs could get so good that you don't need an OS like iOS anymore. An LLM could simply become standalone - and function without a traditional OS.

Therefore, I think the advantage for iOS, Android, and Windows will increase in the next few years, but diminish after that.

replies(3): >>41898863 #>>41902626 #>>41904320 #
6. hn_throwaway_99 ◴[] No.41896714[source]
Yeah, Glean does this and there are a bunch of other competitors that do it as well.

I think you may be confused about the length of the context window. These tools don't pull all of your Slack history into the context window. They use a RAG approach to index all of your content into a vector DB, then when you make a query only the relevant document snippets are pulled into the context window. It's similar for example to how Cursor implements repository-wide AI queries.
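The RAG flow described above can be sketched in a few lines. This is a toy illustration, not Glean's or Cursor's actual implementation: `embed` is a bag-of-words stand-in for a real embedding model, and an in-memory list stands in for the vector DB.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorIndex:
    """Stands in for a vector DB: stores (embedding, snippet) pairs."""
    def __init__(self):
        self.rows = []

    def add(self, snippet):
        self.rows.append((embed(snippet), snippet))

    def top_k(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[0]), reverse=True)
        return [snippet for _, snippet in ranked[:k]]

# Index the whole message history once...
index = ToyVectorIndex()
for msg in [
    "deploy failed because the DNS config repo was out of date",
    "lunch is at noon on fridays",
    "the DNS configs live in the infra-dns repository",
]:
    index.add(msg)

# ...then at query time, only the relevant snippets enter the context window.
context = index.top_k("where are the DNS configs", k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The point is that the context window only ever holds the top-k retrieved snippets, so the indexed corpus can be millions of messages.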

replies(1): >>41896734 #
7. aurareturn ◴[] No.41896734{3}[source]
I'm aware that one can't feed millions of messages into an LLM all at once. The only way to do this now is a RAG approach. But RAG has trade-offs and can miss crucial information. I think the context window still matters a lot: the bigger the window, the more information you can feed in, and the quality of the answer should increase.

The point I'm trying to make is that increasing the context window will require more compute. Hence, we could still be just at the beginning of the compute/AI boom.
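A rough way to see why bigger context windows demand more compute: self-attention cost grows roughly quadratically with context length. The model dimensions below are illustrative assumptions, not any particular model's real numbers.

```python
def attention_flops(context_len, d_model=4096, n_layers=32):
    # Per layer, the QK^T matmul and the attention-times-V matmul each cost
    # roughly context_len^2 * d_model multiply-adds, so attention compute
    # scales with the square of the context length.
    per_layer = 2 * context_len**2 * d_model
    return n_layers * per_layer

short = attention_flops(8_000)        # 8K-token context
long = attention_flops(1_000_000)     # 1M-token context
print(f"1M-token context costs ~{long / short:,.0f}x the attention FLOPs of 8K")
```

A 125x increase in context length means roughly a 15,625x increase in attention compute under this sketch, which is the commenter's point about compute demand.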

replies(1): >>41898924 #
8. aaronblohowiak ◴[] No.41898324[source]
>they get enough idioms in C++ slightly incorrect

this is part of why I stay in python when doing ai-assisted programming; there's so much training data out there for python, and I _generally_ don't care if it's slightly off-idiom, it's still probably fine.

replies(1): >>41900097 #
9. thwarted ◴[] No.41898863{3}[source]
An LLM is an application that runs on an operating system like any other application. That the vendor of the operating system has tied it to the operating system is purely a marketing/force-it-onto-your-device/force-it-in-front-of-your-face play. It's forced bundling, just like Microsoft did with Internet Explorer 20 years ago.
replies(1): >>41899134 #
10. reissbaker ◴[] No.41898924{4}[source]
We might be even earlier — the 90s was a famous boom with a fast bust, but to me this feels closer to the dawn of the personal computer in the late 70s and early 80s: we can automate things now that were impossible to automate before. We might have a long time before seeing diminishing returns.
11. mvdtnz ◴[] No.41899040[source]
My workplace uses Glean, and since it was connected to Slack it has become significantly worse. It routinely gives incorrect or VERY incomplete information, misattributes work to developers who may have casually mentioned a project at some point, and, worst of all, presents jokes or sarcastic responses as fact.

Not only is it an extremely poor source of information, it has ruined the company's Slack culture as people are no longer willing to (for lack of a better term) shitpost knowing that their goofy sarcasm will now be presented to Glean users as fact.

replies(2): >>41899457 #>>41906216 #
12. aurareturn ◴[] No.41899134{4}[source]
I predict that OpenAI will try to circumvent iOS and Android by making their own device. I think it will be similar to Rabbit R1, but not a scam, and a lot more capable.

They recently hired Jony Ive on a project - it could be this.

I think it'll be a long term goal - maybe in 3-4 years, a device similar to the Rabbit R1 would be viable. It's far too early right now.

replies(5): >>41900088 #>>41900923 #>>41900926 #>>41902275 #>>41903291 #
13. dcsan ◴[] No.41899457[source]
Maybe have some shitposting channels that are off-limits to Glean?
14. marcus_holmes ◴[] No.41900088{5}[source]
Even if this is true (and I'm not saying it's not), they probably won't create their own OS. They'd be smarter to do what Apple did and clone a BSD (or similar) rather than start afresh.
replies(2): >>41901274 #>>41903781 #
15. ryandrake ◴[] No.41900097{3}[source]
Yea, I was thumbs-down on ai-assisted programming because when I tested it out, I tried it by adding things to my existing C and C++ projects, and its suggestions were... kind of wild. Then, a few months later I gave it another chance when I was writing some Python and was impressed. Finally, I used it on a new-from-blank-text-file Rust project and was pretty much blown away.
replies(4): >>41900253 #>>41900255 #>>41900878 #>>41901107 #
16. ffujdefvjg ◴[] No.41900253{4}[source]
As someone who doesn't generally program, it was pretty good at getting me an init.lua set up for nvim with a bunch of plugins and some functions that would have taken me ages to do by hand. That said...it still took a day or two of working with it and troubleshooting everything, and while it's been reliable so far, I worry that it's not exactly idiomatic. I don't know enough to really say.

What it's really good at is taking my description of something and pointing me in the right direction to do my own research.

(two things that helped me with getting decent code were to describe the problem and desired solution, followed by a "Does that make sense?". This seems to get it to restate the problem itself and produce better solutions. The other thing was to copy the output into a fresh session, ask for a description of what the code does and what improvements could be made)
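The two-pass workflow in that parenthetical can be expressed as a small helper. `ask_llm` is a hypothetical stand-in for whatever chat API or UI you use; the function names and prompts here are illustrative, not a real library.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError

def generate_then_review(problem: str, desired_solution: str, llm=ask_llm):
    # Pass 1: describe the problem and desired solution, then ask
    # "Does that make sense?" so the model restates the problem
    # before producing code.
    draft = llm(
        f"Problem: {problem}\n"
        f"Desired solution: {desired_solution}\n"
        "Does that make sense? If so, restate the problem and write the code."
    )
    # Pass 2: a fresh session (no prior context) reviews the draft.
    review = llm(
        "Describe what this code does and what improvements could be made:\n"
        + draft
    )
    return draft, review
```

The second pass deliberately starts from a clean session, matching the commenter's trick of pasting the output into a fresh conversation.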

replies(2): >>41900331 #>>41900420 #
17. rayxi271828 ◴[] No.41900255{4}[source]
Wouldn't AI be worse at Rust than at C++ given the amount of code available in the respective languages?
replies(1): >>41900497 #
18. skydhash ◴[] No.41900331{5}[source]
Not saying that it’s a better way, but I started with vim by copying someone's conf (on GitHub), removing all the extraneous stuff, then slowly familiarizing myself with the rest. After that it was a matter of reading the docs when I wanted some configuration. I believe the first part is faster than dealing with an LLM, especially with unfamiliar software.
replies(1): >>41900535 #
19. komali2 ◴[] No.41900420{5}[source]
The downside of this nvim solution is the same downside as both pasting big blobs of ai code into a repo, and, pasting big vim configs you find online into your vimrc: inability to explain the pasted code.

When you need something fast for whatever reason, sure. But later, when you want to tweak or add something, you'll have to finally sit down and learn basically the whole thing, or at least a major part of it, to do so anyway. Imo it's better to do that from the start, but that's not always practical.

replies(1): >>41901480 #
20. reverius42 ◴[] No.41900497{5}[source]
Maybe this is a case where more training data isn’t better. There is probably a lot of bad/old C++ out there in addition to new/modern C++, compared to Rust which is relatively all modern.
replies(1): >>41904660 #
21. ffujdefvjg ◴[] No.41900535{6}[source]
I agree with this approach generally, but I needed to use some lua plugins to do something specific fairly quickly, and didn't feel like messing around with it for weeks on end to get it just right.
22. _huayra_ ◴[] No.41900878{4}[source]
The best I have ever seen were obscure languages with very strong type safety. Some researcher at a sibling org to my own told me to try it with the Lean language, and it basically gave flawless suggestions.

I'm guessing this is because the only training material was blogs from uber-nerdy CS researchers on a language where "mistakes" are basically impossible to write, and not a bunch of people flailing on forums asking about hello world-ish stuff and segfaulting examples.

23. ◴[] No.41900923{5}[source]
24. tightbookkeeper ◴[] No.41900926{5}[source]
I’m not even sure they can make a website that takes text input, feeds it to an executable, and dumps the output.
25. fragmede ◴[] No.41901107{4}[source]
My data science friend tells me it's really good at writing bad pandas code because it's seen so much bad pandas code.

At the end of the day, it depends where you are in the hierarchy. Having it write code for me on a hobby project in react that's bad but works is one thing. I'm having a lot of fun with that. Having it write bad code for me professionally is another thing though. Either way, there's no going back to before ChatGPT, just like there's no going back to before Stack Overflow or Google. Or the Internet.

26. aurareturn ◴[] No.41901274{6}[source]
The LLM would become the OS.
replies(2): >>41901726 #>>41902566 #
27. sofixa ◴[] No.41901466[source]
> Glean.com does it for the enterprise I work at: It consumes all of our knowledge sources including Slack, Google docs, wiki, source code and provides answers to complex specific questions in a way that’s downright magical

There are a few other companies in this space (and it's not something that complex to DIY either); the issue is data quality. If your Google Docs and wikis contain obsolete information (because nobody updated them), it's just going to be shit in, shit out. Curating the input data is the challenging part.

28. shwaj ◴[] No.41901480{6}[source]
When I’ve used AI for writing shell scripts, it used a lot of syntax that I couldn’t understand. So I took the time to ask it to walk me through the parts I didn’t understand. This took longer than blindly pasting what it generated, but still less time than it would have taken to learn to write my own script via search. With search, a lot of time is spent guessing the right search term. With chat, assuming it generated a reasonable answer (I know: a big assumption!), my follow-up questions can directly reference aspects of the generated code.
replies(1): >>41902311 #
29. marcus_holmes ◴[] No.41901726{7}[source]
An LLM cannot "become" an OS. It can have an OS added to it, for sure, but that's a different thing. LLMs run on top of a software stack that runs on top of an OS. Incorporating that whole stack into a single binary does not mean it "becomes" an OS.

And the point stands: you would not write a new OS, even to incorporate it into your LLM. You'd clone a BSD (or similar) and start there.

replies(1): >>41902470 #
30. vrighter ◴[] No.41902275{5}[source]
even then, the llm cannot possibly be a standalone os. For one thing, it cannot execute loops. So even something as simple as enumerating hardware at startup is impossible.
31. vrighter ◴[] No.41902311{7}[source]
Having something explained to me has never helped me retain the information. That only happens if I spend the time actually figuring things out myself.
32. aurareturn ◴[] No.41902470{8}[source]
I don't think you're getting the main point. The only application that this physical device would run is ChatGPT (or some successor). You won't be able to install other apps on it like a normal OS. Everything you do is inside this LLM.

Underneath, it can be Linux, BSD, Unix, or nothing at all, whatever. It doesn't matter. That's not important.

OS was just a convenient phrase to describe this idea.

replies(2): >>41903237 #>>41910102 #
33. glimshe ◴[] No.41902566{7}[source]
The LLM can't abstract PCI, USB, SATA etc from itself.
replies(1): >>41904964 #
34. matthewdgreen ◴[] No.41902626{3}[source]
I cannot tell you how much this echoes what people were saying during the dot com days :) Of course back then it was browsers and not LLMs. Looking back, people were both correct about this, yet we’re still having the same conversation about replacing the OS cartel.
35. guitarlimeo ◴[] No.41903237{9}[source]
I got your main point from the first message, but still don't like redefining terminology like OS to mean what you did.
replies(2): >>41904202 #>>41904933 #
36. simonh ◴[] No.41903291{5}[source]
This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.

I don't think AI is ultimately even an application, it's a feature we will use in applications.

replies(1): >>41904880 #
37. whywhywhywhy ◴[] No.41903781{6}[source]
Would be extremely surprising if it were anything other than an Android fork. The differentiators are gonna be the LLM, always-on listening, and the physical interface to it.

You're just burning money bothering to rewrite the rest of the stack when off the shelf will save you years.

38. aurareturn ◴[] No.41904202{10}[source]
Think of iOS and everything that it does such as downloading apps, opening apps, etc. Replace all of that with ChatGPT.

No need to get to the technicals such as whether it's UNIX or Linux talking to the hardware.

Just from a pure user experience standpoint, OpenAI would become iOS.

39. dash2 ◴[] No.41904320{3}[source]
Good comment. From Apple's point of view, AI could be a disruptive innovation: they've spent billions making extremely user-friendly interfaces, but that could become irrelevant if I can just ask my device questions.

But I think there will be a long period when people want both the traditional UI with buttons and sliders, and the AI that can do what you ask. (Analogy with phone keyboards where you can either speech-to-text, or slide to type, or type individual letters, or mix all three.)

40. ryandrake ◴[] No.41904660{6}[source]
Yes, I think that's it. There is a lot of horrible C++ code out there, especially on StackOverflow where "this compiled for me" sometimes ends up being the accepted answer. There are also a lot of ways to use C++ poorly/wrong without even knowing it.
41. gpderetta ◴[] No.41904880{6}[source]
> This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.

well, that's not a false statement. As much as I might dislike it, the rise of the web and web applications has made the OS itself irrelevant for a significant number of tasks.

42. ogogmad ◴[] No.41904933{10}[source]
I don't think "OS" means anything definitive. It's not 1960. Nowadays, it's a thousand separate things stuck together.
43. ogogmad ◴[] No.41904964{8}[source]
What counts as an OS is subjective. The concept has always been a growing snowball.
44. SoftTalker ◴[] No.41905687[source]
Companies are going to have to do a lot less gatekeeping and siloing of data for this to really work. The companies that are totally transparent even internally are few and far between in my experience.
45. rendang ◴[] No.41906216[source]
Interesting. I still find it to be a net positive, but it is amusing when I ask it about a project and the source cited is a Slack thread I wrote 2 days prior
46. marcus_holmes ◴[] No.41910102{9}[source]
I think what you mean is "Desktop" not "OS". You're just replacing all the windows, menus and buttons with a chat interface.