Most active commenters
  • johnnyanmac(26)
  • aurareturn(19)
  • tim333(9)
  • _heimdall(9)
  • n_ary(6)
  • Melting_Harps(5)
  • rsynnott(5)
  • HarHarVeryFunny(5)
  • FeepingCreature(5)

The AI Investment Boom

(www.apricitas.io)
270 points by m-hodges | 386 comments
1. apwell23 ◴[] No.41896263[source]
> AI products are used ubiquitously to generate code, text, and images, analyze data, automate tasks, enhance online platforms, and much, much, much more—with usage expected only to increase going forward.

Why does every hype article start with this? Personally, my Copilot usage has gone down while coding. I tried and tried, but it always gets lost and starts spitting out subtle bugs that take me more time to debug than if I had written the code myself.

I always have this feeling of 'this might fail in production in unknown ways' because I might have missed checking the code thoroughly. I know I am not the only one; my coworkers and friends have expressed similar feelings.

I even tried the new 'chain of thought' model, which for some reason seems to be even worse.

replies(10): >>41896295 #>>41896310 #>>41896325 #>>41896327 #>>41896363 #>>41896380 #>>41896400 #>>41896497 #>>41896670 #>>41898703 #
2. uxhacker ◴[] No.41896287[source]
It feels like there's a big gap in this discussion. The focus is almost entirely on GPU and hardware investment, which is undeniably driving a lot of the current AI boom. But what's missing is any mention of the software side and the significant VC investment going into AI-driven platforms, tools, and applications. This story could be more accurately titled 'The GPU Investment Boom' given how heavily it leans into the hardware conversation. Software investment deserves equal attention.
replies(5): >>41896362 #>>41896430 #>>41896473 #>>41896523 #>>41904605 #
3. bongodongobob ◴[] No.41896295[source]
Well, I have the exact opposite experience. I don't know why people struggle to get good results with LLMs.
replies(4): >>41896332 #>>41896335 #>>41896492 #>>41897988 #
4. sksxihve ◴[] No.41896310[source]
Because they all use AI to write the articles.
replies(3): >>41898037 #>>41898327 #>>41904296 #
5. bugbuddy ◴[] No.41896325[source]
This just reminded me that I had forgotten I have a Copilot subscription. It has not made any useful code suggestions in months, to the point of fading from my memory. I just logged in to cancel it. Now I need to check my other subscriptions for ones I can cancel or reduce to a lower tier.
replies(1): >>41900775 #
6. falcor84 ◴[] No.41896327[source]
From my experience, it is getting better over time, and I believe there's still a lot of relatively low-hanging fruit, particularly in terms of integrating the LLM with the language server protocol and other tooling. Having said that, at this point in time it's just not good enough for independent work, so I would suggest using it only as you would pair-program with a mid-level human developer who doesn't have much context on the project and has a short attention span. In particular, I generally only have the AI help me with one function/refactoring at a time, and in a way that is easy for me to test as we go, and I am finding immense value.
replies(3): >>41896513 #>>41898284 #>>41900997 #
7. thuuuomas ◴[] No.41896332{3}[source]
Would you feel comfortable pushing generated code to production unaudited?
replies(2): >>41896359 #>>41896360 #
8. hnthrowaway6543 ◴[] No.41896335{3}[source]
LLMs are great for simple, common tasks, e.g. CRUD apps, RESTful web endpoints, unit tests, for which there are enormous numbers of examples and not much unique complexity. There are a lot of developers whose day mostly involves these repetitive, simple tasks. There are also a lot of developers who work on things that are a lot more niche and complicated, where LLMs don't provide much help.
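
(For a concrete sense of the "simple, common tasks" meant here: a boilerplate JSON endpoint of roughly the following shape is the kind of thing LLMs reproduce reliably, since near-identical examples saturate their training data. A minimal, hypothetical sketch in Go using only the standard library:)

  package main

  import (
      "encoding/json"
      "log"
      "net/http"
  )

  type User struct {
      ID   int    `json:"id"`
      Name string `json:"name"`
  }

  // listUsers is deliberately unremarkable: no unusual logic, just a pattern
  // that appears in countless tutorials and codebases.
  func listUsers(w http.ResponseWriter, r *http.Request) {
      users := []User{{ID: 1, Name: "Ada"}, {ID: 2, Name: "Grace"}}
      w.Header().Set("Content-Type", "application/json")
      json.NewEncoder(w).Encode(users)
  }

  func main() {
      http.HandleFunc("/users", listUsers)
      log.Fatal(http.ListenAndServe(":8080", nil))
  }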
replies(3): >>41896464 #>>41896611 #>>41896681 #
9. hn_throwaway_99 ◴[] No.41896346[source]
Reading this makes me willing to bet that this capital intensive investment boom will be similar to other enormous capital investment booms in US history, such as the laying of the railroads in the 1800s, the proliferation of car companies in the early 1900s, and the telecom fiber boom in the late 1900s. In all of these cases there was an enormous infrastructure (over) build out, followed by a crash where nearly all the companies in the industry ended up in bankruptcy, but then that original infrastructure build out had huge benefits for the economy and society as that infrastructure was "soaked up" in the subsequent years. E.g. think of all the telecom investment and subsequent bankruptcies in the late 90s/early 00s, but then all that dark fiber that was laid was eventually lit up and allowed for the explosion of high quality multimedia growth (e.g. Netflix and the like).

I think that will happen here. I think your average investor who's currently paying for all these advanced chips, data centers and energy supplies will walk away sorely disappointed, but this investment will yield huge dividends down the road. Heck, I think the energy investment alone will end up accelerating the switch away from fossil fuels, despite AI often being portrayed as a giant climate warming energy hog (which I'm not really disputing, but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources).

replies(21): >>41896376 #>>41896426 #>>41896447 #>>41896726 #>>41898086 #>>41898206 #>>41898291 #>>41898436 #>>41898540 #>>41899659 #>>41900309 #>>41900633 #>>41903200 #>>41903363 #>>41903416 #>>41903838 #>>41903917 #>>41904566 #>>41905630 #>>41905809 #>>41906189 #
10. bongodongobob ◴[] No.41896359{4}[source]
Would you feel comfortable pushing human code to production unaudited?
replies(3): >>41896393 #>>41896438 #>>41904561 #
11. charrondev ◴[] No.41896360{4}[source]
For my part, I have a company subscription for Copilot and I just use the line-based autocomplete. It's mildly better than the built-in autocomplete. I never have it do more than that, though, and probably wouldn't buy a license for myself.
12. pfisherman ◴[] No.41896362[source]
Also, they talk a lot about data centers and cloud compute, but do not mention chips for on-device inference.

Given where mobile sits in the hierarchy of interfaces, that is where I would be placing my bets if I were a VC.

replies(1): >>41896469 #
13. drowsspa ◴[] No.41896363[source]
Yeah, it's actually frustrating that even when writing Go code, which is statically typed, it keeps messing up the argument order. That would seem to me a pretty easy thing to generate.

Although it's much better when writing standard REST and gRPC APIs.
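
(A hedged aside on why static typing doesn't always catch this: when adjacent parameters share a type, a call with swapped arguments still compiles and only fails at runtime. A tiny Go illustration with hypothetical names:)

  package main

  import "fmt"

  // Two adjacent parameters of the same type: the compiler cannot tell
  // which order the caller intended.
  func greet(firstName, lastName string) string {
      return fmt.Sprintf("Hello, %s %s", firstName, lastName)
  }

  func main() {
      fmt.Println(greet("Ada", "Lovelace")) // correct order
      fmt.Println(greet("Lovelace", "Ada")) // swapped: type-checks, silently wrong
  }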

14. candiddevmike ◴[] No.41896376[source]
What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.
replies(8): >>41896440 #>>41896450 #>>41896455 #>>41896456 #>>41896471 #>>41896532 #>>41896645 #>>41899790 #
15. righthand ◴[] No.41896380[source]
I see the same results with my TabNine + template generator + language server setup as I do with things like Copilot. I get TabNine issues when the code base isn't huge. I also think tossing away language servers and template generators for just an LLM will lead to chasing the "proper predictive path". Most of the time the LLM will spit out the create-express/react template for you; when you ask it to customize, it will guess using the most common patterns. Do you need something to guess for you?

It’s also getting worse because people are poisoning the well.

16. candiddevmike ◴[] No.41896393{5}[source]
Only on Fridays before a three day weekend.
17. badgersnake ◴[] No.41896400[source]
It’s kinda true though. They are increasingly used for those things. Sure, the results are terrible and doing it without AI almost always yields better results but that doesn’t seem to stop people.

Look at this nonsense for example: https://intouch.family/en

replies(3): >>41896543 #>>41898392 #>>41902344 #
18. shortrounddev2 ◴[] No.41896414[source]
I can't wait for the AI bubble to be over so HN can talk about something else
replies(5): >>41896427 #>>41896448 #>>41897941 #>>41898094 #>>41899595 #
19. bwanab ◴[] No.41896426[source]
While I agree with your essential conclusions, I don't think the automobile companies really fit. Many of the early 1900s companies (e.g. Ford, GM, Mercedes, even Chrysler) are still among the largest auto companies in the world.
replies(5): >>41896441 #>>41896540 #>>41896563 #>>41896580 #>>41897155 #
20. throwaway314155 ◴[] No.41896427[source]
I will take AI bubble over crypto bubble any day of the week.
replies(3): >>41896539 #>>41898343 #>>41901022 #
21. ◴[] No.41896430[source]
22. SubiculumCode ◴[] No.41896432[source]
Question: It seems like the requirement to become an investor in a startup is a multimillion-dollar pool of money, which is not what a salaried professional has at their disposal. Yet a lot of the 100x opportunities come before public offerings, making a whole class of investment unavailable to modest but knowledgeable professionals. I know that sometimes, through personal connections, pools of investment money are built across individuals, but as something available to the general public it seems like, no, it is not.
23. dijksterhuis ◴[] No.41896438{5}[source]
depends on the human.

but i would never push llm generated code. never.

-

edit to add some substance:

if it’s someone who

* does a lot of manual local testing

* adds good unit / integration tests

* writes clear and well documented PRs

* knows the code style, and when to break it

* tests themselves in a staging environment, independent of any QA team or reviews

* monitors the changes after they’ve gone out

* has repeatedly found things in their own PRs and asked to hold off release to fix them

* is reviewing other people’s PRs and spotting things before they go out

yea, sure, i’ll release the changes. they’re doing the auditing work for me.

they clearly care about the software. and i’ve seen enough to trust them.

and if they got it wrong, well, shit, they did everything good enough. i’m sure they’ll be on the ball when it comes to rolling it back and/or fixing it.

an llm does not do those things. an llm *does not care about your software* and never will.

i’ll take people who give a shit any day of the week.

replies(1): >>41896687 #
24. wslh ◴[] No.41896440{3}[source]
I don't think the parent was specifically referring to hardware alone. The 'rails' in this context are also the AI algorithms and the underlying software. New research and development could lead to breakthroughs that allow us to use significantly less hardware than we currently do. Just as the dot-com crash wasn’t solely about the physical infrastructure but also about protocols like HTTP, I believe the AI boom will involve advancements beyond just hardware. There may be short-term excess, but the long-term benefits, particularly on the software side, could be immense.
replies(1): >>41903794 #
25. throwaway20222 ◴[] No.41896441{3}[source]
There were hundreds of failed automotive companies and parts suppliers though. I think the argument is that many will die, some will survive and take all (most)
replies(2): >>41896761 #>>41905766 #
26. aurareturn ◴[] No.41896447[source]
I'm sure you are right. At some point, the bubble will crash.

The question that remains is when the bubble will crash. We could be in the 1995 equivalent of the dotcom boom and not 1999. If so, we have 4 more years of high growth, and even after the crash, the market will still be much bigger in 2029 than in 2024. Cisco was still 4x bigger in 2001 than in 1995.

One thing that is slightly different from past bubbles is that the more compute you have, the smarter and more capable the AI.

One gauge I use to determine whether we are still at the beginning of the boom is this: Does Slack sell an LLM chatbot solution that is able to give me reliable answers about business/technical decisions made over the last 2 years in chat? We don't have this yet - most likely because it's still too expensive to do this much inference with such a high context window. We still need a lot more compute and better models.

Because of the above, I'm in the camp that believes we are actually closer to the beginning of the bubble than to the end.

Another thing I would watch closely, to see when the bubble might pop, is whether LLM scaling laws are quickly breaking down such that more compute no longer yields more intelligence in an economical way. If so, I think the bubble would pop. All eyes are on GPT5-class models for signs.

replies(8): >>41896552 #>>41896790 #>>41898712 #>>41899018 #>>41899201 #>>41903550 #>>41904788 #>>41905320 #
27. bugbuddy ◴[] No.41896448[source]
I think it will burst when the Fed realizes inflation is not done and starts raising again in 6 months. They can only feed the bubble for so long before the common people have had enough of rising prices.
replies(1): >>41896516 #
28. iTokio ◴[] No.41896450{3}[source]
Well, at least they are paving the way to more efficient hardware: GPUs are way, way more energy-efficient than CPUs, and parallel architectures are the only remaining way to scale compute.

But yes, a lot of energy is wasted in the growth phase.

replies(2): >>41896551 #>>41897866 #
29. goda90 ◴[] No.41896455{3}[source]
Even if the hardware quickly becomes outdated, I'm not sure it'll become worthless so quickly. And there's also the infrastructure of the data center and new electricity generation to power them. Another thing that might survive a crash and carry on to help the future is all the code used to support valuable use cases.
30. almost_usual ◴[] No.41896456{3}[source]
In the case of dark fiber the hardware was fine; wavelength-division multiplexing was created, which increased capacity by 100x in some cases, crashing demand for new fiber.

I think OP is suggesting AI algorithms and training methods will improve, resulting in enormous performance gains with existing hardware, causing a similar surplus of infrastructure and crash in demand.

replies(1): >>41896626 #
31. 101008 ◴[] No.41896464{4}[source]
Yeah, exactly this. If I ask Cursor to write the serializer for a new Django model, it does it (although sometimes it invents fields that do not exist). It saves me 2 minutes.

When I ask it to write a function that should do something much more complex, it usually does something so bad that it takes me more time, because it confuses me and now I have to go back to my original reasoning (after trying to understand what it did).

What I have found useful is to ask it to explain what a function does in a new codebase I am exploring, although I have to be very careful because a lot of the time it invents or skips steps that are crucial.

replies(1): >>41896590 #
32. ◴[] No.41896469{3}[source]
33. CamperBob2 ◴[] No.41896471{3}[source]
Do you expect better hardware to suddenly start appearing on the market, fully-formed from the brow of Zeus?
34. _delirium ◴[] No.41896473[source]
Is there a good estimate of the split? My impression from AI startups whose operations I know something about is that a majority of their VC raise is currently going back into paying for hardware directly or indirectly, even though they aren’t hardware startups per se, but I don’t have any solid numbers.
replies(1): >>41898957 #
35. amonith ◴[] No.41896492{3}[source]
Seriously though, what are you doing? Every single example throughout the internet that tries to show how good AI is at programming uses such mindbogglingly simplistic examples that it's getting annoying. It sure is a great learning tool when you're trying to do something experimental in a new stack or a completely new project, I'll give you that, but once you reach the skill level where someone would hire you to be an X developer (which most developers disagreeing with you are: mid+ developers of some stack X), the thing becomes a barely useful autocomplete. Maybe that's the problem? It's just not a tool for professional developers?
replies(3): >>41896542 #>>41897047 #>>41898131 #
36. ◴[] No.41896497[source]
37. ◴[] No.41896513{3}[source]
38. almost_usual ◴[] No.41896516{3}[source]
The Fed raising rates will increase inflation at this point (and further increase fiscal deficits), nothing stops that train.

Arguably if the investment here works out we’ll see deflation through extreme technical advancements.

replies(1): >>41896646 #
39. aurareturn ◴[] No.41896523[source]
I think GPUs and datacenters are to AI what fiber was to the dotcom boom.

A lot of LLM based software is uneconomical because we don't have enough compute and electricity for what they're trying to do.

replies(2): >>41896674 #>>41900192 #
40. bee_rider ◴[] No.41896532{3}[source]
Maybe the bust will be so rough that TSMC will go out of business, and then these graphics cards will not go obsolete for quite a while.

Like, Intel and Samsung might make a handful of better chips or whatever, but neither of their business models really involves being TSMC. So if the bubble pop took out TSMC, there wouldn't be a new TSMC for a while.

41. Melting_Harps ◴[] No.41896539{3}[source]
> I will take AI bubble over crypto bubble any day of the week.

I've been in the former since '21 and have seen every single cycle since 2011 in the latter, and I can assure you there is more dumb money in the former than in the latter (just by scale alone). At least in the latter, whether it was ICOs or NFTs or whatever, mal-investment was promptly punished (rugpulls/exit scams), while companies like Intel get to stay in zombie mode because of the corpo-welfare that the US doles out while shaming everyone else to be prudent with their investments - these corps and banks spend like drunken sailors and try to strangle the former out of existence (rightly so in most cases, as most crypto is a total scam).

With that said, what you will see emerge are some incredibly established players in both fields that will have the staying power to change how the Industry is shaped around them: Nvidia and Bitcoin are comparable to one another in that respect.

Both have/had crazy volatility, but the staying power, and the fact that they remain firmly at the center of both industries, is rather telling that you simply don't see what these technologies offer because of the hype and boom-and-bust cycles.

As a person who directly benefits from this: I can assure you most of these VCs are exit liquidity, just as most foolish people were for the alt scams of yore, except the US economy (likely all of the Western world at this point) isn't entirely reliant on the promise of vapourware with 'crypto' in any capacity, whereas the same cannot be said about the theatrics of Jensen's Nvidia.

Source: I build data center infrastructure for these mega corps doing 'AI' and I'm doing an MSc in CS (Big Data) and been a Bitcoiner since Satoshi was still on BTF.

replies(1): >>41896579 #
42. cloud_hacker ◴[] No.41896540{3}[source]
> While I agree with your essential conclusions, I don't think the automobile companies really fit. Many of the early 1900s companies (e.g. Ford, GM, Mercedes, even Chrysler) are still among the largest auto companies in the world.

American automotive companies filed for bankruptcy multiple times.

The American government had to step in to back them up and bail them out.

replies(2): >>41903883 #>>41905774 #
43. ◴[] No.41896542{4}[source]
44. anon7725 ◴[] No.41896543{3}[source]
That’s one of the saddest bits of AI enshittification yet.
replies(2): >>41902433 #>>41904441 #
45. dartos ◴[] No.41896551{4}[source]
GPUs are different than CPUs.

They’re way more efficient at matmuls, but start throwing branching logic at them and they slow down a lot.

Literally a fraction of their cores will no-op while others are executing a branch, since the lanes within a warp run in lockstep.

46. vladgur ◴[] No.41896552{3}[source]
Re: Slack chat:

Glean.com does it for the enterprise I work at: It consumes all of our knowledge sources including Slack, Google docs, wiki, source code and provides answers to complex specific questions in a way that’s downright magical.

I was converted into a believer when I described an issue to it, gave it pointers to a source file in our online git repo, and it pointed me to another repository, which my team did not own, that controlled DNS configs we were not aware of. Those configs were the reason our code did not behave as we expected.

replies(4): >>41896575 #>>41896658 #>>41899040 #>>41901466 #
47. nemo44x ◴[] No.41896563{3}[source]
That phase is called consolidation. It’s part of the cycle. The speculative over leveraged and mismanaged companies get merged into the winners or disappear if they have nothing of value.
48. jimmySixDOF ◴[] No.41896566[source]
I remember listening to Dr Robert Martin, who was leading Bell Labs in the late 90s, and he talked about how bandwidth capacity was pushing toward infinity while cost per bit was pushing toward zero - and we all know how that ended for the optical capacity builders of that time once the bubble popped. Is there a case for intelligence being inexhaustible to demand? Is there a case for, as Sama says, the cost of intelligence as an input to a system converging with the price of the electricity needed to power the GPU behind it in the data center? Yes and yes. Still, the same could be said for a bit of bandwidth.
replies(2): >>41898426 #>>41899019 #
49. aurareturn ◴[] No.41896575{4}[source]
Thanks. I didn't know that existed. But does it scale? Would it still work for large companies with many millions of Slack messages?

I suppose one reason Slack doesn't have a solution yet is that they're having a hard time getting it to work for large companies.

replies(2): >>41896647 #>>41896714 #
50. ◴[] No.41896580{3}[source]
51. throwaway314155 ◴[] No.41896579{4}[source]
You might be right, but the crypto people were/are basically religious in their responses. I see a few loons for LLM's talking about how AGI is near, and of course there's the EA/LessWrong people talking about doomsday. But none of them were as staunchly dug in _and_ misinformed as the crypto folks.

edit: if it isn't clear I'm a staunch opponent of cryptocurrency in any form.

replies(3): >>41896615 #>>41896638 #>>41896728 #
52. dartos ◴[] No.41896590{5}[source]
See, I recently picked up the Ash framework for elixir and it does all that too, but in a declarative, precise language which codegens the implementation in a deterministic way.

It just does the job that cursor does there, but better.

Maybe us programmers should focus on making higher order programming tools instead of black box text generators for existing tools.

53. danenania ◴[] No.41896611{4}[source]
In my experience this underrates them. They can do pretty complex tasks that go well beyond your examples if prompted correctly.

The real limiting factor is not so much task complexity as the level of abstraction and indirection. If you have code that requires following a long chain of references to understand, LLMs will struggle to work with it.

For similar reasons, they also struggle with:

- generic types

- inheritance hierarchies

- long function call chains

- dependency injection

- deeply nested structures

They're also bad at counting, which can be an issue when dealing with concurrency - e.g. you started 5 operations concurrently at different points in your program and now need to block while waiting for 5 corresponding success or failure messages. Unless your code explicitly uses the number 5 somewhere, an LLM is often going to fail at counting the operations.
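
(A minimal Go sketch of the pattern being described, with hypothetical names. Here the count of in-flight operations exists as an explicit constant; when it is only implicit in the surrounding program, this is exactly the bookkeeping an LLM tends to get wrong.)

  package main

  import (
      "fmt"
      "time"
  )

  // startOperation is a stand-in for real concurrent work; it reports
  // success (nil) or an error on the shared results channel.
  func startOperation(id int, results chan<- error) {
      go func() {
          time.Sleep(10 * time.Millisecond)
          results <- nil
      }()
  }

  func main() {
      const numOps = 5 // the count the code (or the LLM) has to track
      results := make(chan error, numOps)

      for i := 0; i < numOps; i++ {
          startOperation(i, results)
      }

      // Block until exactly numOps results have arrived - no more, no fewer.
      for i := 0; i < numOps; i++ {
          if err := <-results; err != nil {
              fmt.Println("operation failed:", err)
          }
      }
      fmt.Println("all operations finished")
  }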

All in all, the main question I think in determining how well an LLM can do a task is whether the limiting factor for your task is knowledge or abstraction. If it's knowledge (the intricacies of some arcane OS API, for example), an LLM can do very well with good prompting even on quite large and complex tasks. If it's abstraction, it's likely to fail in all kinds of seemingly obvious ways.

replies(1): >>41899607 #
54. ◴[] No.41896615{5}[source]
55. llamaimperative ◴[] No.41896626{4}[source]
How much of current venture spending is going into reusable R&D that can be moved forward in time the way that physical infrastructure in their examples were able to be used in the future?
replies(1): >>41900123 #
56. Melting_Harps ◴[] No.41896638{5}[source]
> edit: if it isn't clear I'm a staunch opponent of cryptocurrency in any form.

It's very clear, but your exposure to zealots, on both sides, shouldn't deter you from being objective and seeing what these technologies actually offer. Hence why I wrote that part in the second-to-last paragraph.

HN has such misinformed vitriol for any technology it didn't ordain itself; you seem to be of that cohort. What's odd is that the very same people who gave you VC/SV-funded startup land are all major backers of this technology.

I can just summarize this in one phrase: you seem to collectively not know what you don't know, and you make leaps in logic and misinformed judgments from that POV.

replies(1): >>41896730 #
57. aurareturn ◴[] No.41896645{3}[source]
>What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.

The energy build-out and data centers are not wasted. You can swap out A100 GPUs for BH200 GPUs in the same datacenter. A100s will be 5 years old when Blackwell is out - which is just about right for how long datacenter chips are expected to last.

I do, however, think that the industry will move to newer hardware faster to try to squeeze out as much efficiency as possible due to the energy bottleneck. Therefore, I expect TSMC's N2 nodes to see huge demand. In fact, TSMC themselves have said designs for N2 far outnumber N3 at the same stage of the node. This is most likely because AI companies want to increase efficiency given the lack of electricity.

58. bugbuddy ◴[] No.41896646{4}[source]
No, raising rates would bring the economy to a slower pace and reduce private-sector consumer demand. Private-sector investment can continue to increase, but at some point that too will hit a brick wall. Public-sector spending depends on which type of big-ego people get to make decisions. Given the extreme excesses so far, it can go either way. The now-extinct fiscal conservatives might just make a return finally, but don't hold your breath.
replies(2): >>41896717 #>>41904606 #
59. ◴[] No.41896647{5}[source]
60. _huayra_ ◴[] No.41896658{4}[source]
This is the main "killer feature" I've personally experienced from GPT things: a much better contextual "search engine-ish" tool for combing through and correlating different internal data sources (slack, wiki, jira, github branches, etc).

AI code assistants have been a net neutral for me (they get enough idioms in C++ slightly incorrect that I have to spend a lot of time just reading the generated code thoroughly), but being able to say "tell me what the timeline for feature X is" and have it comb through a bunch of internal docs / tickets / git commit messages, etc, and give me a coherent answer with links is amazing.

replies(3): >>41896682 #>>41898324 #>>41905687 #
61. osigurdson ◴[] No.41896670[source]
My feeling is (current) AI is more of a teacher than an implementor. It really does help when learning about something new or to give you ideas about directions to take. The actual code however still needs to be written by humans for the most part it seems.

AI is a great tool and does speed things up massively; it just doesn't align with the magical thought that we provide the ideas and AI does all of the grunt work. In general, it is always better to form mental models about things based on actual evidence as opposed to fantasy (and there is a lot of fantasy involved at the moment). This doesn't mean being pessimistic about potential future advancements, however. It is just very hard to predict what the shape of those improvements will be.

62. bee_rider ◴[] No.41896674{3}[source]
The actual physical fiber was useful after the companies popped though.

GPUs are different; unless things go very poorly, these GPUs should be pretty much obsolete after 10 years.

The ecosystem for GPGPU software and the ability to design and manufacture new GPUs might be like fiber. But that is different because it doesn’t become a useful thing at rest, it only works while Nvidia (or some successor) is still running.

I do think that ecosystem will stick around. Whatever the next thing after AI is, I bet Nvidia has a good enough stack at this point to pivot to it. They are the vendor for these high-throughput devices: CPU vendors will never keep up with their ability to just go wider, and coders are good enough nowadays to not need the crutch of lower latency that CPUs provide (well actually we just call frameworks written by cleverer people, but borrowing smarts is a form of cleverness).

But we do need somebody to keep releasing new versions of CUDA.

replies(2): >>41896722 #>>41903351 #
63. apwell23 ◴[] No.41896681{4}[source]
> LLMs are great for simple, common tasks, i.e. CRUD apps, RESTful web endpoints

I gave it a YAML file and asked it to generate a JSON call to a REST API. It missed a bunch of keys and made up a random new key. I threw the whole thing out and did it with awk/sed.

64. aurareturn ◴[] No.41896682{5}[source]
This is partly why I believe OS makers, Apple, Microsoft, Google, have a huge advantage in the future when it comes to LLMs.

They control the OS so they can combine and feed all your digital information to an LLM in a seamless way. However, in the very long term, I think their advantage will go away because at some point, LLMs could get so good that you don't need an OS like iOS anymore. An LLM could simply become standalone - and function without a traditional OS.

Therefore, I think the advantage for iOS, Android, and Windows will increase in the next few years, but become less pronounced after that.

replies(3): >>41898863 #>>41902626 #>>41904320 #
65. amonith ◴[] No.41896687{6}[source]
I'd say it depends more on "the production" than the human. There are legal means to hold all people accountable for their actions ("gross negligence" and all that). So you can basically always trust that people will fix what they messed up, given the possibility. So if you can afford for production to be broken (e.g. the downtime will just annoy some people), you might as well allow your team to deploy straight to prod without audits. It's not that rare actually.
66. hn_throwaway_99 ◴[] No.41896714{5}[source]
Yeah, Glean does this and there are a bunch of other competitors that do it as well.

I think you may be confused about the length of the context window. These tools don't pull all of your Slack history into the context window. They use a RAG approach to index all of your content into a vector DB, then when you make a query only the relevant document snippets are pulled into the context window. It's similar for example to how Cursor implements repository-wide AI queries.
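
(A rough sketch of the retrieval step described above. The embedding function here is a toy word-hashing stand-in, not any particular vendor's API; a real system would call an embedding model and store vectors in a vector database. The point is only that the query is matched against pre-indexed snippets and just the top-scoring ones are placed in the prompt.)

  package main

  import (
      "fmt"
      "math"
      "sort"
      "strings"
  )

  // embed is a toy stand-in for an embedding model: it hashes words into a
  // small fixed-size vector so the example is self-contained.
  func embed(text string) []float64 {
      v := make([]float64, 8)
      for _, w := range strings.Fields(strings.ToLower(text)) {
          h := 0
          for _, c := range w {
              h = (h*31 + int(c)) % len(v)
          }
          v[h]++
      }
      return v
  }

  // cosine returns the cosine similarity of two equal-length vectors.
  func cosine(a, b []float64) float64 {
      var dot, na, nb float64
      for i := range a {
          dot += a[i] * b[i]
          na += a[i] * a[i]
          nb += b[i] * b[i]
      }
      if na == 0 || nb == 0 {
          return 0
      }
      return dot / (math.Sqrt(na) * math.Sqrt(nb))
  }

  func main() {
      docs := []string{
          "DNS configs for service X live in the infra-dns repository",
          "Quarterly planning notes for the growth team",
          "Incident review: service X outage caused by stale DNS records",
      }
      query := "why does service X resolve to the wrong host"

      // Score every indexed snippet against the query embedding.
      q := embed(query)
      type scored struct {
          text  string
          score float64
      }
      ranked := make([]scored, 0, len(docs))
      for _, d := range docs {
          ranked = append(ranked, scored{d, cosine(q, embed(d))})
      }
      sort.Slice(ranked, func(i, j int) bool { return ranked[i].score > ranked[j].score })

      // Only the top-scoring snippets go into the LLM's context window.
      prompt := "Answer using these snippets:\n"
      for _, r := range ranked[:2] {
          prompt += "- " + r.text + "\n"
      }
      prompt += "Question: " + query
      fmt.Println(prompt)
  }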

replies(1): >>41896734 #
67. almost_usual ◴[] No.41896717{5}[source]
Raising rates works in the beginning with high fiscal deficit driven inflation by slowing demand and bank lending.

But raising interest rates and keeping them high in an environment where runaway government deficits and high government debts are causing inflation runs the risk of exacerbating inflation.

You have high interest rates on a large amount of government debt which continues to push _more_ money into the economy.

The Fed doesn’t have any real options at this point but to lower rates.

replies(4): >>41897234 #>>41898872 #>>41899071 #>>41902847 #
68. aurareturn ◴[] No.41896722{4}[source]
But computer chips have always had limited usefulness because newer chips are simply faster and more efficient. The datacenter build outs and increase in electricity capacity will always be useful.
69. HarHarVeryFunny ◴[] No.41896726[source]
Similarly, I like to compare AI (more specifically LLM) "investment" to the cost of building the Channel Tunnel between the UK and France. The original investors lost their shirts, but once built it is profitable to operate.
replies(1): >>41898651 #
70. Melting_Harps ◴[] No.41896728{5}[source]
zifpanachr23 said:

> AI people sound more dug in to be honest from my perspective. But I guess that's cause the crypto stuff tends to be less overtly religious and more overtly batshit crazy politics and economics, which I'm much more used to dealing with haha. And mostly, everyone has figured out the scam by now on the crypto side.

>>The AI people freak me out cause they are all talking eschatology and shit as if they have stumbled upon the literal ark of the covenant like in raiders of the lost ark or something.

>>>It's a really great act to be honest. They've been clearly studying a lot of the more dishonest American religious culture of the last couple of decades.

I'm going to commit a HN faux pas to prove a point and show you why I think Bitcoin has a valid use case here alone: I decided to repost what he said because there are valid points here and are worth discussing.

Had I the inclination, I could hash this into the blockchain for all to see what was written by this poster, for the aforementioned reasons, for as long as the main chain continues to be maintained, protected and supported.

This has great utility, and if the only reason you cannot get past that is that "those crazies offend me and my disposition", and you stop there, then you fail to see why and what this technology can already do - create an actual immutable archive of all human history, if we desire it.

But to his point: yes, its roots in crypto-anarchism (which started in CA at the inception of the rise of modern SV, by the way) have many of you questioning the 'sanity' and 'motives' behind this technology, and you assume they are all the same; but rest assured there is a reason for the brain drain from all of tech/STEM/finance during my era and time in Bitcoin.

Most of them are now incredibly wealthier than they ever were working in academia or private industry - if you think money is a measure of one's success; I don't, but most of you do.

The AI people strike me as a range, from the introduction of corpos from banking and academia into Bitcoin (Gavin, Hearn) to total con men like Ver, and sprinkled in there are the cult members you mentioned who honestly think that their techno-utopian transhumanist dreams are being built one LLM update at a time. It's sad... it's the same thing, just different names/faces.

replies(1): >>41898404 #
71. throwaway314155 ◴[] No.41896730{6}[source]
I have little interest in the ycombinator legacy of hustle culture, growth hacking, get-millions-for-glorified-todo-app, etc. As far as I can tell it is effectively the underlying reason for why crypto and AI get hyped up to the point where we can't have reasonable discussions about them in forums.

I'm just here because it happens to be where like-minded people (_sometimes_) hang out.

replies(1): >>41897025 #
72. aurareturn ◴[] No.41896734{6}[source]
I'm aware that one can't feed millions of messages into an LLM all at once. The only way to do this now is to use a RAG approach. But the RAG approach has pros and cons and can miss crucial information. I think the context window still matters a lot. The bigger the window, the more information you can feed in, and the quality of the answer should increase.

The point I'm trying to make is that an increased context window will require more compute. Hence, we could still just be at the beginning of the compute/AI boom.

replies(1): >>41898924 #
73. aurareturn ◴[] No.41896761{4}[source]
But that happens in every bubble. Over investment, consolidation, huge winners in the end, and maybe eventually a single monopoly.
replies(1): >>41898544 #
74. HarHarVeryFunny ◴[] No.41896790{3}[source]
> the more compute you have, the smarter and more capable AI

Well, this is taken on faith by OpenAI/etc, but obviously the curve has to flatten at some point, and appears to already be doing so. OpenAI are now experimenting with scaling inference-time compute (GPT-O1), but have said that it takes exponential increases in compute to produce linear gains in performance, so it remains to be seen if customers find this a worthwhile value.
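
(Read literally, that scaling claim says gains grow roughly with the logarithm of compute - a hedged paraphrase of the reported statement, not OpenAI's published formula:)

  \text{gain} \approx k \cdot \log(\text{compute})

so holding gains linear requires compute to grow geometrically, which is why the question of whether customers find it worthwhile is the binding one.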

replies(1): >>41896900 #
75. aurareturn ◴[] No.41896900{4}[source]
GPT-o1 does demonstrate my point: the more compute you have, the smarter the AI.

If you run chain-of-thought on an 8B model, it becomes a lot smarter too.

GPT-o1 isn't GPT5, though. I think OpenAI will have a chain-of-thought model for GPT5-class models as well. They're separate from the normal models.

replies(1): >>41896980 #
76. HarHarVeryFunny ◴[] No.41896980{5}[source]
There is only so much that an approach like O1 can do, but anyways in terms of AI boom/bust the relevant question is whether this is a viable product. All sorts of consumer products could be improved by making them a lot more expensive, but there are cost/benefit limits to everything.

GPT-5 and Claude-4 will be interesting, assuming these are both pure transformer models (not CoT), as they will be a measure of how much benefit remains to be had from training-set scaling. I'd expect gains to be more against narrow benchmarks than in the overall feel of intelligence (LLM arena score?) one gets from the model.

replies(1): >>41899221 #
77. Melting_Harps ◴[] No.41897025{7}[source]
> I have little interest in the ycombinator legacy of hustle culture, growth hacking, get-millions-for-glorified-todo-app, etc. As far as I can tell it is effectively the underlying reason for why crypto and AI get hyped up to the point where we can't have reasonable discussions about them in forums.

Ohh, well... I'm guilty of drinking that kool-aid, unfortunately, and was a bootstrapping fintech founder with the battle scars to show for it - those scars have appeared in this discussion thus far, by the way.

But I get it, and I'm trying to be amicable about this as much as I can, especially because I know no one can deny us anymore on the BTC side at this point. And on the AI side... well, money-hype boom and AI-doomer pr0n cycles aside, we are actually building amazing amounts of compute that hopefully can yield amazing results - something like Starship-recovery-system levels of advancement in many other industries/sectors one day, probably very far off in the future, but that's the start, right?

Every forest starts with a sapling, kind of a thing, and these two technologies emerged from a time when I was in my formative development period, so I saw them as something more than just the marketing side - AI is marketing; ML is really just stats with code to back it up, after all. Bitcoin is just a FOSS network using token-based cryptographic key encryption.

Stop using throwaways if you want real conversations, it might help you have those conversations you're looking for. :)

78. Viliam1234 ◴[] No.41897047{4}[source]
I am happy with the LLMs, but I have only tried them on small projects done in my free time.

As a back-end developer I am not familiar with the latest trends in JavaScript and CSS, and frankly I do not want to spend my time studying them. An LLM can generate an interactive web game based on my description. I review the code; it is usually okay, and sometimes I suggest an improvement. I could have done all of that - but it would take me a week, and the LLM does it in seconds. So it is the difference between a hobby project done or not done.

I also tried an LLM at work, not to code, but to explain some complex topics that were new to me. Once it provided a great high-level description that was very useful. And once it provided a great explanation... which was a total lie, as I found out when I tried to do a hello-world example. I still think the 50% success rate is great, as long as you can quickly verify it.

In short, we need to know the strengths and the weaknesses, and use the LLMs accordingly. Too much trust will get you burned. But properly used, they can save a lot of time.

79. grecy ◴[] No.41897155{3}[source]
A couple of them went bankrupt and got bailouts.
80. bugbuddy ◴[] No.41897234{6}[source]
Public sector demand is a much smaller percentage of the overall economy. If raising rates did not slow the economy due to high government deficit spending, then we would certainly be living in a much different world of command economy with the government running everything. That’s not yet the world we live in.

Another possibility is that the rate is still not high enough and needs to be raised much, much higher to stop inflation. I think rates need to be in the 6 to 7 percent range to really stop inflation. This is just a pause. It will come back with a vengeance.

replies(1): >>41898884 #
81. aurareturn ◴[] No.41897866{4}[source]
>But yes, a lot of energy wasted in the growing phase.

Why exactly is energy wasted during this phase?

Are you expecting hardware to become obsolete much faster? But that only depends on TSMC's node cadence, which is still 2-3 years. Therefore, AI hardware will still be bound to TSMC's cadence.

82. nineteen999 ◴[] No.41897941[source]
You can always join the WordPress tantrum discussions if you're getting fatigued; that one's really jumping around here lately.
replies(1): >>41899120 #
83. threeseed ◴[] No.41897988{3}[source]
I just asked Claude to generate some code using the SAP SuccessFactors API.

Every single example was completely useless. The code wouldn't compile, it would invent methods and variables, and the instructions that went along with it were incoherent. All whilst gaslighting me along the way.

I have also previously tried using it with some Golang code and it would constantly add weird statements e.g. locking on non-concurrent operations.

LLMs are great when you are doing the same things as everyone else. Step outside of that and it's far more trouble than it's worth.

replies(2): >>41900666 #>>41900693 #
84. joshdavham ◴[] No.41898033[source]
I'm curious how this will affect cloud costs for the rest of us. On the one hand, we may get some economies of scale, but on the other hand, cloud resources being used up by others may drive prices up. Does anyone have any guesses as to what will happen?
replies(1): >>41898214 #
85. Ekaros ◴[] No.41898037{3}[source]
There is a market for AI. And it is exactly these articles, and maybe the pictures attached to them. Soon it could be some videos as well. But how far it goes beyond that is a very good question.
86. ben_w ◴[] No.41898086[source]
> but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources

I think the renewables would have been built at the same rate anyway precisely because they're so cheap; but nuclear power, being expensive, would not be built if this bubble had not happened, and somehow nuclear does seem to be getting some of this money.

replies(3): >>41898253 #>>41898260 #>>41903672 #
87. j_timberlake ◴[] No.41898094[source]
People are going to be talking about AI for the rest of your life, but feel free to go join an Amish community or live in the woods, maybe get a job as a Firewatch.
88. FeepingCreature ◴[] No.41898131{4}[source]
I mean, let me just throw in an example here: I am currently working on https://guesspage.github.io , which is basically https://getguesstimate.com but for flowtext instead of a spreadsheet. The site is ... 99.9% Claude Sonnet written. I have literally only been debugging and speccing.

Sonnet can absolutely get very confused and break things. And there were tasks where I had a really hard time getting it to do the right thing, or understand what I wanted. But I need you to understand: Sonnet made this thing for me in two and a half days of part-time prompting. That is probably ten times faster than it would have taken me on my own, especially as I have absolutely no design ability.

Now, is this a big project? No, it's like 2kloc. But I don't think you can call it "simple" exactly. It's potentially useful technology. This sort of "just make this small tool exist for me" is where I see most of the value for AI in the next year. And the definition of "small tool" can stretch surprisingly far.

replies(2): >>41898445 #>>41900028 #
89. GolfPopper ◴[] No.41898170[source]
I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
replies(12): >>41898196 #>>41898203 #>>41898630 #>>41898961 #>>41899137 #>>41899339 #>>41900217 #>>41901033 #>>41903589 #>>41903712 #>>41905312 #>>41908344 #
90. Ekaros ◴[] No.41898196[source]
I believe that there is a lot of content creation where quality really does not matter. And hallucinations don't really matter, unless they are legally actionable - something like hate speech or libel.

Throwing out dozens of articles, social media posts, and why not videos too: hallucinations really don't matter at scale. And enough content is already generating enough views to make it a somewhat viable strategy.

replies(5): >>41899664 #>>41899850 #>>41900982 #>>41901372 #>>41905356 #
91. dragonwriter ◴[] No.41898203[source]
Humans also confabulate (a better metaphor for AI errors than hallucination) when called on to respond without access to the ground truth, and most AI models have a limited combination of access, and the ability to use that access, when it comes to checking ground truth.
92. from-nibly ◴[] No.41898206[source]
The problem is that all that malinvestment will get bailed out by us regular schmucks. Get ready for the hamster wheel to start spinning faster.
93. aurareturn ◴[] No.41898214[source]
I doubt it will increase cost for traditional CPU-based clouds. Just take a look at Ampere 192 core and AMD 196 core CPUs. Their efficiency will continue to drive down traditional cloud $/perf.
replies(1): >>41899400 #
94. atomic128 ◴[] No.41898253{3}[source]
I want to point out to anyone who's interested in the nuclear angle that even before the AI data center demand story arrived, the uranium market was facing a persistent undersupply for the first time in its many decades of history. As a result, the (long-term contract) price of uranium has been steadily rising for years: https://www.cameco.com/invest/markets/uranium-price

After Fukushima (https://news.ycombinator.com/item?id=41768726), Japanese reactors were shut down and there was a glut of uranium available in the spot market. Simultaneously, Kazatomprom flooded the market with cheap ISR uranium. The price of uranium fell far below the cost of production and the mining companies were obliterated. The few miners that survived via their long-term contracts (primarily Cameco) put their less efficient mines into care and maintenance.

Now we're seeing the uranium mining business wake up. But after a decade of bear-market conditions the miners cannot serve the demand: they've underinvested, they've lost skilled labor, they've shrunk. The rebound in uranium supply will be slow, much slower than the rebound in demand. This is because uranium mining is an extremely difficult process. Look at how long NexGen Energy's Rook 1 Arrow mine has taken to develop, and that's prime ore (https://s28.q4cdn.com/891672792/files/doc_downloads/2022/03/...). Look at Kazatomprom's slowing growth rate (https://world-nuclear-news.org/Articles/Kazatomprom-lowers-2...), look at the incredible complexity of Cameco's mining operations: https://www.petersenproducts.com/articles/an-inflatable-tunn...

Here is a discussion of the uranium mining situtation: https://news.ycombinator.com/item?id=41661768 (including a very risky method of profiting from the undersupply of uranium, stock ticker SRUUF, not recommended). Note that Numerco's uranium spot price was put behind a paywall last week. You can still get the intra-day spot uranium price for free here: https://www.yellowcakeplc.com/

replies(1): >>41898979 #
95. synergy20 ◴[] No.41898260{3}[source]
Based on my reading, nuclear power is much cheaper overall compared to wind, solar, etc.?
replies(2): >>41898377 #>>41898565 #
96. dangerwill ◴[] No.41898284{3}[source]
I think some of the consternation we see from the anti-LLM crowd (of which I'm one) comes from this line of reasoning. These LLMs produce fine code when the code you are asking for is in their training set, so they can be better than a mid-level dev, and much faster, in narrow, known contexts. But with no feedback to warn you, if you ask for code that they have little or no data on, they are much worse than a rubber duck.

That and tech's status inflation means when we are talking about "mid level" engineers, really we are talking about engineers with a couple years of experience who have just graduated to the training wheels phase of producing production code. LLMs are still broadly aimed at removing the need for what I would just call junior engineers.

replies(2): >>41898719 #>>41904375 #
97. kjkjadksj ◴[] No.41898291[source]
Not all of that infrastructure gets soaked up; plenty is abandoned. Look at the state of American passenger rail, for example, and how quickly the bottom of that industry dropped out. Many old rail rights-of-way sit abandoned today. Likewise with telecoms, e.g. the microwave relay network that also sits abandoned today.
replies(1): >>41900207 #
98. aaronblohowiak ◴[] No.41898324{5}[source]
>they get enough idioms in C++ slightly incorrect

This is part of why I stay in Python when doing AI-assisted programming; there's so much training data out there for Python, and I _generally_ don't care if it's slightly off-idiom - it's still probably fine.

replies(1): >>41900097 #
99. __MatrixMan__ ◴[] No.41898327{3}[source]
AI trained on a web that's primarily about selling things
100. dangerwill ◴[] No.41898343{3}[source]
Yes, I agree wholeheartedly and I actively dislike the concept of LLMs for anything real. But nothing will be worse than the flood of outright scams or attempts at rent seeking/middle man creation that crypto was. At least LLMs have some potential use cases (and line level auto complete is genuinely better now because of LLMs)
101. atomic128 ◴[] No.41898377{4}[source]
Yes, that's right. See the recent discussion here:

https://news.ycombinator.com/item?id=41860341

Basically, nuclear fission is clean baseload power. Wind and solar are not baseload power sources. They don't really compete. See discussion here: https://news.ycombinator.com/item?id=41858892

Furthermore, we're seeing interest (from Google and Amazon and Dow Chemical) in expensive but completely safe TRISO (HALEU) reactors (https://www.energy.gov/ne/articles/triso-particles-most-robu...). These companies want clean baseload power, with no risk of meltdown, and they're willing to pay for it. Here's what Amazon has chosen: https://x-energy.com/fuel/triso-x

TRISO (HALEU) reactors use more than 1.5 times the natural uranium per unit of energy produced because the higher burnup is offset by higher enrichment inputs (see page 11 at https://fuelcycleoptions.inl.gov/SiteAssets/SitePages/Home/1...), and the fuel is even more expensive to manufacture, but they are completely safe. This is a technology from the 1960's but it's attractive now because so much money is chasing clean baseload nuclear fission for data centers.

These "impossible to melt down" TRISO small modular nuclear fission reactors are what Elon Musk was talking about on the campaign trail last week, when he said:

  ELON MUSK: "The dangers of nuclear power are greatly
  overstated. You can make a nuclear reactor that is
  literally impossible to melt down even if you tried to
  melt it down. You could try to bomb the place, and it
  still wouldn't melt down. There should be no regulatory
  issues with that. There should be significant nuclear 
  reform."
https://x.com/AutismCapital/status/1847452008502219111
replies(1): >>41898598 #
102. 123yawaworht456 ◴[] No.41898392{3}[source]
holy shit, if that isn't satire... wow, just fucking wow.
103. dangerwill ◴[] No.41898404{6}[source]
No one cares that blockchains are immutable; that doesn't mean the information written there is correct, just that it was written on X date with that content. You could find proof of the biggest scandal of all time and post it on the blockchain so "the man" can't stop the word from getting out, but 99.99999999% of readers would read a version presented by a simple web server, with the value cached in a closed database or memory - which, if the government wants to take it down, it can. And if that does go down, no one will have saved a link to the entry on the blockchain. On the flip side, I could write obvious falsehoods in the same way. Blockchain provides no value for legal attestation or information distribution.

In practice, the vast majority of blockchain ledgers record the history of scams, penny stock style trading and money laundering attempts.

replies(1): >>41900074 #
104. dangerwill ◴[] No.41898426[source]
Generated text != intelligence.
105. jacobgorm ◴[] No.41898436[source]
Railroads and computer networks create network effects, I am not sure the same is true for data centers full of hardware that becomes outdated very quickly.
replies(2): >>41898465 #>>41903847 #
106. hnthrowaway6543 ◴[] No.41898445{5}[source]
This is a simple project. Nobody is disputing that GenAI can automate a large chunk of the initial setup work, which dominates the time spent on small projects like this. But 99.999% of professional, paid software development is not working on the basic React infrastructure for a 2,000 loc javascript app.

Also your Google Drive API key is easily discoverable with about 15 seconds of looking at the JS source code -- this is something a professional software developer would (hopefully) have picked up without you asking, but an LLM isn't going to tell you that you shouldn't ship the `const API_KEY = ...` code as a file to the client, because you didn't ask.

replies(1): >>41898572 #
107. CSMastermind ◴[] No.41898465{3}[source]
If they're building new power plants to support all those data centers, then that power-generation capacity might be put to good use doing something else.
replies(2): >>41903765 #>>41903844 #
108. openrisk ◴[] No.41898466[source]
There is an interesting contrast between the phenomenal investment boom and functionally zero job growth...

> Even the software publishers and computing infrastructure industries at the forefront of this AI boom have seen functionally zero net employment growth over the last year - the dismal job market that has beleaguered recent computer science graduates simply has not improved much.

... which may explain, in broad brush, the polarised HN attitude: bitter cynics on one side and aggressive zealots on the other.

109. jiggawatts ◴[] No.41898540[source]
Something people forget is that a training cluster with tens of thousands of GPUs is a general purpose supercomputer also! They can be used for all sorts of numerical modelling codes, not just AI. Protein folding, topology optimisation, route planning, satellite image processing, etc…

We bought a lot of shovels. Even if we don’t find more gold, we can dig holes for industry elsewhere.

replies(1): >>41899552 #
110. danielmarkbruce ◴[] No.41898544{5}[source]
There isn't a rule as to how it plays out. No huge winners in cars, no huge winners in rail. Lots of huge winners in internet.
replies(1): >>41899300 #
111. ViewTrick1002 ◴[] No.41898565{4}[source]
Not at all. Old paid-off nuclear plants are competitive, but new builds are insanely expensive, leading to $140-220/MWh prices for ratepayers before factoring in grid stability and transmission costs.[1]

The US has zero commercial reactors under construction and this is for one reason: economics.

The recent announcements from the hyperscalers are PPAs. If the company building the reactor can provide power at the agreed price they will take it off their hands. Thus creating a more stable financial environment to get funding.

They are not investing anything of their own. For a recent example, NuScale, another SMR developer, essentially collapsed when its Utah deal fell through, once nice renders and PowerPoints met real-world costs and deadlines. [2]

[1]: https://www.lazard.com/media/gjyffoqd/lazards-lcoeplus-june-...

[2]: https://iceberg-research.com/2023/10/19/nuscale-power-smr-a-...

replies(2): >>41898819 #>>41899855 #
112. FeepingCreature ◴[] No.41898572{6}[source]
> This is a simple project.

I mean, it would have taken me a lot longer on my own. Sure it's not a huge project, I agree; I wouldn't call it entirely trivial.

> Also your Google Drive API key is easily discoverable with about 15 seconds of looking at the JS source code

No, I'm aware of that. That's deliberate. There's no way to avoid it for a serverless webapp. (Note that Guesspage is entirely hosted on Github Pages.) All the data stored is public anyways, the key is limited to only have permission to access the stored data, and you still have to log in and grab a token that is only stored in your browser and cannot be accessed from other sites. Literally the only unique thing you can do with it is trigger a login request on your own site that looks like it comes from Guesspage; and you can do that just as easily by creating a new API key and setting its name to "Guesspage".

The AI actually told me that was unsafe, and I corrected it. To the best of my understanding, the only thing that you can do with the API key is do Google Drive uploads to your own drive or that of someone who lets you that look to Google as if my app is triggering them. If there's a danger that can arise from that, and I don't think there is, then it's on me, not on Sonnet.

(It's also referer domain limited, but that's worthless. If only there was a way to cryptographically sign a referer...)

replies(1): >>41900008 #
113. ViewTrick1002 ◴[] No.41898598{5}[source]
> Basically, nuclear fission is clean baseload power. Wind and solar are not baseload power sources. They don't really compete.

This means you don't understand how the grid works. California's baseload is ~15 GW while it peaks at 50 GW.

New-build nuclear power is wholly unsuitable for load-following duty due to the economics. It is already an insane proposition when running at 100% 24/7, and even worse when it has to ramp down to follow load.

Both nuclear power and renewables need storage, flexibility or other measures to match their inflexibility to the grid.

See the recent study where it was found that nuclear power needs to come down 85% in cost to be competitive with renewables, due to both options requiring dispatchable power to meet the grid load.

> The study finds that investments in flexibility in the electricity supply are needed in both systems due to the constant production pattern of nuclear and the variability of renewable energy sources. However, the scenario with high nuclear implementation is 1.2 billion EUR more expensive annually compared to a scenario only based on renewables, with all systems completely balancing supply and demand across all energy sectors in every hour. For nuclear power to be cost competitive with renewables an investment cost of 1.55 MEUR/MW must be achieved, which is substantially below any cost projection for nuclear power.

https://www.sciencedirect.com/science/article/pii/S030626192...

> These companies want clean baseload power, with no risk of meltdown, and they're willing to pay for it. Here's what Amazon has chosen

The recent announcements from the hyperscalers are PPAs. If the company building the reactor can provide power at the agreed price they will take it off their hands. Thus creating a more stable financial environment to get funding.

They are not investing anything on their own. For a recent example NuScale another SMR developer essentially collapsed when their Utah deal fell through when nice renders and PowerPoints met real world costs and deadlines.

https://iceberg-research.com/2023/10/19/nuscale-power-smr-a-...

> with no risk of meltdown

Then we should be able to remove the enormous subsidy the Price Anderson act adds to the industry right? Let all new reactors buy insurance for a Fukushima level accident in the open market.

Nuclear powerplants are currently insured for ~0.05% of the cost of a Fukushima style accident and pooled together the entire US industry covers less than 5%.

https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...

114. edanm ◴[] No.41898630[source]
You don't really need to imagine this though - generative AI is already extremely useful in many non-nice applications.
replies(1): >>41900379 #
115. tim333 ◴[] No.41898651{3}[source]
I was an original investor. I still have shirts and shares in it but they could have done better.
116. whiplash451 ◴[] No.41898703[source]
My experience is similar. I used Claude for a coding task recently and it drove me into an infinite number of rabbit holes, each one seeming worse than the previous one, all the while being unable to stop and say: I'm sorry, I actually don't know how to help you.
117. jackcosgrove ◴[] No.41898712{3}[source]
> One gauge I use to determine if we are still at the beginning of the boom is this

Has your barber/hairdresser recommended you buy NVDA?

replies(2): >>41898939 #>>41903766 #
118. whiplash451 ◴[] No.41898719{4}[source]
That and the fact that code does not live in a standalone bubble, but in a complex setup of OSes, APIs, middleware and other languages. My experience trying to use Claude to help me with that was disappointing.
replies(1): >>41905443 #
119. m101 ◴[] No.41898811[source]
There is a comment on this thread about this being like the railroads, but this is nothing like the railroads except insofar as it costs a lot of money.

The railroads have lasted decades and will remain relevant for many more decades. They slowly wear out, and they are the most efficient form of land transport.

These hardware investments will all be written off in 6 years time and won't be worth running given the power costs and relative output. They will be junked.

There's also the extra risk that for some reason future AI systems just don't run efficiently on current gen hardware.

replies(5): >>41899046 #>>41900169 #>>41900225 #>>41903674 #>>41905422 #
120. floren ◴[] No.41898819{5}[source]
> leading to $140-220/MWh prices for the ratepayers

I'm on PG&E, I wish I could get my electricity for only $0.14/kWh

replies(1): >>41898926 #
121. thwarted ◴[] No.41898863{6}[source]
An LLM is an application that runs on an operating system like any other application. That the vendor of the operating system has tied it to the operating system is purely a marketing/force-it-onto-your-device/force-it-in-front-of-your-face play. It's forced bundling, just like Microsoft did with Internet Explorer 20 years ago.
replies(1): >>41899134 #
122. bubbleRefuge ◴[] No.41898872{6}[source]
Wow! Rare to see someone get it. MMT follower? Would add that the money printing is being distributed proportionally to the wealthy under a Democrat regime. Pretty sad.
123. bubbleRefuge ◴[] No.41898884{7}[source]
Ask Argentina about that. They finally started reducing rates and it's working somewhat.
124. reissbaker ◴[] No.41898924{7}[source]
We might be even earlier — the 90s was a famous boom with a fast bust, but to me this feels closer to the dawn of the personal computer in the late 70s and early 80s: we can automate things now that were impossible to automate before. We might have a long time before seeing diminishing returns.
125. ViewTrick1002 ◴[] No.41898926{6}[source]
That cost is excluding grid stability and transmission costs.

From what I’ve understood PG&E’s largest problem is the massive payouts and infrastructure upgrades needed from the wildfires, not the cost of the electricity itself.

replies(1): >>41911465 #
126. arach ◴[] No.41898939{4}[source]
There was an NVDA earnings watch party in NY this summer and Jensen signed some boobs earlier this year. There are some signs but still room to run
127. bbor ◴[] No.41898957{3}[source]
I think your analysis is spot on, based on my expertise of "reading too much Hacker News every day since March 2023". This Bain report[1] barely mentions "Independent Software Vendors", and when it does it's clearly an afterthought; the only software-focused takes I can find[2] are extremely speculative, e.g.

  "For every dollar invested in hardware, we expect 8-to-20 times the amount to be spent on software... While the initial wave of AI investment lays the foundational infrastructure, the next wave is clearly set to capitalise on the burgeoning AI software market."
I'm hoping someone more knowledgeable about capital markets can fill us in here, I'd be curious to see some hard numbers still! Maybe this is what a Bloomberg terminal does...?

Regardless, I think this makes a lot of sense; there's no clear scientific consensus on the path forward for these models other than "keep going?", so building out preparatory infrastructure is seen as the clear, safe move. As the common refrain goes: "in a gold rush, sell shovels!"

As a big believer in the upcoming cognitive era of software and society, I would only add a short bit onto the end of that saying: "...until the hydraulic mining cannons[3] come online."

[1] https://www.bain.com/insights/ais-trillion-dollar-opportunit...

[2] https://www.privatebankerinternational.com/comment/is-ai-sof...

[3] https://en.wikipedia.org/wiki/Hydraulic_mining

replies(1): >>41900553 #
128. zone411 ◴[] No.41898961[source]
Confabulations are decreasing with newer models. I tested confabulations based on provided documents (relevant for RAG) here: https://github.com/lechmazur/confabulations/. Note the significant difference between GPT-4 Turbo and GPT-4o.
replies(3): >>41900075 #>>41900092 #>>41905577 #
129. ben_w ◴[] No.41898979{4}[source]
Uranium, at least the un-enriched kind you can just buy, was never the problem.

Even the peak of that graph (136… er, USD per lb?) is essentially a rounding error compared to everything else.

0.00191 USD/kWh? Something like that, depends on the type of reactor it goes in.

replies(1): >>41899042 #
130. tim333 ◴[] No.41899018{3}[source]
You can never really tell, though following some market tea leaf readers they seem to think a few months from now, after a bit of a run up in the market. Here's one random datapoint, on mentions of "soft landing" in Bloomberg https://x.com/bravosresearch/status/1848047330794385494
replies(1): >>41899100 #
131. torginus ◴[] No.41899019[source]
I am very skeptical of the positive effects of infinite intelligence on the living standards of knowledge workers.

On the more pessimistic end, AI will replace us and we'll be sent to the coal mines.

On the possibly most optimistic end, living standards are a composite of many things rooted in reality, so I'd say the actual cap is about a doubling of life quality, which is not nothing, but not unprecedented if we look at the past century and a half.

replies(1): >>41900164 #
132. mvdtnz ◴[] No.41899040{4}[source]
My workplace uses Glean, and since it was connected to Slack it has become significantly worse. It routinely gives incorrect or VERY incomplete information, misattributes work to developers who may have casually mentioned a project at some time, and worst of all presents jokes or sarcastic responses as fact.

Not only is it an extremely poor source of information, it has ruined the company's Slack culture as people are no longer willing to (for lack of a better term) shitpost knowing that their goofy sarcasm will now be presented to Glean users as fact.

replies(2): >>41899457 #>>41906216 #
133. atomic128 ◴[] No.41899042{5}[source]
You are correct. This is one of the advantages of nuclear power.

The fuel is a tiny fraction of the cost of running the plant. See discussion here, contrasting with natural gas: https://news.ycombinator.com/item?id=41858892

It is also important that the fuel is physically small so you can (and typically, do) store years of fuel on-site at the reactor. Nuclear is "secure" in the sense that it can provide "energy security".

replies(1): >>41899398 #
134. tim333 ◴[] No.41899046[source]
Some stuff like the buildings and power supplies will probably remain good. But yeah, probably new chips in a short while.
replies(1): >>41899310 #
135. kibwen ◴[] No.41899071{6}[source]
> keeping them high

At some point we need to address the elephant in the room and ask people specifically what they mean by "high" rates, because 5% isn't particularly high by historical terms, it's only high for people who never paid attention to interest rates before 2010.

replies(1): >>41904684 #
136. aurareturn ◴[] No.41899100{4}[source]
I read through a few pages of tweets from this author and it looks just like another perpetual doomsday pundit akin to Zerohedge.
replies(1): >>41899171 #
137. kevindamm ◴[] No.41899120{3}[source]
I'm designing an extension of datalog instead.
138. aurareturn ◴[] No.41899134{7}[source]
I predict that OpenAI will try to circumvent iOS and Android by making their own device. I think it will be similar to Rabbit R1, but not a scam, and a lot more capable.

They recently hired Jony Ive on a project - it could be this.

I think it'll be a long term goal - maybe in 3-4 years, a device similar to the Rabbit R1 would be viable. It's far too early right now.

replies(5): >>41900088 #>>41900923 #>>41900926 #>>41902275 #>>41903291 #
139. sean_pedersen ◴[] No.41899137[source]
https://github.com/stanford-oval/WikiChat
140. tim333 ◴[] No.41899171{5}[source]
Well yeah there may be a bit of that. I find them quite interesting for the data they bring up like the linked tweet but I don't really have an opinion as to whether they are any good at predicting things.

I was thinking, re the data in the tweet, that there were a lot of mentions of "soft landing" before the dot com crash, before the 2006 property crash, and now. It is quite likely there was an easy money policy preceding each of them. Government policy mostly focuses on consumer price inflation and unemployment, so they relax when those are both low, then hit the brakes when inflation goes up, and then it moderates and things look good, similar to now. But that ignores that easy money can also inflate asset prices, e.g. dot com stocks, houses in '06, or money-losing AI companies like now. And then at some point that ends and the speculative asset prices go down rather than up, leaving people thinking: oh dear, we've borrowed to put loads of money into that dotcom/house/AI thing and now it's not worth much and we still have the debts...

At least that's my guess.

replies(2): >>41901413 #>>41903649 #
141. Terr_ ◴[] No.41899201{3}[source]
> Does Slack sell an LLM chatbot solution that is able to give me reliable answers to business/technical decisions made over the last 2 years in chat?

Note that the presence of such a feature isn't the same as whether it's secure enough for normal use.

In particular, anything anyone said in the last 2 years in chat could poison the LLM into exfiltrating your data or giving false results chosen by the attacker, because of the fundamental problems of LLMs.

https://promptarmor.substack.com/p/data-exfiltration-from-sl...

142. andxor ◴[] No.41899214[source]
I don't take financial advice from HN and it has served me well.
replies(1): >>41900111 #
143. aurareturn ◴[] No.41899221{6}[source]
I think OpenAI has already proven that it's a viable product. Their gross margins must be decent. I doubt they're making a loss for every token they inference.
replies(1): >>41899461 #
144. aurareturn ◴[] No.41899300{6}[source]
There were huge winners in cars. Ford and GM have historically been huge companies. Then oil companies became the biggest companies in the world mostly due to cars.
replies(1): >>41900792 #
145. jacurtis ◴[] No.41899310{3}[source]
Power plants and power infrastructure are probably an example of a positive consequence that comes from this.

We have been terrified to whisper the words "nuclear power" for decades now, but the AI boom is likely to put enough demand on the power grid that it forces us to face this reality and make appropriate buildouts.

Even if the AI Boom crashes, these power plants will have positive impacts on the country for decades, likely centuries to come. Keeping bountiful power available and likely low-cost.

replies(2): >>41899434 #>>41903541 #
146. jacurtis ◴[] No.41899339[source]
I've never met a human that doesn't "hallucinate" either. Humans either intentionally lie or unintentionally fill in gaps in their knowledge with assumptions or inaccurate information. Most human-generated content on social media is inaccurate, to an even higher percentage than what ChatGPT gives me.

I guess humans are worthless as well since they are notoriously unreliable. Or maybe it just means that artificial intelligence is more realistic than we want to admit, since it mimics humans exactly as we are, deficiencies and all.

This is kind of like the self-driving car debate. We don't want to allow self-driving cars until we can guarantee that they have a zero percent failure rate.

Meanwhile we continue to rely on human drivers which leads to 50,000 deaths per year in America alone, all because we refuse to accept a failure rate of even one accident from a self-driving car.

replies(2): >>41899534 #>>41904280 #
147. ben_w ◴[] No.41899398{6}[source]
It would only be an advantage if everything else in the power plant weren't so expensive.

And I'm saying that as someone who finds all this stuff cool and would like to see it used in international shipping.

replies(1): >>41899493 #
148. joshdavham ◴[] No.41899400{3}[source]
> I doubt it will increase cost for traditional CPU-based clouds.

Yeah I think you’re right about that. But what about GPU’s? Will they benefit from economies of scale or the opposite?

149. WillyWonkaJr ◴[] No.41899434{4}[source]
It is so bizarre that reducing pollution was not a sufficient driver to build more nuclear power, but training AI models is.
replies(4): >>41902188 #>>41904922 #>>41905710 #>>41907077 #
150. dcsan ◴[] No.41899457{5}[source]
Maybe have some off limits to glean shit posting channels?
151. HarHarVeryFunny ◴[] No.41899461{7}[source]
I don't think they've broken out O1 revenue, but it must be very small at the moment since it was only just introduced. Their O1-preview pricing doesn't seem to reflect the exponential compute cost, so perhaps it is not currently priced to be profitable. Overall, across all models and revenue streams, their revenue does exceed inference costs ($4B vs $2B), but they still are projected to lose $5B this year, $14B next year, and not make a profit until 2029 (and only then if they've increased revenue by 100x ...).

Training costs are killing them, and it's obviously not sustainable to keep spending more on research and training than the revenue generated. Training costs are expected to keep growing fast, while revenue per token in/out is plummeting - they need massive inference volume to turn this into a profitable business, and need to pray that this doesn't turn into a commodity business where they are not the low cost producer.

https://x.com/ayooshveda/status/1847352974831489321

https://x.com/Gloraaa_/status/1847872986260341224

replies(1): >>41900349 #
152. atomic128 ◴[] No.41899493{7}[source]
Discussed at length here: https://news.ycombinator.com/item?id=41863388

I already linked this above, twice. I know it's a hassle to read, it's Sunday afternoon, so don't worry about it.

It's not important whether you as an individual get this right or not, as long as society reaches the correct conclusion. Thankfully, we're seeing that happen, a worldwide shift toward the adoption of nuclear power.

Have a pleasant evening!

153. tim333 ◴[] No.41899534{3}[source]
It's not quite the case with cars though - people are ok with Waymos which are not zero accident but probably safer than human drivers. The trouble with other systems like Tesla FSD is they are probably not safer than human yet if you don't have a human there nannying them.

Similarly I think people will be ok with other AI if it performs well.

154. fhdsgbbcaA ◴[] No.41899552{3}[source]
I think there is an LLM bubble for sure, but I’m very bullish on the ease with which one can generate new specialized models for various tasks that are not LLM.

For example, there’s a ton of room for developing all kinds of low latency, highly reliable, embedded classifiers in a number of domains.

It’s not as gee-whiz/sci-fi as an LLM demo, but I think potentially much bigger impact over time.

replies(2): >>41900350 #>>41901612 #
155. tim333 ◴[] No.41899595[source]
Of the top 30 HN stories of the last month (https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...)

only 6 were AI, the highest being "OpenAI to become for-profit" coming at number 10. Top story was "Bop Spotter" followed by Starship and click to cancel.

replies(1): >>41904748 #
156. layer8 ◴[] No.41899607{5}[source]
> If it's knowledge (the intricacies of some arcane OS API, for example), an LLM can do very well

Only if that knowledge is sufficiently represented in the training data or on the web. If, on the other hand, it’s knowledge that isn’t well (or at all) represented, and instead requires experience or experimentation with the relevant system, LLMs don’t do very well. I regularly fail with applying LLMs to tasks that turn out to require such “hidden” knowledge.

replies(2): >>41901007 #>>41907553 #
157. wslh ◴[] No.41899652[source]
I wonder what lessons the current hardware-intensive AI boom could learn from the history of proof-of-work (PoW) mining, particularly regarding energy consumption, hardware specialization, and market dynamics.
158. fsndz ◴[] No.41899659[source]
Building robust LLM-based applications is token-intensive. You often have to plan for the parsing and digestion of a lot of tokens for summarization or even retrieval augmented generation. Even the mere generation of marketing blogposts consumes a lot of output tokens in most cases. Not to mention that robust cognitive architectures often rely on the generation of several samples for each prompt, custom retry logic, feedback loops, and reasoning tokens to achieve state-of-the-art performance, all of which are heavily token-intensive.

Luckily, the cost of intelligence is quickly dropping. GPT-4, one of OpenAI’s most capable models, is now priced at $2.5 per million input tokens and $10 per million output tokens. At its initial release in March 2023, the cost was respectively $10/1M input tokens and $30/1M for output tokens. That’s a huge $7.5/1M input tokens and $20/1M output tokens reduction in price. https://www.lycee.ai/blog/drop-o1-preview-try-this-alternati...
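To put the quoted price drop in concrete terms, here is a rough back-of-envelope comparison for a single token-heavy call. The workload size is a made-up assumption; only the per-million-token prices come from the paragraph above:

    // Prices quoted above, in $ per 1M tokens.
    const launch = { input: 10, output: 30 };
    const today = { input: 2.5, output: 10 };

    // Hypothetical RAG-style call: 50k input tokens of retrieved context + prompt, 2k output.
    const inputTokens = 50_000;
    const outputTokens = 2_000;

    const cost = (p: { input: number; output: number }) =>
      (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;

    console.log(cost(launch).toFixed(3)); // ≈ $0.560 per call at launch pricing
    console.log(cost(today).toFixed(3));  // ≈ $0.145 per call today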

159. flashman ◴[] No.41899664{3}[source]
> quality really does not matter

What an inspiring vision for the future of news and entertainment.

160. Mengkudulangsat ◴[] No.41899790{3}[source]
All those future spare GPUs will make video game streaming dirt cheap.

Even poor people can enjoy 8k gaming on a phone soon.

replies(1): >>41900711 #
161. xk_id ◴[] No.41899850{3}[source]
It amazes me the level of nihilism needed to talk about this with casual indifference.
162. synergy20 ◴[] No.41899855{5}[source]
Thanks! I always thought it was due to people's safety concerns rather than economic reasons. After all, nuclear plants are quite 'popular' in Europe, and in China too these days.
replies(2): >>41900266 #>>41903154 #
163. WgaqPdNr7PGLGVW ◴[] No.41900008{7}[source]
> I wouldn't call it entirely trivial.

It just doesn't represent a realistic codebase. It is significantly smaller than a lot of college projects.

The current software system I'm working on now is ~2 million lines of code split across a dozen services.

AI has been pretty good for search across the codebases and absolutely hopeless for code gen.

LLMs just aren't that good yet for writing code on a decent sized system.

replies(1): >>41901386 #
164. mvdtnz ◴[] No.41900028{5}[source]
This is a ludicrously simple app and also - the code[0] is of very poor quality.

[0] https://github.com/Guesspage/guesspage.github.io/blob/master...

replies(1): >>41901393 #
165. Melting_Harps ◴[] No.41900074{7}[source]
> In practice, the vast majority of blockchain ledgers record the history of scams, penny stock style trading and money laundering attempts.

It doesn't take long before they make themselves present. Thanks for proving my point.

166. ◴[] No.41900075{3}[source]
167. marcus_holmes ◴[] No.41900088{8}[source]
Even if this is true (and I'm not saying it's not), they probably won't create their own OS. They'd be smarter to do what Apple did and clone a BSD (or similar) rather than start afresh.
replies(2): >>41901274 #>>41903781 #
168. tkgally ◴[] No.41900092{3}[source]
That’s very interesting! Thanks for the link.
169. ryandrake ◴[] No.41900097{6}[source]
Yea, I was thumbs-down on ai-assisted programming because when I tested it out, I tried it by adding things to my existing C and C++ projects, and its suggestions were... kind of wild. Then, a few months later I gave it another chance when I was writing some Python and was impressed. Finally, I used it on a new-from-blank-text-file Rust project and was pretty much blown away.
replies(4): >>41900253 #>>41900255 #>>41900878 #>>41901107 #
170. 3abiton ◴[] No.41900111[source]
How do you know if you had have taken some advice, it might have served you better?
replies(1): >>41900533 #
171. Eisenstein ◴[] No.41900123{5}[source]
Considering that models have been getting more powerful for the same number of parameters -- all of it.
replies(1): >>41903853 #
172. Eisenstein ◴[] No.41900164{3}[source]
The ideal scenario in my mind is that intelligent and benevolent AI takes over running things and then people have to figure out how to make their life meaningful with only leisure time.

There is no long-term time scale in which humans running things do not obliterate each other or end up sitting on a planet filled with trash.

Either we figure out how to colonize other planets or we hand over the reins to something that can plan long term and not be irrational.

Maybe if we figure out immortality it might work, but with the short life span of a human there is no way not to be short-sighted, or to avoid eventually ending up with the wrong person in charge of the button.

replies(2): >>41900314 #>>41901110 #
173. ◴[] No.41900169[source]
174. jillesvangurp ◴[] No.41900192{3}[source]
I think a better analogy is the valuation of Intel vs. that of Microsoft. For a long time, Intel dominated the CPU market. So you'd expect them to be the most valuable company. Instead, a small startup called Microsoft started dominating the software market and eventually became the most valuable company on the planet. The combined value of the software market is orders of magnitudes larger than that of all chip makers combined and has been for quite some time. The only reason people buy hardware is to use software.

The same is going to happen with AI. Yes, Nvidia and their competitors are going to do well. But most of the value will be in software ultimately.

GPUs and data centers are just infrastructure. Same for the electricity generation needed to power all that. The demand for AI is causing there to be a lot of demand for that stuff. And that's driving the cost of all of it down. The cheapest way to add power generation is wind and solar. And both are dominating new power generation addition. Chip manufacturers are very busy making better, cheaper, faster etc. chips. They are getting better rapidly. It's hard to see how NVidia can dominate this market indefinitely.

AI is going to be very economical long term. Cheap chips. Cheap power. Lots of value. That's why all the software companies are busy ramping up their infrastructure. IMHO investing in expensive nuclear projects is a bit desperate. But I can see the logic of not wanting to fall behind for the likes of Amazon, Google, MS, Apple, etc. They can sure afford to lose some billions and it's probably more important to them to have the power available quickly than to get it cheaply.

replies(4): >>41900408 #>>41900419 #>>41901103 #>>41902728 #
175. sparcpile ◴[] No.41900206[source]
The AI bubble will pop in the next year. We are currently in 1998 of the dotcom bubble with another AI winter approaching. LLM and generative AI are this year’s “on the Internet” or “Uber for X” business plans.
replies(3): >>41900293 #>>41900395 #>>41904222 #
176. yowayb ◴[] No.41900207{3}[source]
That may be true now, but Europe has had a resurgence in rail usage growth recently, so I'm not sure it's true forever.
177. harimau777 ◴[] No.41900217[source]
Aren't humans the main alternative to AI? And they seamlessly hallucinate as well.
178. segmondy ◴[] No.41900225[source]
Maybe, maybe not. Nvidia P40 GPUs were released 8 years ago and they are in hot demand: prices have doubled in the last year. Nothing will be written off in 6 years. Folks will be using A100s 15 years from now. 7-year-old V100 32GB GPU cards are still going for $1500 on eBay. We are more likely to invent a more efficient software architecture than to invent a new type of better hardware and replace everything that exists.
replies(1): >>41904817 #
179. ffujdefvjg ◴[] No.41900253{7}[source]
As someone who doesn't generally program, it was pretty good at getting me an init.lua set up for nvim with a bunch of plugins and some functions that would have taken me ages to do by hand. That said...it still took a day or two of working with it and troubleshooting everything, and while it's been reliable so far, I worry that it's not exactly idiomatic. I don't know enough to really say.

What it's really good at is taking my description of something and pointing me in the right direction to do my own research.

(two things that helped me with getting decent code were to describe the problem and desired solution, followed by a "Does that make sense?". This seems to get it to restate the problem itself and produce better solutions. The other thing was to copy the output into a fresh session, ask for a description of what the code does and what improvements could be made)

replies(2): >>41900331 #>>41900420 #
180. rayxi271828 ◴[] No.41900255{7}[source]
Wouldn't AI be worse at Rust than at C++ given the amount of code available in the respective languages?
replies(1): >>41900497 #
181. dalyons ◴[] No.41900266{6}[source]
It’s not popular in Europe at all.
replies(1): >>41900320 #
182. ant6n ◴[] No.41900293[source]
Will the AI bubble popping bring down the rest of the startup industry, or allow for more investment in non-AI tech?

Climate related tech needs more money.

replies(1): >>41904112 #
183. datavirtue ◴[] No.41900309[source]
I'm investing in undervalued businesses who are selling the shovels.
replies(1): >>41900537 #
184. ant6n ◴[] No.41900314{4}[source]
> people have to figure out how to make their life meaningful with only leisure time.

Like religion or nationalism fueling some wars.

185. synergy20 ◴[] No.41900320{7}[source]
France derives about 70% of its electricity from nuclear energy.

For Europe overall is 22%.

replies(1): >>41903810 #
186. skydhash ◴[] No.41900331{8}[source]
Not saying that it’s a better way, but I started with vim by copying someone's conf (on GitHub), removing all extraneous stuff, then slowly familiarizing myself with the rest. Then it was a matter of reading the docs when I wanted some configuration. I believe the first part is faster than dealing with an LLM, especially when dealing with unfamiliar software.
replies(1): >>41900535 #
187. nl ◴[] No.41900349{8}[source]
The thing is that OpenAI can choose to spend less on training at any time.

We've seen this before, with for example Amazon, where they made a deliberate effort to avoid profitability by spending as much as possible on infrastructure until the revenue became so much that they couldn't spend it.

Being highly cash-flow positive, with strategic investment as the main cost, seems like a good position to be in.

replies(1): >>41903153 #
188. datavirtue ◴[] No.41900350{4}[source]
Spot on.
189. jmathai ◴[] No.41900379{3}[source]
There's a camp of people who are hyper-fixated on LLM hallucinations as being a barrier for value creation.

I believe that is so far off the mark for a couple reasons:

1) It's possible to work around hallucinations in a more cost effective way than relying on humans to always be correct.

2) There are many use cases where hallucinations aren't such a bad thing (or even a good thing) for which we've never really had a system as powerful as LLMs to build for.

There's absolutely very large use cases for LLMs and it will be pretty disruptive. But it will also create net new value that wasn't possible before.

I say that as someone who thinks we have enough technology as it is and don't need any more.

replies(3): >>41900417 #>>41900450 #>>41904140 #
190. datavirtue ◴[] No.41900395[source]
The market PE is very high and has been high for a while. Tech and AI are fueling a lot of that. Look what happened to Tesla when the fundamentals started getting a little discouraging. However, I'm not comfortable predicting the "AI bubble" popping next year.
191. datavirtue ◴[] No.41900408{4}[source]
A lot of it is green washing. Some of it might be signals to the competition or other vendors. This AI buildup is going to be a knock-down-drag-out.
192. datavirtue ◴[] No.41900417{4}[source]
Yeah, they just want it to go away. The same way they wish Windows and GUIs and people in general would just go away.
replies(1): >>41904205 #
193. p1esk ◴[] No.41900419{4}[source]
Nvidia is a software company that also happens to make decent hardware. People buy their hardware because of their software. And it’s not just CUDA. Nvidia is building a whole bunch of potentially groundbreaking software products. Listen to the last GTC keynote to learn more about it.
194. komali2 ◴[] No.41900420{8}[source]
The downside of this nvim solution is the same downside as pasting big blobs of AI code into a repo or pasting big vim configs you find online into your vimrc: an inability to explain the pasted code.

When you need something fast for whatever reason, sure. But later, when you want to tweak or add something, you'll have to finally sit down and learn basically the whole thing, or at least a major part of it, anyway. IMO it's better to do that from the start, but sometimes that's not ideal.

replies(1): >>41901480 #
195. ◴[] No.41900426[source]
196. babyent ◴[] No.41900450{4}[source]
For sure: sending customers into a never-ending loop when they want support. That's been my experience with most AI support so far. It sucks. I like Amazon's approach, where they have a basic chat bot (probably doesn't even use LLMs) that then escalates to an actual human being in some low-cost country.

I kind of like the Chipotle approach. If I have a problem with my order, it just refunds me instantly and sometimes gives me an add-on for free.

Honestly I only use LLM for one thing - I give it a set of TS definitions and user input, and ask it to fit those schemas if it can and to not force something if it isn't 100% confident.
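A minimal sketch of that schema-fitting pattern, in case it's useful to anyone. `callLLM` is a hypothetical stand-in for whatever chat client you use, and the schema and prompt wording are illustrative:

    import { z } from "zod";

    const Ticket = z.object({
      category: z.enum(["billing", "shipping", "other"]),
      summary: z.string(),
    });

    async function extractTicket(
      userInput: string,
      callLLM: (prompt: string) => Promise<string>,
    ) {
      const prompt =
        `Fit the user's message into this JSON shape, or reply exactly "UNSURE" ` +
        `if you are not confident:\n` +
        `{ "category": "billing" | "shipping" | "other", "summary": string }\n\n` +
        `Message: ${userInput}`;
      const raw = await callLLM(prompt);
      if (raw.trim() === "UNSURE") return null; // the model declined rather than forcing a fit
      try {
        const parsed = Ticket.safeParse(JSON.parse(raw)); // never trust the output blindly
        return parsed.success ? parsed.data : null;
      } catch {
        return null; // not even valid JSON
      }
    }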

I know some people whose whole company is based around the use of AI to send emails or messages, and in reality they're logged into their terminals real time fixing errors before actually sending out the emails. Basically, they are mechanical turks and they even say they're looking at labor in India or Africa to pay them peanuts to address these.

197. reverius42 ◴[] No.41900497{8}[source]
Maybe this is a case where more training data isn’t better. There is probably a lot of bad/old C++ out there in addition to new/modern C++, compared to Rust which is relatively all modern.
replies(1): >>41904660 #
198. andxor ◴[] No.41900533{3}[source]
Indeed, downvoted advice has done great.
199. ffujdefvjg ◴[] No.41900535{9}[source]
I agree with this approach generally, but I needed to use some lua plugins to do something specific fairly quickly, and didn't feel like messing around with it for weeks on end to get it just right.
200. 0xDEAFBEAD ◴[] No.41900537{3}[source]
That's the funny thing about the AI boom. There's way more hype about shovel sellers than shovel buyers.

No one is getting excited about "AI slop", just the models that generate it. Funny situation.

replies(2): >>41902203 #>>41908970 #
201. 0xDEAFBEAD ◴[] No.41900553{4}[source]
>As the common refrain goes: "in a gold rush, sell shovels!"

What happens when everyone constructs a shovel factory, the shovels become dirt cheap, but there are no buyers?

202. lelanthran ◴[] No.41900633[source]
> Reading this makes me willing to bet that this capital intensive investment boom will be similar to other enormous capital investment booms in US history, such as the laying of the railroads in the 1800s, the proliferation of car companies in the early 1900s, and the telecom fiber boom in the late 1900s. In all of these cases there was an enormous infrastructure (over) build out, followed by a crash where nearly all the companies in the industry ended up in bankruptcy, but then that original infrastructure build out had huge benefits for the economy and society as that infrastructure was "soaked up" in the subsequent years.

> I think that will happen here.

Why? The rail network, road network and fiber network that was laid could be used for decades after their original investors went bust.

The current datacenters full of AI compute can't really be used for anything else if AI companies go bust.

That's the problem with investing in compute infrastructure - you need to have a plan to use it all up in the next 5 years, because after that you wouldn't even be able to give it away.

replies(3): >>41900917 #>>41902633 #>>41903793 #
203. dankwizard ◴[] No.41900666{4}[source]
"LLMs are great when you are doing the same things as everyone else. Step outside of that and it's far more trouble than it's worth."

If you're doing something in a way it's not in the training data set, maybe your way of approaching the problem is wrong?

replies(2): >>41902364 #>>41904523 #
204. attentive ◴[] No.41900693{4}[source]
for obscure API or SDK, upload docs and/or examples to Claude projects.
replies(1): >>41904540 #
205. shaklee3 ◴[] No.41900711{4}[source]
Most data center GPUs do not have game rendering hardware in them.
206. csomar ◴[] No.41900775{3}[source]
They have massively nerfed Copilot. I'm keeping my subscription for a couple more months, but at this point it has the same intelligence as llama3.2, which I can run on my laptop.
207. danielmarkbruce ◴[] No.41900792{7}[source]
GM went bankrupt. Ford would have without government intervention. Each have had periods of profitability but they weren't ever anything like microsoft/google etc. Ford has underperformed the stock market average since it went public like 70 odd years ago. GM got so big in the first place via acquisitions, not because the business of cars lent itself to a dominant player.

Huge by itself isn't the same as huge winner.

replies(2): >>41902450 #>>41903849 #
208. _huayra_ ◴[] No.41900878{7}[source]
The best I have ever seen were obscure languages with very strong type safety. Some researcher at a sibling org to my own told me to try it with the Lean language, and it basically gave flawless suggestions.

I'm guessing this is because the only training material was blogs from uber-nerdy CS researchers on a language where "mistakes" are basically impossible to write, and not a bunch of people flailing on forums asking about hello world-ish stuff and segfaulting examples.

209. margalabargala ◴[] No.41900917{3}[source]
The renewable power infrastructure for those datacenters will still exist.

People will be able to buy those used GPUs cheap and run small local LLMs perhaps. A 10 year old computer today won't do state of the art games or run models, but is entirely acceptable for moderate computing use.

replies(3): >>41901582 #>>41902523 #>>41903756 #
210. ◴[] No.41900923{8}[source]
211. tightbookkeeper ◴[] No.41900926{8}[source]
I’m not even sure if they can make a website that takes text input to an executable and dumps the output.
212. ◴[] No.41900982{3}[source]
213. Terr_ ◴[] No.41900997{3}[source]
> better over time

The problem is all the most reliable code it can give you is stuff which ought to be (or already is) a documentation example or a reusable library, instead of "copy paste as a service".

replies(1): >>41904325 #
214. Terr_ ◴[] No.41901007{6}[source]
And if it's really well represented, then it's hopefully already in a superior library or documentation/guide, and the LLM is acting as an (untrustworthy) middleman.
replies(1): >>41907462 #
215. Terr_ ◴[] No.41901022{3}[source]
I disagree, at least one day of the week should be something else, for variety. :p
216. cma ◴[] No.41901033[source]
Still useful for anything you can verify.
replies(1): >>41904232 #
217. EZ-E ◴[] No.41901046[source]
So there is a boom in terms of investment for future capacity. It will be interesting to see if the demand follows. I suspect there will be overcapacity built, resulting in some bankruptcies plus cheaper compute/AI costs, hopefully allowing the next generation of startups or companies to flourish later on.
218. Slartie ◴[] No.41901103{4}[source]
Nuclear power is probably the least valid choice of all the power source choices when the goal is to "have power available quickly".

There is no kind of power plant that takes longer to build than a nuclear plant.

219. fragmede ◴[] No.41901107{7}[source]
My data science friend tells me it's really good at writing bad pandas code because it's seen so much bad pandas code.

At the end of the day, it depends where you are in the hierarchy. Having it write code for me on a hobby project in react that's bad but works is one thing. I'm having a lot of fun with that. Having it write bad code for me professionally is another thing though. Either way, there's no going back to before ChatGPT, just like there's no going back to before Stack Overflow or Google. Or the Internet.

220. torginus ◴[] No.41901110{4}[source]
Honestly I don't like these AI doom scenarios, mostly because I feel like they're used as a means by the people in charge to shift the Overton window of AI outcomes - as in, the AI might have built an ubiquitious police state with all the wealth being owned by the 0.001%, but at least it didn't kill us all!

Honestly, I feel like with the recent progress of AI, it's a realistic scenario to assume it will replace most knowledge workers in the next 5 to 10 years, probably won't replace researchers and other elite intellectuals, and won't even make a dent in the world of physical labor.

In that world, I see AI as harmful, but the people in charge won't, as they are directly benefiting from it.

We live in number-go-up capitalism. A good analog is the housing situation. The ever increasing price of real estate means that the total amount of wealth goes up, so it's seen as a beneficial process by the elites. The rest however will find that they need to dedicate a larger proportion of their income towards getting a roof over their heads, and think this process is bad.

Nowadays, the possibility of building a life from scratch that would've been considered middle class half a century ago is available to maybe 10% of workers, working mainly intellectual jobs.

AI in the future will reduce the proportion of these people by taking away their high paying jobs.

221. aurareturn ◴[] No.41901274{9}[source]
The LLM would become the OS.
replies(2): >>41901726 #>>41902566 #
222. ehnto ◴[] No.41901372{3}[source]
A viable strategy for making money, or providing value to society?

I think for some niches, the former can for a brief period precede the latter. But eventually the market catches up and roots out that which lacks actual value.

More concretely, I suspect the advertising apparatus is going to increasingly devalue unattributed content online, favouring curated platforms and eventually resembling a more hands on media distribution with human platform relationships (where media == the actual medium of distribution not content).

That is already a thing, where for example an instagrammer promoting your product is more valuable than the automated ad-network on instagram itself.

At which point, hopefully, automated content and spam loses legitimacy and value as ad-media.

223. FeepingCreature ◴[] No.41901386{8}[source]
I mean, I agree with that. That certainly matches my experience. I just don't think the deciding factor is "simpleness" so much as an inability to handle large scale at all.

My point is more that LLMs can handle (some) projects that are useful. It's not just oneliners and hello worlds. There's a region in between "one-page demos" and "medium-sized codebases and up" where useful work can already happen.

224. FeepingCreature ◴[] No.41901393{6}[source]
Eh, it's a bit hacked together sure. I find it easy to read?
replies(1): >>41906085 #
225. throwaway2037 ◴[] No.41901413{6}[source]

    > 2006 property crash
Assuming you are talking about the US financial crisis, do you mean 2008, instead of 2006? As I recall, easy money (via mortgages) was still sloshing about, well into 2007.
replies(1): >>41901652 #
226. sofixa ◴[] No.41901466{4}[source]
> Glean.com does it for the enterprise I work at: It consumes all of our knowledge sources including Slack, Google docs, wiki, source code and provides answers to complex specific questions in a way that’s downright magical

There are a few other companies in this space (and it's not something that complex to DIY either); the issue is data quality. If your Google Docs and wikis contain obsolete information (because nobody updated them), it's just going to be shit in, shit out. Curating the input data is the challenging part.

227. shwaj ◴[] No.41901480{9}[source]
When I’ve used AI for writing shell scripts it used a lot of syntax that I couldn’t understand. So then I took the time to ask it to walk me through the parts that I didn’t understand. This took longer than blindly pasting what it generated, but still less time than it would have using search to learn to write my own script. With search, a lot of time is spent guessing the right search term. With chat, assuming it generated a reasonable answer (I know: a big assumption!), my follow-up questions can directly reference aspects of the generated code.
replies(1): >>41902311 #
228. flakeoil ◴[] No.41901582{4}[source]
But in terms of compute/watt those 10 year old data centers are going to suck and that is what counts for a data center.
229. jiggawatts ◴[] No.41901612{4}[source]
Agreed! One thing I noticed is that the LLM craze seems to have triggered some other developments in only vaguely related fields.

My favourite example is the astonishing pace with which reverse-rendering technology has progressed. It started with a paper by NVIDIA showing projections of 2D photos being "fitted" into a 3D volume of differentiable hashtables, and then the whole thing exploded when Gaussian Splats were invented. I fully expect this niche all by itself to generate a huge variety of practical applications. Computer games and movie special effects, obviously, but also AR/VR, industrial uses, mapping, drone navigation, etc...

230. tim333 ◴[] No.41901652{7}[source]
Yeah that one.
231. marcus_holmes ◴[] No.41901726{10}[source]
An LLM cannot "become" an OS. It can have an OS added to it, for sure, but that's a different thing. LLMs run on top of a software stack that runs on top of an OS. Incorporating that whole stack into a single binary does not mean it "becomes" an OS.

And the point stands: you would not write a new OS, even to incorporate it into your LLM. You'd clone a BSD (or similar) and start there.

replies(1): >>41902470 #
232. xvilka ◴[] No.41902188{5}[source]
People are absolutely irrational creatures.
replies(2): >>41903832 #>>41904013 #
233. tim333 ◴[] No.41902203{4}[source]
The hypothesis is the AI slop will improve. A bit like 1990s internet where there was a lot of futzing with dial up modems to eventually get a fairly crappy web page. But you could tell it would get better.
234. PunchTornado ◴[] No.41902266[source]
if you were to invest in datacenter stocks (AI datacenters builders), what companies would be good options?
235. vrighter ◴[] No.41902275{8}[source]
even then, the llm cannot possibly be a standalone os. For one thing, it cannot execute loops. So even something as simple as enumerating hardware at startup is impossible.
236. vrighter ◴[] No.41902311{10}[source]
having something explained to me has never helped me retain the information. That only happens if i spend the time actually figuring out stuff myself.
237. walterbell ◴[] No.41902344{3}[source]
No detail on founding team or investors. Unbounded liability. Compare with real organizations in eldercare:

Commercial: https://lotsahelpinghands.com

Non-profit: https://www.caringbridge.org

replies(1): >>41902585 #
238. Netherland4TW ◴[] No.41902357[source]
Crazy how all of your previous comments have been promoting one app/"tool" after another
239. threeseed ◴[] No.41902364{5}[source]
Sorry but some of us aren't building the ten millionth CRUD app.

SuccessFactors is a popular HR platform and I was asking it any question and getting the wrong answer every time.

240. CaptainFever ◴[] No.41902433{4}[source]
Not what enshittification means.
241. aurareturn ◴[] No.41902450{8}[source]
That's recent. Ford was founded in 1903. GM in 1908.

GM was America's largest employer as recently as the 90s.

replies(1): >>41906474 #
242. aurareturn ◴[] No.41902470{11}[source]
I don't think you're getting the main point. The only application that this physical device would run is ChatGPT (or some successor). You won't be able to install other apps on it like a normal OS. Everything you do is inside this LLM.

Underneath, it can be Linux, BSD, Unix, or nothing at all, whatever. It doesn't matter. That's not important.

OS was just a convenient phrase to describe this idea.

replies(2): >>41903237 #>>41910102 #
243. lelanthran ◴[] No.41902523{4}[source]
> People will be able to buy those used GPUs cheap and run small local LLMs perhaps.

Maybe; I find it unlikely though, because unlike CPUs, there's a large difference in compute/watt in subsequent generations of GPUs.[1]

I would imagine that, from an economics PoV, the payback for using a newer generation GPU over a previous generation GPU in terms of energy usage is going to be on the order of months, not years, so anyone needing compute for more than a month or two would save money by buying a new one at knockdown prices (because the market collapsed) than by getting old ones for free (because the market collapsed).

[1] Or maybe I am wrong about this - maybe each new generation is only slightly better than the previous one
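A rough way to sanity-check that payback intuition, for what it's worth. Every number below is an assumption (card power draw, electricity price, a 2x efficiency gap, post-crash price), so whether it comes out as months or years swings entirely on those inputs:

    // Assumed numbers, not measurements.
    const powerKw = 0.7;            // draw of either card under load
    const pricePerKwh = 0.10;       // electricity, $/kWh
    const hoursPerMonth = 730;
    const newCardPrice = 2000;      // knocked-down price of the newer card

    // If the newer card does 2x the work per watt, the same workload needs
    // roughly half the card-hours, so the energy bill roughly halves.
    const monthlySavings = powerKw * hoursPerMonth * pricePerKwh * 0.5; // ≈ $25.5/month

    console.log((newCardPrice / monthlySavings).toFixed(0), "months to break even"); // ≈ 78

At these particular numbers it looks more like years than months; a bigger efficiency jump, pricier power, or a steeper price collapse pushes it the other way.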

244. glimshe ◴[] No.41902566{10}[source]
The LLM can't abstract PCI, USB, SATA etc from itself.
replies(1): >>41904964 #
245. badgersnake ◴[] No.41902585{4}[source]
They appear to be a brand of a Czech company AI Touch - https://aitouch.cz/

I could not find them on TechCrunch

246. matthewdgreen ◴[] No.41902626{6}[source]
I cannot tell you how much this echoes what people were saying during the dot com days :) Of course back then it was browsers and not LLMs. Looking back, people were both correct about this, yet we’re still having the same conversation about replacing the OS cartel.
247. throw234234234 ◴[] No.41902728{4}[source]
> But most of the value will be in software ultimately.

Isn't one of the points of AI to democratize the act of writing software? AI isn't like other software inventions, which make a product from someone's intelligence - long term it's providing the raw intelligence itself. I mean, we have NVDA's CEO saying not to learn to code, and a lot of non-techies quoting him these days.

If this is true the end effect is to destroy all value moats in the software layer from an economic perspective. Software just becomes a cheap tool which enables mostly other industries.

So if there isn't long term value in the hardware (as you are pointing out), and there isn't long term value in the software due to no barriers of entry - where does the value of all of this economic efficiency improvement accrue to?

I suspect the winners will be large, old, stale corporations with large workforces and moats outside of technology (i.e. physical and/or social moats) not threatened by AI, who can empower their management class by replacing skilled (e.g. software devs, accountants, etc.) and semi-skilled labor (e.g. call centre operators) with AI. The decision makers in privileged positions behind these moats, rather than the doers, will win out.

replies(1): >>41905217 #
248. throw234234234 ◴[] No.41902847{6}[source]
That is true all else remaining equal - but that isn't usually the case. Lower rates generally increase governments' appetite to borrow more, and to borrow faster.

The amount of interest payments therefore long term (not short term) isn't really affected by the IR rate but more by politics and the amount of IR payments/debt burden they can politically get away with - in the US it is a LOT - in other countries the political appetite can be less.

So while it is true that higher IR payments do increase the money supply, generally with lower IRs governments are encouraged to "borrow more" by many stakeholders to their capacity under the low rate anyway. For example I saw many newspaper articles around our local media stating things like "rates are low, the government should invest that in infrastructure/disability programs/{insert favorite idea here}, etc when rates were low with politicians happy to spend accordingly.

In addition, under low IRs the private sector will borrow more, increasing the amount of credit in the economy as well - which is also inflationary for the money supply.

There's always nuances; these black and white theories can be dangerous. They assume all else is equal which is rarely ever is.

249. HarHarVeryFunny ◴[] No.41903153{9}[source]
I don't know how you can compare Amazon vs OpenAI on the fundamentals of the two businesses. It's the difference in fundamentals that made Amazon a buy at absurd P/Es, as well as some degree of luck in AWS becoming so profitable, while OpenAI IMO seems much more of a dodgy value proposition.

Amazon were reinvesting and building scale, breadth and efficiency that has become an effective moat. How do you compete with Amazon Prime free delivery without your own delivery fleet, and how do you build that without the scale of operations?

OpenAI don't appear to have any moat, don't own their own datacenters, and the datacenters they are using are running on expensive NVIDIA chips. Compare to Google with their own datacenters and TPUs, Amazon with own datacenters and chips (Graviton), Meta with own datacenters (providing value to their core business) and chips - and giving away the product for free despite spending billions on it ... If this turns into the commodity business that it appears it may (all frontier models converging in performance), then OpenAI would seem to be in trouble.

Of course OpenAI could stop training at any time, but to the extent that there is further performance to be had from further scaling and training, they will be left behind by the likes of Meta, who have a thriving core business to fund continued investment and are not dependent on revenue directly from AI.

250. ViewTrick1002 ◴[] No.41903154{6}[source]
We built a lot of nuclear back in the 70s and 80s which we still rely on with long term operation upgrades.

For modern nuclear power the only nuclear reactor under construction in France is Flamanville 3 which is 6x over budget and 12 years late on a 6 year construction timeline.

Hinkley Point C in the UK is in a similar quagmire and Olkiluoto 3 finally got finished last year after a near 20 year construction timeline.

Politically there's some noise from conservative politicians who can't hold a climate change denial position anymore, but still need to be contrarians.

The problem is the horrendous economics.

251. James_K ◴[] No.41903200[source]
My thoughts exactly. One of the best ways to develop countries is to just invest in a bunch of infrastructure. It probably won't be optimal, but it's better than not investing. It's interesting that the private bubbles in this case form a simulacrum of public investment. Instead of the government raising money to invest through taxes, capitalists just throw it away on fads. Perhaps it even helps to address inequality in the long run.

That said, I'm not sure the effect of digital infrastructure will be the same as physical infrastructure. A road has a clear material impact on all businesses in the area and their capacity to produce physical goods. But do data centres have the same effect? An extra lane on the road means you can get a greater volume of goods in and out to broaden operations to a larger area, but I don't see what positive effect two data centres could have on the average business. For as great as the internet is, I don't know how much value is created here. The question of what to do with a railroad is quite easily answered, but I'm not really sure what you can do with a datacentre. I guess whoever works it out will be decently rich.

But I feel we already have enough computing power, and the bottleneck in the whole process is making software that efficiently uses it (or knowledge of how to operate such software), rather than the power of devices themselves. Though perhaps as the bubble bursts, the price of programmers will also decrease significantly and the software issue will be resolved.

252. guitarlimeo ◴[] No.41903237{12}[source]
I got your main point from the first message, but still don't like redefining terminology like OS to mean what you did.
replies(2): >>41904202 #>>41904933 #
253. simonh ◴[] No.41903291{8}[source]
This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.

I don't think AI is ultimately even an application, it's a feature we will use in applications.

replies(1): >>41904880 #
254. Wheatman ◴[] No.41903351{4}[source]
>GPUs are different, unless things go very poorly, these GPUs should be pretty much obsolete after 10 years.

Not really, at least for gaming. Especially here in third-world countries, old or abandoned GPUs are basically all that is available, for anything from gaming to even video editing.

Considering how many great new games are being made (and with the news of Nvidia drivers possibly becoming easily available on Linux), and with tech becoming more available and useful in these places, I expect a somewhat considerable increase in demand for GPUs in, say, Africa or Southeast Asia.

It probably won't change the world or the US economy, but it would probably make me quite happy if the bubble were to burst, even as a supporter of AI in research and cancer detection.

replies(1): >>41904080 #
255. btbuildem ◴[] No.41903363[source]
Of course it's similar: they're all instances of a capitalist "boom -> bust" cycle. At least this frontier is mostly digital in terms of the resources being destroyed.

Maybe we will get a nuclear energy renaissance out of this, who knows.

256. wkat4242 ◴[] No.41903416[source]
Yeah, this is the thing. Investors are only looking for the next Bitcoin these days, something that will pay back 10,000-fold in 2 years. They're cheering each other on and working themselves up drooling over the imagined profits. They no longer have any long-term vision.

If it doesn't meet those sky-high expectations it's a flop.

The same happened with the metaverse, blockchain, etc. Those technologies are kinda shitcanned now, which is unfair too, because they have excellent use cases where they add value. It was never going to be for everyone, and no, we weren't all going to run around with an Oculus Quest 24/7.

I think these investors break more than they do good.

257. _heimdall ◴[] No.41903541{4}[source]
We'd be much better off using less power rather than finding different sources of power though. I also personally prefer nuclear energy to coal, but is our best chance really to come up with a new technology so power hungry that we have to build nuclear just for the new demand?
replies(1): >>41903943 #
258. rsynnott ◴[] No.41903550{3}[source]
> Does Slack sell an LLM chatbot solution that is able to give me reliable answers to business/technical decisions made over the last 2 years in chat? We don't have this yet - most likely because it's probably still too expensive to do this much inference with such high context window.

So, your problem there is 'reliable'. LLMs, fairly fundamentally, cannot do 'reliable'. If you're looking for reliable, you likely are looking at a different tech entirely.

replies(1): >>41903843 #
259. _heimdall ◴[] No.41903569[source]
I'm always surprised when articles and discussions like this lead to anything other than the realization of how absurd it is that we have found ourselves investing unimaginable resources into LLMs while simultaneously claiming that we've ruined the planet and could be as little as 5 or 6 years from fundamental damage.

Eventually we have to either give up on the hopes of what could come from LLMs with enough investment, or give up on the very loud but apparently hollow arguments about the damage we are causing to the planet.

replies(4): >>41903595 #>>41903612 #>>41903888 #>>41907288 #
260. CharlieDigital ◴[] No.41903589[source]
It's not a hard problem to solve with even basic retrieval augmented generation.

With good RAG, hallucinations are non-existent.
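
For illustration, a minimal sketch of that retrieval-augmented flow; the toy keyword-overlap retriever and the call_llm stub are placeholder assumptions standing in for a real embedding store and model API:

    # Minimal RAG sketch: retrieve relevant context, then constrain the model to it.
    DOCS = [
        "Q3 planning: we decided to migrate the billing service to Postgres.",
        "Incident 2024-02-11: the outage was caused by an expired TLS cert.",
        "Hiring: the platform team is approved for two senior roles in 2025.",
    ]

    def retrieve(query, docs, k=2):
        """Toy retriever: rank documents by naive word overlap with the query."""
        q = set(query.lower().split())
        return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

    def call_llm(prompt):
        """Placeholder for a real model call (OpenAI, Anthropic, a local model, ...)."""
        return "<answer grounded in the context above>"

    def answer(query):
        context = "\n".join(retrieve(query, DOCS))
        prompt = ("Answer using ONLY the context below. If the answer is not "
                  "in the context, say you don't know.\n\n"
                  f"Context:\n{context}\n\nQuestion: {query}")
        return call_llm(prompt)

    print(answer("Why did the outage happen in February?"))

Grounding the prompt in retrieved context is what makes answers checkable against the source documents; it greatly reduces, though does not fully eliminate, hallucination.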

261. gchokov ◴[] No.41903595[source]
Or maybe, just maybe, now that we have a need for more energy, we can finally come up with more sustainable ways of producing that energy?
replies(2): >>41903620 #>>41903932 #
262. tempfile ◴[] No.41903612[source]
I don't understand this comment. Who is "we"? There is one segment of society ringing alarm bells about irreversible damage to the environment, and another that seems determined to make number go up as fast as possible with no consideration (or considered disregard) of the effect on human life more broadly.

The people driving AI investment will simply not be significantly affected by climate change. They don't care that hundreds of millions in the tropics will die, and that much of organised human activity will collapse, because up until the last possible moment they'll be insulated from the consequences.

replies(1): >>41903970 #
263. tempfile ◴[] No.41903620{3}[source]
Why would needing more energy make it easier to satisfy the total demand for energy with renewable sources?
replies(1): >>41907024 #
264. rsynnott ◴[] No.41903649{6}[source]
> I was thinking re the data in the tweet, that there were a lot of mentions of "soft landing" before the dot com crash, before the 2006 property crash and now

There's a confirmation bias there, though. Economists, particularly pop economists, have predicted all 20 of the last two recessions; if you just say "the world is going to end" every year, then occasionally it kinda will, and certain people will think you're a visionary.

replies(1): >>41908463 #
265. rsynnott ◴[] No.41903672{3}[source]
> but nuclear power, being expensive, would not be built if this bubble had not happened, and somehow nuclear does seem to be getting some of this money

Eh, it's generally SMRs, which remain kinda vapourware-y. I'd be a little surprised if anything concrete comes of it, tbh; I suspect that it is mostly PR cover for reactivating coal plants and the like. (The one possibly-real thing might be the Three Mile Island restart, but that in itself isn't particularly consequential.)

266. rmbyrro ◴[] No.41903674[source]
The lifetimes are much shorter in IT infra compared to the rail industry, yes, but the margins are astonishingly higher in IT as well.

While it usually takes decades to pay back rail investments, it usually happens within a few years in the IT industry.

replies(1): >>41903906 #
267. infecto ◴[] No.41903712[source]
While they certainly can do that, there are large chunks of workflows where hallucinations are low to nonexistent. Even then, I find LLMs quite useful for asking questions in areas I am not familiar with; it's easy to verify and I get to the answer much quicker.

Spend some more time working with them and you might realize the value they contain.

268. johnnyanmac ◴[] No.41903756{4}[source]
>People will be able to buy those used GPUs cheap and run small local LLMs perhaps.

That's not really how SaaS works these days. It will be a "cheap" subscription, or expensive and focused on enterprise. Both of those carry maintenance costs, which defeats the point of "cheap, small, local LLMs".

And they sure aren't going to sell local copies. They'd rather go down with their ship than risk hackers dissecting the black box.

replies(1): >>41905815 #
269. johnnyanmac ◴[] No.41903765{4}[source]
Lots of ifs going on here. I haven't been as optimistic about tech this decade as I was in the early '10s.
270. JKCalhoun ◴[] No.41903766{4}[source]
Doesn't that say more about Nvidia than it does about AI in general?

But I see your point. And yes, I think it has trickled down to the mainstream.

(By that metric, I guess Bitcoin crashed a few years ago.)

271. whywhywhywhy ◴[] No.41903781{9}[source]
Would be extremely surprising if it were anything other than an Android fork. The differentiator is gonna be the LLM, the always-on listening, and the physical interface to it.

You're just burning money rewriting the rest of the stack when off-the-shelf will save you years.

272. JKCalhoun ◴[] No.41903793{3}[source]
> The current datacenters full of AI compute can't really be used for anything else if AI companies go bust.

That's hard to know from this vantage point in the present.

Who knows what ideas will spring forth when there are all these AI-capable data-centers sitting out there on the cheap.

replies(1): >>41907055 #
273. johnnyanmac ◴[] No.41903794{4}[source]
That's my big worry. The Internet was made with the idea of being a commons. LLMs are very much built with a trade-secret mentality, from their data acquisition to the algorithms. I don't think such techniques will proliferate for commercial use as easily when the bubble bursts.
274. rsynnott ◴[] No.41903810{8}[source]
That French capacity was largely built a long time ago, though. Only a couple of nuclear plants have been built in Europe in the last decade, and they've generally overrun _horribly_ on costs.
275. tehjoker ◴[] No.41903832{6}[source]
Capitalism*
276. bamboozled ◴[] No.41903838[source]
I was thinking about it today: it's absolutely wild that we're building nuclear to fuel this boom alone. If it doesn't pan out as we expect, what happens to all this nuclear investment? Sounds positive to me.
277. williamcotton ◴[] No.41903843{4}[source]
If an LLM is tasked with translating a full text into a summary of that text, then it is very reliable.

This is akin to an analytic statement, e.g., "all bachelors are unmarried". The truth is contained completely within the definition of the statement. Compare this to a synthetic statement such as "it is raining outside". In this case the truth is contingent on facts outside of the statement itself.

When LLMs are faced with an analytic statement they are more reliable. When they are faced with a synthetic statement they are prone to confabulate and are unreliable.

replies(1): >>41904075 #
278. gloflo ◴[] No.41903844{4}[source]
As a global society we must strive to reduce energy consumption, not find new use cases for burning more energy. Our planet has limited resources.
279. SJC_Hacker ◴[] No.41903847{3}[source]
The data centers themselves with all the supporting infrastructure (telecom/power), as well as all the chip fabs needed to build the hardware, even if the hardware itself becomes obsolete / breaks down on a 4-5 year time scale.

In the same way that the rights of way obtained for all the railroads were, even if the rails/engines themselves had to be replaced every decade or so.

But some hardware does last quite a long while. Fiber laid from 25 years ago is still pretty useful.

replies(1): >>41903858 #
280. johnnyanmac ◴[] No.41903849{8}[source]
>GM went bankrupt.

I'd call an 80-year run pretty damn good. Having my company not just survive me and my children, but dominate an industry for that time, seems like a good deal. It shows it wasn't my fault it failed.

>the business of cars lent itself to a dominant player.

I'd rather measure my business by impact, not stock numbers. That mentality is exactly why GM fell very slowly through the '70s (defying a whole bunch of strategies Ford implemented to beat out the competition, like knowledge retention) and crashed by the '90s.

Money to keep operating is important too, but I don't think Ford lived a life of Picasso here.

replies(1): >>41906460 #
281. llamaimperative ◴[] No.41903853{6}[source]
That... is not relevant. The question is what percentage of R&D spend gets "encoded" into something that can survive the dissolution of its holding company and how much does a transfer to a new owner depreciate it.

I'd be shocked if more than like 20% of the VC money going into it would come out the other side during such an event.

282. ◴[] No.41903858{4}[source]
283. johnnyanmac ◴[] No.41903883{4}[source]
Every sector has its story like that. Bankruptcy for a huge business isn't the same as for an individual.

And yeah, it will vary. Amazon crashed hard on the stock market through the 2000s while Google completely thrived. They are still considered to be on the same standing today, as trillion-dollar tech companies.

284. Ylpertnodi ◴[] No.41903888[source]
>give up on very loud but apparently hollow arguments related to the damage we are causing to the planet.

Which 'hollow arguments' are you referring to?

replies(1): >>41903923 #
285. johnnyanmac ◴[] No.41903906{3}[source]
I don't think "margins" are a very reassuring sentiment to customers and workers when it comes to considering the long term ramifications of a product.
286. irunmyownemail ◴[] No.41903917[source]
A few thoughts: Netflix works fine over an old, slow DSL connection, and fossil fuels aren't going away this century, especially not with power-hungry AI (setting aside the discussion of whether AI is truly worth it).
replies(1): >>41904484 #
287. _heimdall ◴[] No.41903923{3}[source]
I was referring to climate arguments that we need to reduce our impact to at least mitigate the severity of issues being predicted for the very near future.

I call them apparently hollow in this context because we can't both chase the resource behemoth that is LLM tech and make any meaningful change to reduce our impact.

288. _heimdall ◴[] No.41903932{3}[source]
We don't have that tech though.

It's a reasonable hope that we could discover a new energy source that can produce orders of magnitude more energy with even less impact than today's sources, but that is just a hope. In the meantime we would be committing ourselves to a new, much higher baseline of energy needs whether we make that discovery or not.

replies(1): >>41905308 #
289. johnnyanmac ◴[] No.41903943{5}[source]
That's sadly how we advance into new tech, historically speaking. Humanity only put a man on the moon as a case of showboating to political rivals. And look how we iterated on that 60 years later (I'm aware the moon landing was more or less stuck together with bubble gum and hopes/prayers, but still). The uses of rocket propulsion that serve the public only came later.

Someone really important or really rich needs to build that demand, so we end up getting something for the wrong reasons but with potentially good intent. FWIW, I'm not really optimistic that the bubble lasts long enough to even get these plants off the planning stage, though.

replies(1): >>41904437 #
290. _heimdall ◴[] No.41903970{3}[source]
"We" in this case would be the broader collective society, but also more specifically the very leaders pushing the LLM industry forward.

Big tech companies made a long list of promises over the last 5-10 years about huge cuts to their environmental impact. Those same companies and their leaders have largely abandoned those goals.

I didn't really have political leadership in mind when writing that comment, though they could be part of that "we" as well.

> The people driving AI investment will simply not be significantly affected by climate change. They don't care that hundreds of millions in the tropics will die, and that much of organised human activity will collapse, because up until the last possible moment they'll be insulated from the consequences.

We've spent 80 years globalizing economies in an effort to avoid another world war. We'll all be impacted by it if some of the climate predictions are accurate.

Edit: to add that many of the same leaders developing LLMs make claims that LLMs and AI (if we get there) may be our only hope for finding ways of reversing our environmental impact. Either they are making that up as a sales pitch or they do in fact fall into the "we" here of people that care deeply about our impact while simultaneously burning massive amounts of resources on the hope that LLMs may fix it for us.

291. johnnyanmac ◴[] No.41904013{6}[source]
> Man likes to think of himself as a rational animal. However, it is more true that man is a rationalizing animal, that he attempts to appear reasonable to himself and to others. Albert Camus even said that man is a creature who spends his entire life in an attempt to convince himself that he is not absurd.

-Elliot Aronson, "The Rationalizing Animal"

Or to be less philosophical: the people with the money and power to say what's important are rarely the ones thinking long term, nor are they surrounded by other powerful, rich people thinking long term. US Congress' median age is over 60: most aren't thinking about how to keep the Earth alive in 20-30 years. They won't be around to suffer the consequences.

292. rsynnott ◴[] No.41904075{5}[source]
> If an LLM is tasked with translating a full text to a summary of that text then it is very reliable.

Hrm. I've found LLM summaries to be of... dubious reliability. When someone posts an article on this here orange website, these days someone will sometimes 'helpfully' post a summary generated by a magic robot. Have a look at these, sometime. They _often_ leave out key details, and sometimes outright make shit up.

Interesting article on someones' experiences with this recently: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actu...

replies(1): >>41904564 #
293. johnnyanmac ◴[] No.41904080{5}[source]
Not really sure it'd do much. Old GPUs mean you're playing old games. That was fine in a time when consoles stalled the minimum-spec market for 8+ years, but this decade left that behind. I imagine that very few high-end or even mid-range games made in 2030 would really be functional on any 2020 hardware.

So the games that work are probably out of support anyway, and there's no money being generated for anyone.

replies(2): >>41904594 #>>41904957 #
294. johnnyanmac ◴[] No.41904112{3}[source]
Given the direction of the economy, we'd hit a recession and no one would be investing. So a loss for all tech. I argue we already are in one, but the US election cycle wants to delay that reality until 2025.

Though the US did do a pretty big rate cut. I imagine that will at least stall such a bubble burst into 2026 instead.

295. johnnyanmac ◴[] No.41904140{4}[source]
The most important aspect of any company worth its salt is liability. If the LLM provider isn't accepting liability (and so far they haven't), then hallucinations are a complete deal breaker. You don't want to be on the receiving end of a precedent-setting lawsuit just to save some pennies on labor.

There can be uses, but you're falling on deaf ears as a B2B vendor if you don't solve this problem. Consumers accept inaccuracies; businesses don't. And that's also, sadly, where it works best and why consumers soured on it. It's being used for chatbots that give worse service and make consumers work harder for something an employee could resolve in seconds.

As it's worked for millennia, humans have accountability, and any disaster can start the PR spin by reprimanding or firing the human who messed up. We don't have that for AI yet. And obviously, no company wants to bear that burden.

296. aurareturn ◴[] No.41904202{13}[source]
Think of iOS and everything that it does such as downloading apps, opening apps, etc. Replace all of that with ChatGPT.

No need to get to the technicals such as whether it's UNIX or Linux talking to the hardware.

Just from a pure user experience standpoint, OpenAI would become iOS.

297. johnnyanmac ◴[] No.41904205{5}[source]
I'm just tired of all the lies and theft. People can use the tech you want. Just don't pretend it's yours when you spent decades strengthening copyright law and then decide to break the laws you helped make.
replies(1): >>41906566 #
298. arminiusreturns ◴[] No.41904222[source]
I don't think it will.

The economy bubble will be popped post-election (you will know when the Fed starts raising rates again), but CRE (commercial real estate) is likely the catalyst this time.

Within CRE, datacenters are the only item in the green in investors eyes, even more now because of the AI boom. I fully expect that class of investor to then dump more money than ever into energy generation and other sectors related to AI in order to escape the planned crash.

The biggest variable is if the supranational oligarchs are wanting to use this crash to cause a much more major shift in monetary policy such as CBDCs.

replies(1): >>41905126 #
299. johnnyanmac ◴[] No.41904232{3}[source]
Verifying needs experts to confirm. Experts are expensive and are the very people they want to replace. No one on either side of the transaction wants to utilize an expert.

So you see the issue, and the intent.

replies(2): >>41907223 #>>41908177 #
300. johnnyanmac ◴[] No.41904280{3}[source]
you're missing one big detail. Humans are liable, AI isn't. And AI providers do all they can to deny liability. The businesses using AI sure aren't doing better either.

If you're not confident enough in your tech to be held liable, we're going to have issues. We figured out (sort of) human liability eons ago. So it doesn't matter if it's less safe. It matters that we can make sure to prune out and punish unsafe things. Like firing or jailing a human.

301. johnnyanmac ◴[] No.41904296{3}[source]
I wish they used AI. It'd feel less artificial than the scripts investors give them to keep the boom booming.

It's a gold rush and they are inspectors. They have an incentive to keep the rush flowing.

302. dash2 ◴[] No.41904320{6}[source]
Good comment. From Apple's point of view, AI could be a disruptive innovation: they've spent billions making extremely user-friendly interfaces, but that could become irrelevant if I can just ask my device questions.

But I think there will be a long period when people want both the traditional UI with buttons and sliders, and the AI that can do what you ask. (Analogy with phone keyboards where you can either speech-to-text, or slide to type, or type individual letters, or mix all three.)

303. johnnyanmac ◴[] No.41904325{4}[source]
If AI could generate better documentation for my domain tools, I'd take back maybe 75% of my criticisms for it.

But alas, this rush means they want to pitch to replace people like me, not actually make me more productive.

304. johnnyanmac ◴[] No.41904375{4}[source]
It's a tangent, but title inflation and Years of Experience really are horrible metrics for judging engineers these days, especially in an age where employers actively plan for 2-3 year churn instead of long-term retention.

I have no clue how you get 5 years of experience in any meaningful way on any given tech. You sure won't get that only from the workplace's day-to-day activities. YoE is more a metric of how much of a glutton for punishment you are than anything else.

305. _heimdall ◴[] No.41904437{6}[source]
> FWIW, I'm not really optimistic that the bubble lasts long enough to even get these plants off the planning stage, though.

I don't really expect the current LLM bubble to last long enough to stand up new nuclear plants, though I don't expect that to actually stop the new energy projects unless the bubble popping has a massive economic impact.

Power plants are slow moving projects. Even if LLMs as they are don't live up to expectations they have seemed to open the door for the idea that amazing things are coming and we need to make sure the energy supply is ready for it.

replies(1): >>41904991 #
306. johnnyanmac ◴[] No.41904441{4}[source]
Most of my complaints about AI are ethical and legal. But damn me if "products" like this don't bring out the bits of Luddite in me, aimed not just at the seller but at anyone considering buying this.

Everyone's dreams will differ, but I got into tech to make people more efficient, and in turn enable more of the human element and less pencil pushing. Not to replace it entirely.

307. baby_souffle ◴[] No.41904484{3}[source]
> A few thoughts, Netflix works fine over an old, slow DSL connection.

debatable depending on what "fine" means. In any case, DSL really doesn't go far.

The old version from the early 2000's could work out to a few miles / km but only with absolutely perfect condition copper. The newer DSL versions are limited to much less distance even with good quality cable.

Each neighborhood has a head-end that does the copper <-> fiber transition. Unless you lived _really_ close to the Central Office, your DSL service was probably copper only for a few blocks before it transitioned to fiber going from TelCo central office to all the individual DSLAMs scattered about.

308. johnnyanmac ◴[] No.41904523{5}[source]
>If you're doing something in a way it's not in the training data set

In my industry, the public code in that "training data set" won't get much beyond the barebones generated doxygen comments we call "documentation".

But in a way you're also right. The industry's approach is fundamentally wrong: 20 solutions to every problem, with plenty of room to standardize a proper approach (there are still places where you need proprietary techniques, but that's becoming less true by the month). An LLM isn't going to fix that cultural issue and will suffer from it.

replies(1): >>41905978 #
309. johnnyanmac ◴[] No.41904540{5}[source]
That sounds like it might infringe the copyright on some of your tools. Not all our tools are FOSS (and even some FOSS licenses may not allow that).
310. johnnyanmac ◴[] No.41904561{5}[source]
Nope. But AI's sales pitch is that it's an oracle to lean on. Which is part of the problem.

As a start, let me know when an AI can fail test cases, re-iterate on its code to correct the test case, and re-submit. But I suppose that starts to approach AGI territory.

311. williamcotton ◴[] No.41904564{6}[source]
Sure, anecdotal evidence. Here's another anecdote:

Original article: https://osa1.net/posts/2024-10-09-oop-good.html

LLM (ChatGPT o1-preview) results: https://chatgpt.com/share/67166301-00d8-8013-9cf5-e8a980aca7...

LGTM!

I'd like to know which model is used in the article you've referenced as well as the prompt. I also suspect that 50 pages is pushing up to the limits of the context window and has an impact on the results.

---

As for the article itself... Use a language like F# or OCaml and you get a functional-first language that also supports OOP!

312. 7thaccount ◴[] No.41904566[source]
The amount of power needed for data centers over the next decade is estimated at like 80+ GW of growth by 2030 which is insane.

It'll either prompt serious investment in small modular reactors and rescuing older nuke plants about to retire, or we'll see a massive build out in gas. These companies want the power to be carbon free, so they're trying to do the former, but we'll see how practical that is. Small modular reactors are still pretty new and nobody knows how successful that will be.

At the end of the day, I feel like this will all crash and burn, but we may end up with some kind of nuclear renaissance. We're also expanding the transmission grid and building more wind, solar, and storage. However, I don't think that alone is going to satisfy the needs of these data centers that want to run nearly 24/7.

replies(1): >>41904853 #
313. Wheatman ◴[] No.41904594{6}[source]
True, hardware requirements tend to increase, but who's to say we aren't reaching another plateau already, especially with the newest consoles only recently released? Not to mention that these data centers tend to run the latest GPUs, so depending on when the bubble bursts (I'm guessing around 2026, 2027 or later, given the current US election and where most of these data centers are located), it wouldn't be off to say that a cutting-edge RTX 9999 GPU from 2026 could run a 2032 game on medium or maybe high settings quite well.

I'm more than happy to play 2010-2015 games right now at low settings; it would be even better to play games that are 5 years behind rather than 10.

The same can be said for rendering, professional work, and running servers: something is better than nothing, and most computers here don't even have a separate GPU and opt for integrated graphics.

314. _fat_santa ◴[] No.41904605[source]
Regarding software, one thing I've noticed looking at YC and Product Hunt these days is that pretty much all the software being hyped now is "AI Powered...something".

I find it quite annoying because for every company that is doing something where AI would actually be useful, there are 10 that are shoving it into their existing app in some capacity to make their software "AI Powered". A perfect example of this is my company recently evaluated Zenhub. Their sales team was very eager to point out that their app was using AI though when we actually looked, all it did was generate story descriptions from a prompt, the most basic of AI integrations.

AI is very useful but my god not everything needs to have it baked in.

replies(1): >>41905046 #
315. johnnyanmac ◴[] No.41904606{5}[source]
>raising rate would bring the economy to a slower pace and reduce private sector consumer demand.

Tech companies decided to respond to lower consumer demand with price hikes, though. And of course by letting go of labor, adding to the issue.

These kinds of companies aren't the ones being slowed by increased rates. They can just weather the storm and squeeze blood out of the rocks they have left on board.

>Private sector investment can continue to increase but at some point that too will hit a brick wall

At this point I'm betting the economy hits an objective recession before that brick wall happens. But I suppose we'll see.

316. ryandrake ◴[] No.41904660{9}[source]
Yes, I think that's it. There is a lot of horrible C++ code out there, especially on StackOverflow where "this compiled for me" sometimes ends up being the accepted answer. There are also a lot of ways to use C++ poorly/wrong without even knowing it.
317. johnnyanmac ◴[] No.41904684{7}[source]
ZIRP shifted the entire perspective, and tech's method of hoarding talent irrevocably changed how we interpret business. If an increase to 5% can impact jobs in the 8 digits, reaching into nearly every sector and not just tech, 5% is definitely the new "high". For all the wrong reasons, perhaps. But the genie's out of the bottle.

I don't know what the "new normal" is, though. I suppose 2025 will be used to figure that out. I don't think 4.75% will be enough.

318. johnnyanmac ◴[] No.41904748{3}[source]
I don't think top stories is a good metric for measuring "user feel". It's a harder snapshot to take, but sampling the front page at a certain time of day, every day, would be more of a reflection.

I only see 4 on my front page, but I don't think 8AM UTC-7 is the right timeslot to record "today's news".

319. n_ary ◴[] No.41904788{3}[source]
I believe what we will see in the next few years is the complete (or nearly complete) abolition of human-friendly customer support; everything will be chatbot or voice-chat-bot based support to reduce the cost of service.

We will also get some nice things, like more intelligent IDEs at affordable cost: think Cursor at $20/month ($240/year), while the whole JetBrains package costs only $25/month ($290/year).

However, I am a bit worried about all these data centers and AI energy use/scaling. While consumers are being pushed toward more and more efficient energy usage and energy prices are definitely high (relative to what I would expect given massive renewable energy production), large corps and such will continue scaling energy usage higher and higher.

Also, the AI fad will eventually spook a lot of free knowledge sharing off the open web and everything will end up behind a paywall, so a random poor kid in some poor country will no longer have access to a nice tutorial or documentation online to learn cool stuff, because in some countries what we call the price of a "morning coffee" can be a day's earnings for an adult, never mind for non-privileged people. Without the ability to pay for AI services, no more access to knowledge. Search engines will eventually drown in slop; I mean, even Google now frequently gives me a "no results found" page and I need to use DDG/Brave/Bing to fish out some results.

320. causal ◴[] No.41904817{3}[source]
And more importantly, the GPU is the most replaceable part of the datacenter infrastructure. Power, buildings, cooling, security, cabling, internet backbone connectivity, etc. are the harder to swap pieces that require time to build out.
replies(1): >>41905553 #
321. Kon-Peki ◴[] No.41904853{3}[source]
Many data center tax breaks have carbon-free energy requirements (to varying degrees). If the pace of building data centers exceeds the ability to provide carbon-free energy, you may see a shift in the location of data centers, away from locations with good incentives and toward locations with the availability of energy regardless of its source.
322. gpderetta ◴[] No.41904880{9}[source]
> This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.

well, that's not a false statement. As much as I might dislike it, the rise of the web and web applications has made the OS itself irrelevant for a significant number of tasks.

323. n_ary ◴[] No.41904922{5}[source]
It all comes down to the "what do I get in return right now?" question. Reducing pollution is a promise to give back a safer Earth, save the climate, make life less painful in the future, etc., which are all far away, and the current generation in power will probably retire long before anything significant or extinction-level occurs.

AI, on the other hand, promises immediate gain (lots of expensive job automation = cost cutting) as well as future return on investment. Like someone said, IT returns are realized quickly, within a few years, so that is more lucrative than reducing pollution.

Also, reducing pollution requires money spent (a cost), and everyone is afraid of a cost that does not promise at least a 2-5x return immediately (or within a few years).

324. ogogmad ◴[] No.41904933{13}[source]
I don't think "OS" means anything definitive. It's not 1960. Nowadays, it's a thousand separate things stuck together.
325. Kurtz79 ◴[] No.41904957{6}[source]
I’m not sure how this decade is any different than the one that preceded it?

The current console generation is 4 years old and it’s at mid-cycle at best.

Games running on modern consoles are visually marginally better than those in the previous generation, and AAA titles are so expensive to develop that consoles will still be the target HW.

I really could not be bothered to update my 3080…

Have I missed a new “Crysis”?

replies(1): >>41905345 #
326. ogogmad ◴[] No.41904964{11}[source]
What counts as an OS is subjective. The concept has always been a growing snowball.
327. johnnyanmac ◴[] No.41904991{7}[source]
>I don't expect that to actually stop the new energy projects unless the bubble popping has a massive economic impact.

I can see it going either way. It really depends on how the economy moves this decade, and I feel we're very much at an inflection point as of now. I won't even pretend to guess how 2025-6 will go at this point.

328. n_ary ◴[] No.41905046{3}[source]
I am eagerly waiting for my dishwasher, washing machine, smart TV, electric stove and every other household appliance to become AI-powered. At least my Philips Hue does not yet have an AI, and I am very happy about that.

Honorable mention to IoT, SmartHome, ConnectedHome and the other old hypes which we all forgot. Maybe SmartHome will become an actual reality, if we can somehow get LLMs to make autonomous decisions to keep a house maintained and comfy.

replies(1): >>41905233 #
329. snowfresco ◴[] No.41905126{3}[source]
Did you mean to say cutting rates?
330. n_ary ◴[] No.41905217{5}[source]
> I mean we have NVDA's CEO saying to not learn to code, and lot of non-techies quoting him these days.

Simply planting the seed of ignorance for generations to come. If people do not learn, they need someone or something to produce this knowledge, and who better than the gold mine (AI) to supply it? Also, as long as the cryptocurrency and AI booms keep going, the shovel sellers (i.e., NVDA) stand to profit, so it is in their best interest to run the sales pitch.

Also, once people think that all is lost and the future is bleak, they will not learn and generate novel ideas and innovations, so knowledge, research and innovation will slowly get locked away behind paywalls, wielded by the select few who can afford the access and skills to use the AI tech. Think of the internet of years gone by, minus all the open and free knowledge, all the OSS, all the passionate people contributing and sharing. Now replace that with course sites where you must pay to get access to anything decent, and replace the courses with AI.

At best, I see all this as feeding the fear and the laziness, to kill off expensive knowledge and the sharing culture, because once that is achieved, AI is the de facto product you need to build automation and digitalization.

331. Wheatman ◴[] No.41905233{4}[source]
Man, you would love the new Samsung AI fridge [1].

Then there is the AI bloatware [2] in your operating system that pinky promises it won't spy on you, even as it becomes harder and harder to turn off.

[1] https://www.theverge.com/2023/12/27/24016939/samsung-2024-ai....

[2] Mostly rumors about Copilot, please don't take this as gospel.

332. jjk7 ◴[] No.41905308{4}[source]
Nuclear exists.
replies(2): >>41905681 #>>41907017 #
333. n_ary ◴[] No.41905312[source]
I find LLMs to be much friendlier for very focused topics, and mostly accurate. Anything generated, I can go and check against the corresponding source code or official documentation.

In practice, I save an immense amount of time daily talking to Claude/4o when I need to ask something quick, where previously I had to search at least 4 different search engines and wade through piles of disappointing SEO spam.

Also, the summarizer, while a meme at this point, is immensely useful. I put anything interesting-looking throughout the day into a db; then a cron job in Cloudflare runs, tries to fetch the text content from each link, generates a summary using 4o, and stores it.

Over the weekend, I scroll through the summary of each link saved; if anything looks decently interesting, I go and check it out and do further research.

In fact, I actually learned about SolidJS from one random article posted on the 4th page of HN with few votes; the summary gave enough info for me to go and check out SolidJS instead of having to read through the article ranting about ReactJS.
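
For illustration, a rough sketch of that save-now, summarize-later job; the table layout and the summarize_with_llm stub are placeholder assumptions, and while the comment runs it as a Cloudflare cron job, any scheduler would do:

    # Cron job sketch: summarize saved links that don't have a summary yet.
    import sqlite3
    import urllib.request

    def fetch_text(url):
        """Fetch up to ~200 KB of the page body as text."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read(200_000).decode("utf-8", errors="ignore")

    def summarize_with_llm(text):
        """Placeholder for a real LLM call (e.g. 4o); returns a short summary."""
        return text[:280] + "..."

    def run(db_path="links.db"):
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS links (url TEXT PRIMARY KEY, summary TEXT)")
        for (url,) in db.execute("SELECT url FROM links WHERE summary IS NULL").fetchall():
            try:
                db.execute("UPDATE links SET summary = ? WHERE url = ?",
                           (summarize_with_llm(fetch_text(url)), url))
                db.commit()
            except Exception as exc:  # a dead link shouldn't kill the whole batch
                print(f"skipping {url}: {exc}")

    if __name__ == "__main__":
        run()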

334. sixhobbits ◴[] No.41905320{3}[source]
I've been calling a crash for far too long so take with a pinch of salt BUT I think another four years of this is very unlikely.

1996 - Cisco was 23.4B or 0.3% of US GDP

2000 - Cisco peaked at 536B or 5.2% of US GDP

2020 - Nvidia was 144B or 0.7% of US GDP

2024 - Nvidia is 3.4T or 11.9% of US GDP

Numbers very rough and from different sources, but I'd be surprised if Nvidia doesn't pop within 1-2 years at most.

replies(3): >>41905411 #>>41905869 #>>41905954 #
335. johnnyanmac ◴[] No.41905345{7}[source]
>I’m not sure how this decade is any different than the one that preceded it?

2010, your game has to run on a PS3/Xbox 360. That didn't matter for PC games because all 3 had different architectures. So they were more or less parallel development.

2015, Playstation and Xbox both converged to X86. Porting between platforms is much easier and unified in many ways. But the big "mistake" (or benefit to your case) is that the PS4/XBO did not really try to "future proof" the way consoles usually did. A 2013 $4-500 PC build could run games about as well as a console. From here PCs would only grow.

2020. The PS5/XBX come out at the very end, so games are still more or less stuck with PS4/XBO as a "minium spec", but PCs have advanced a lot. SSDs became standard, we have tech like DLSS and Ray Traced rendering emerging from hardware, 60fps is being more normalized. RAM standards are starting to shift to 16GB over 8. But... Your minimum spec can't use these, so we still need to target 2013 tech. Despite the "pro versions" releasing, most games stlll ran adequately on the base models. Just not 60fps nor over 720p internal rendering.

Now comes 2025. PlayStation barely tapped into the base model's power and is instead releasing a Pro model already. Instead of optimizations, Sony wants to throw more hardware at the problem. The Xbox Series S should have, in theory, anchored the minimum spec, but several high-profile titles are opting out of that requirement.

The difference is happening in real time. There's more and more of a trend to NOT optimize (or at least to push minimum specs to a point where the base models are only lightly considered, à la launch Cyberpunk), and all this will push up specs quite a bit in the PC market as a result. The console market always influences how PCs are targeted. And the console market in Gen 9 seems to be taking a lot less care with the low spec than Gen 8. That worries me from a "they'll support 10-year-old hardware" POV.

>Have I missed a new “Crysis”?

If anything, Cyberpunk was the anti-Crysis in many ways. Kind of showing how we were past the "current gen" back then, but also showing how they so haphazardly disregarded older platforms for lack of proper development time/care. Not because the game was "ahead of its time". It's not like the PS5 performance was amazing to begin with. Just passable.

Specs are going up, but not for the right reasons IMO. I blame the 4k marketing for a good part of this as opposed to focusing on utilizing the huge jump in hardware for more game features, but that's for another rant.

336. n_ary ◴[] No.41905356{3}[source]
I read (or watched?) somewhere that, to build your social media reputation and popularity (i.e. follower count) organically, you must post daily. Something, anything.

An interesting idea would be to set up a cron job that asks an LLM to generate a random motivational quote (more hallucination is more beneficial) or random status and then posts it. Then automate this to generate different posts for X/Bsky/Mastodon/LinkedIn/Insta and you have an auto-generated presence. There is a saying that if you let 1,000 monkeys type on typewriters, you will eventually get Hamlet or something (I forget the exact saying), but with an auto-generated presence, this could be valuable for a particular crowd.
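
The plumbing for such an auto-generated presence is close to trivial; a sketch, with both the generation call and the per-platform posting left as placeholders rather than any real client APIs:

    # One scheduled run: generate a post once, fan it out to each platform.
    PLATFORMS = ["x", "bluesky", "mastodon", "linkedin"]

    def generate_post():
        """Placeholder for an LLM call, e.g. 'write a short motivational quote'."""
        return "Stay curious. Ship something small today."

    def post(platform, text):
        """Placeholder: swap in the platform's real API client here."""
        print(f"[{platform}] {text}")

    def main():
        text = generate_post()
        for platform in PLATFORMS:
            post(platform, text)

    if __name__ == "__main__":
        main()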

replies(1): >>41905608 #
337. dragontamer ◴[] No.41905411{4}[source]
Rumor is that there just isn't enough power to turn on all those fancy AI accelerators and datacenters.

There's a reason Microsoft just outright purchased the entire output of Three Mile Island (a full-sized nuclear power plant).

At some point, people will stop buying GPUs because we've simply run out of power.

My only hope is that we don't suffer some kind of ill effect (e.g., doubled consumer electricity prices, or local municipalities going bankrupt due to rising energy costs). The AI boom has so much money in it that we need to account for tail-wags-the-dog effects.

replies(1): >>41905600 #
338. leblancfg ◴[] No.41905422[source]
In this analogy, the AI hardware is not the public good – the AI models are.
339. falcor84 ◴[] No.41905443{5}[source]
Could you please give an example of what you wanted it to help you with, what you expected and what you got?
340. klabb3 ◴[] No.41905553{4}[source]
Right but that’s just digital infrastructure, and of course that will have positive side effects, no matter what role AI has in the future. Rail otoh is the infrastructure.
341. christianqchung ◴[] No.41905577{3}[source]
Is 3% supposed to be significant? Or did you mean 4 Turbo and 4o mini?
replies(1): >>41906154 #
342. shikon7 ◴[] No.41905600{5}[source]
In a working market, we won't run out of power, but power becomes so expensive that it's no longer viable to use most of the GPUs. The boom shifts to power generation instead, and we will have a similar article "The power investment boom", where people will debate that we will stop building power plants because we've simply run out of GPUs to use that power.
replies(1): >>41905650 #
343. JohnMakin ◴[] No.41905608{4}[source]
People are already doing this in a much lazier way en masse on instagram - they'll steal someone's content, lots of times it's shock/violence content that draws in eyeballs, and will post it. Since they need a description for IG's algorithms, they just paste a random response to a LLM prompt into the reel's description. So, you'll be presented a video of the beirut explosion and the caption will be "No problem! Here's some information on the Mercedes blah blah blah."

Once they reach critical mass, they inevitably start posting porn ads. Weird, weird dynamic we're in now.

344. SoftTalker ◴[] No.41905630[source]
Except railroads, manufacturing plants, telecom fiber and those prior build-outs were for infrastructures that have useful lifetimes measured in decades.

Computing infrastructure that's even one decade old is essentially obsolete. Even desktop PCs are often life-cycled in 5 years, servers often the same.

If it takes AI a decade to find its way, most of today's investment won't be useful at that point.

replies(1): >>41905694 #
345. SoftTalker ◴[] No.41905650{6}[source]
Fortunately we're also electrifying transportation so there will be no shortage of demand for electrical power generation.
346. _heimdall ◴[] No.41905681{5}[source]
Sure, nuclear may fit the bill. That avoids needing a new advancement if we're happy enough with the environmental impact of nuclear generation, but it doesn't avoid all the external costs beyond just the nuclear reaction.

Reactors themselves take a large amount of resources, some rare, to build. Infrastructure is another huge resource suck, all that copper has to come from somewhere. Nuclear has the nice benefit of being on-demand, so it does at least dodge resources needed for energy storage.

347. SoftTalker ◴[] No.41905687{5}[source]
Companies are going to have to do a lot less gatekeeping and siloing of data for this to really work. The companies that are totally transparent even internally are few and far between in my experience.
348. fizx ◴[] No.41905694{3}[source]
Don't think of an H100. Think of the factories, the tooling, the datacenters and the power supply needed to light one up.
349. danans ◴[] No.41905710{5}[source]
Pollution (especially the greenhouse gas type) is like the proverbial frog slowly boiling in the pot of water. Eventually we feel the effects, often in huge ways but disintermediated by time.

Whereas exploiting a new technology like AI for potential profit is like a massive hit of sugar/caffeine/drug in that we feel/act on ASAP.

350. TeaBrain ◴[] No.41905766{4}[source]
>There were hundreds of failed automotive companies

What companies are you referring to?

351. TeaBrain ◴[] No.41905774{4}[source]
That's one automobile company. The parent mentioned "hundreds".
352. jbs789 ◴[] No.41905809[source]
I think this is the reasoned answer.

It’s interesting to observe who is making the counterpoint - it’s often very vocal fundraisers.

Of course you can argue they are raising because they believe, and I don’t (necessarily) doubt that in all cases.

353. margalabargala ◴[] No.41905815{5}[source]
Exactly. SaaS does not enter into it.

People will locally run the open models which are freely released, just like they do today with Llama and Whisper.

Most of the AI SaaS companies won't be around to have anything to say about it, because they will be casualties of the bust that will follow the boom. There will be a few survivors with really excellent models, and some people will pay for those, while many others simply use the good-enough freely available ones.

354. syndicatedjelly ◴[] No.41905869{4}[source]
Comparing a company’s market cap to the US GDP makes no sense to me. The former is the product of shares and stock price. The latter is total financial output of a country. What intuition is that supposed to provide?
replies(1): >>41906259 #
355. pirate787 ◴[] No.41905954{4}[source]
This comparison is silly. First of all, Cisco's scale was assembled through acquisitions, and hardware is a commodity business. Nvidia has largely grown organically and has CUDA software as a unique differentiator.

More importantly, Cisco's P/E at its peak was far higher than Nvidia's is today.

You should use actual financial measures and not GDP national accounts which have zero bearing on business valuation.

replies(1): >>41906134 #
356. warkdarrior ◴[] No.41905978{6}[source]
> The industry's approach is fundamentally wrong, making 20 solutions to a problem with plenty of room to standardize a proper approach [...]. But an LLM isn't going to fix that cultural issue and will suffer from it.

LLM-powered development may push the industry towards standardization. "Oh, CoPilot cannot generate proper code for your SDK/API/service? Sorry, all my developers use CoPilot, so we will not integrate with your SDK/API/service until you provide better, CoPilot-friendly docs and examples."

357. mvdtnz ◴[] No.41906085{7}[source]
Good code isn't just easy to read, it's easy to change. The code in this app is brittle, tightly coupled and likely to break if the app is changed.
replies(1): >>41908383 #
358. TacticalCoder ◴[] No.41906134{5}[source]
I don't think GP's comparison is as silly as you think. People thinking about "money" take many different numbers, from a shitload of sources, into account.

There's a relation between P/E and the future actual revenues of a company.

Imagine that a similar comparison implies it's projected that in a few years Nvidia's revenues will represent 10% of the US's GDP: do we really believe that's going to happen?

The Mag 7 + Broadcom have a market cap that is now 60% of the US's GDP. I know you think it's silly, but... doesn't that say something about the expected revenues of these companies in a few years?

Do we really think the Mag 7 + Broadcom (just an example) are really going to represent the share of actual US GDP that that implies?

Just to be clear: I'm not saying it implies the percentage of US GDP from these 8 companies alone is going to be 60%, but there is a relation between the P/E of a company and its expected revenues. And the revenues of companies do feed into the GDP computation.

I don't think it's as silly as several here think.

I also don't think GP should be downvoted: if we disagree, we can discuss it.

359. zone411 ◴[] No.41906154{4}[source]
It is significant because of the other chart that shows MUCH lower non-response rates for GPT-4o.
360. jordanb ◴[] No.41906189[source]
The only asset from the telecom bubble that was still valuable was the fiber in the ground. All the endpoints were obsolete and had to be replaced within a few years. The fiber could be reused and installing it was expensive, so that was the main asset.

What asset from the AI bubble will still be valuable 5 years later? Probably not any warehouses full of 5 year old GPUs. Maybe nuclear power plants?

replies(1): >>41911471 #
361. rendang ◴[] No.41906216{5}[source]
Interesting. I still find it to be a net positive, but it is amusing when I ask it about a project and the source cited is a Slack thread I wrote 2 days prior
362. rendang ◴[] No.41906259{5}[source]
Comparing to total household wealth would be better (about $140T now, about $40T in 2000)
363. danielmarkbruce ◴[] No.41906460{9}[source]
Yup, it's pretty good.

It's just not a huge winner. Many industries don't work that way, there are no "huge winners" even if there are some companies that are huge. Oil & gas doesn't really have "huge winners". The huge companies are a result of huge amounts of capital being put to work.

364. danielmarkbruce ◴[] No.41906474{9}[source]
Largest employer is a strange way to describe a huge winner.
365. snapcaster ◴[] No.41906566{6}[source]
You're saying "yours" and "you" but from what I can tell you're describing completely different sets of people as some kind of hypocritical single entity
366. barryrandall ◴[] No.41907017{5}[source]
It does, but the general public does not seem to believe anyone can operate it safely enough to allow it in their communities. That position may or may not be supported by facts, but that only matters in countries where politicians don't answer to the electorate.
367. pilgrim0 ◴[] No.41907024{4}[source]
I think it's because settling for less is just not an option; progress at large would be hindered by pressure to use less energy, even though that's arguably the right thing. Hence the demand for more energy itself propels progress and innovation.
replies(1): >>41907233 #
368. tivert ◴[] No.41907055{4}[source]
> Who knows what ideas will spring forth when there are all these AI-capable data-centers sitting out there on the cheap.

You still have to pay for power to run them. A lot of power. It won't be that cheap.

369. tivert ◴[] No.41907077{5}[source]
> It is so bizarre that reducing pollution was not a sufficient driver to build more nuclear power, but training AI models is.

It makes more sense when you understand "training AI models" as "greedily pumping up a bubble."

370. cma ◴[] No.41907223{4}[source]
It has been a game changer for code stuff, surfacing libraries and APIs I didn't know about. And I can verify them with the documentation.

And I don't think that's just for assisting experts: it would be extremely helpful to beginners too as long as they have the mindset that it can be wrong.

371. _heimdall ◴[] No.41907233{5}[source]
If we're meant to believe climate predictions coming out of the UN and similar, potential innovation spurred by an increase in energy production would slam face first into the wall of climate catastrophe even faster.

I'm not even saying I put much faith behind those predictions, but in the context of contradicting climate concerns with tech "innovation" requiring such massive amounts of energy it seems pertinent. We can't have it both ways, either most agree that the climate concerns are baseless or we accept that we collectively would be choosing to destroy the planet faster in the name of progress and innovation.

"Progress" itself is such an interesting term. There's no directionality to it, the only meaning is that we aren't standing still. There's nothing baked into progress that would stop us from progressing right off a cliff, I suppose unless we're already off the cliff and progress could change that.

372. dwaltrip ◴[] No.41907288[source]
Heard of the solar boom? It’s growing exponentially.

Also, it’s not like there is one person in charge of the whole world deciding what happens.

replies(1): >>41909905 #
373. danenania ◴[] No.41907462{7}[source]
If the code can be generated correctly, is it controversial to say that generating it will be more efficient than reading through documentation and/or learning how to use a new library?

If you grant that, the next question is how high the accuracy has to be before it's quicker than doing the research and writing the code yourself. If it's 100%, then it's clearly better, since doing the research and implementation oneself generally takes an hour or so in the best scenario (this can expand to multiple hours or days depending on the task). If it's 99%, it's still probably (much) better, since it will be faster to fix the minor issues than to implement from scratch. If it's 90%, 80%, 70% it becomes a more interesting question.

replies(1): >>41907605 #
374. danenania ◴[] No.41907553{6}[source]
> If, on the other hand, it’s knowledge that isn’t well (or at all) represented, and instead requires experience or experimentation with the relevant system, LLMs don’t do very well. I regularly fail with applying LLMs to tasks that turn out to require such “hidden” knowledge.

It's true enough that there are many tasks like this. But there are also many relatively arcane APIs/protocols/domains that LLMs do a surprisingly good job with. I tend to think it's worth checking which bucket a task falls into before spending hours or days hammering something out myself.

I think many devs are underestimating how arcane the knowledge needs to be before an LLM will be hopeless at a knowledge-based task. There's a lot of code on the internet.

375. Terr_ ◴[] No.41907605{8}[source]
Compare to: "If you can copy-paste from a Stack-overflow answer, is it controversial to say that copy-pasting is more efficient than reading through documentation and/or learning how to use a new library?"
replies(1): >>41907706 #
376. danenania ◴[] No.41907706{9}[source]
If I understand the code and it does exactly what I need, should I type the whole thing out rather than copy-pasting? Sounds like a waste of time to me.
377. cma ◴[] No.41908177{4}[source]
And there are many other things it can do, throw your code into it and ask it to look for bugs/oversights/potential optimizations. Then use your reasoning ability to see if it is right on what it gives back.
378. Mabusto ◴[] No.41908344[source]
I think the goal of minimizing hallucinations needs to be adjusted. When a human "lies", there is a familiarity to it: "I think the restaurant is here." "Wasn't he in Inception?" Humans are good at conveying which information they're certain of and what they're uncertain of, either with vocal tone, body language, or signals in their writing style. I've been trying to use Gemini to ask simple questions and its hallucinations really put me off. It will confidently tell me lies, and now my lizard brain just sees it as unreliable and I'm less likely to ask it things, only because it's not at all able to indicate which information it's certain of. We're never going to get rid of hallucinations because of the probabilistic nature in which LLMs work, but we can get better at adjusting how these are presented to humans.
379. FeepingCreature ◴[] No.41908383{8}[source]
Eh. Honestly, so far Sonnet hasn't had any trouble with it. The thing is that every time it changes anything, it rewrites every line of code anyways just because I ask it "please give me the complete changed file(s) for easy copypasting."

The effort tradeoff is different for AIs than humans. Easy-to-understand-locally is more important than cheap-to-change, because it can do "read and check every line in the project" for like 20 cents. Making AIs code like humans is not playing to their strengths.

I don't think it's that bad anyways.

380. tim333 ◴[] No.41908463{7}[source]
True. Market/economic forecasting is quite unreliable, partly because you're trying to predict human behaviour which is changeable.
381. shadowmanifold ◴[] No.41908970{4}[source]
It is rather comical.

Everyone knows that panning for gold is a fool's game, so we have a gold pan/shovel bubble.

It is like having a massive lumber bubble and calling it a real estate bubble because someday we might actually build those houses.

382. _heimdall ◴[] No.41909905{3}[source]
> Heard of the solar boom? It’s growing exponentially

Solar is a whole other can of worms. I wouldn't expect it to be too useful for LLMs demanding such high energy inputs. Solar only produces for around 5 hours per day depending on latitude. For every megawatt of load needed 24/7 for a GPU farm you would need around 5 megawatts of solar and roughly 20 megawatt-hours of storage (ignoring losses along the way due to transmission, heat, and conversions).
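
Back-of-envelope, assuming roughly 5 equivalent full-sun hours per day and ignoring losses (illustrative numbers, not a real sizing exercise):

    # Rough sizing for 1 MW of constant load served only by solar + storage.
    load_mw = 1                              # 24/7 GPU-farm load
    daily_energy_mwh = load_mw * 24          # 24 MWh needed per day
    sun_hours = 5                            # equivalent full-output hours per day
    solar_mw = daily_energy_mwh / sun_hours  # ~4.8 MW of panels
    storage_mwh = load_mw * (24 - sun_hours) # ~19 MWh to ride through the dark hours
    print(solar_mw, storage_mwh)             # 4.8 19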

> Also, it’s not like there is one person in charge of the whole world deciding what happens.

Totally agree and I didn't mean to imply that. The list is surprisingly small though. More importantly, many of those in charge of the main LLM companies have themselves spoken about how important it is to reduce our environmental impact, going so far as setting very specific targets for their companies to reduce or eliminate their net impact. Those goals all but disappeared after they pivoted to LLM products.

383. marcus_holmes ◴[] No.41910102{12}[source]
I think what you mean is "Desktop" not "OS". You're just replacing all the windows, menus and buttons with a chat interface.
384. RF_Savage ◴[] No.41911465{7}[source]
Ain't that a direct result of them not investing in their infra for decades and all that technical debt catching up to them?
385. RF_Savage ◴[] No.41911471{3}[source]
I doubt they will manage to do a significant nuclear buildout in a mere five years. But future renewables will potentially benefit from the stronger grid.