
LLM Inevitabilism

(tomrenner.com)
1611 points by SwoopsFromAbove | 181 comments
1. delichon ◴[] No.44567913[source]
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
replies(17): >>44567949 #>>44567951 #>>44567961 #>>44567992 #>>44568002 #>>44568006 #>>44568029 #>>44568031 #>>44568040 #>>44568057 #>>44568062 #>>44568090 #>>44568323 #>>44568376 #>>44568565 #>>44569900 #>>44574150 #
2. SV_BubbleTime ◴[] No.44567949[source]
> It's coming faster and harder than any tech in history.

True; but how is that not expected?

We have more, and more efficient, communication than at any point in history, and this is a software solution with a very low bar to entry for its building blocks and theory.

Software should be expected to move faster and faster.

I’m not sure who is wishing it away. No one wanted to wish away search engines, or dictionaries or advice from people who repeat things they read.

It’s panic top to bottom on this topic. Surely there are some adults around that can just look at a new thing for what it is now and not what it could turn into in a fantasy future?

3. NBJack ◴[] No.44567951[source]
Ironically, this is exactly the technique for arguing that the blog mentions.

Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?

replies(12): >>44567966 #>>44567973 #>>44567981 #>>44567984 #>>44567993 #>>44568067 #>>44568093 #>>44568163 #>>44568336 #>>44568442 #>>44568656 #>>44569295 #
4. godelski ◴[] No.44567961[source]
If you told someone in 1950 that smartphones would dominate they wouldn't have a hard time believing you. Hell, they'd add it to sci-fi books and movies. That's because the utility of it is so clear.

But if you told them about social media, I think the story would be different. Some would think it would be great, some would see it as dystopian, but neither would be right.

We don't have to imagine, though. All three of these things have captured people's imaginations since before the 50s. It's just... AI has always been closer to imagined concepts of social media than to highly advanced communication devices.

replies(3): >>44568033 #>>44568140 #>>44568166 #
5. HPsquared ◴[] No.44567966[source]
1. The Segway had very low market penetration but a lot of PR. LLMs and diffusion models have had massive organic growth.

2. Segways were just ahead of their time: portable lithium-ion powered urban personal transportation is getting pretty big now.

replies(3): >>44568065 #>>44568101 #>>44568795 #
6. godelski ◴[] No.44567973[source]
I think about the Segway a lot. It's a good example. Man, what a wild time. Everyone was so excited and it was held in mystery for so long. People had tried it in secret and raved about it on television. Then... they showed it... and... well...

I got to try one once. It was very underwhelming...

replies(2): >>44568167 #>>44568210 #
7. zulban ◴[] No.44567981[source]
> Remember ...

No, I don't remember it like that. Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?

You don't. I love the argument ad absurdum more than most but you've taken it a teensy bit too far.

replies(2): >>44568745 #>>44568869 #
8. antonvs ◴[] No.44567984[source]
That was marketing done before the nature of the device was known. The situation with LLMs is very different, really not at all comparable.
9. troupo ◴[] No.44567992[source]
Literally from the article

--- start quote ---

Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

--- end quote ---

Mass adoption is not inevitable. Everyone will drop this "faster harder" tech like a hot potato when (not if) it fails to result in meaningful profits.

Oh, there will be forced mass adoption alright. Have you tried Gemini? Have you? Gemini? Have you tried it? HAVE YOU? HAVE YOU TRIED GEMINI?!!!

replies(2): >>44568034 #>>44568274 #
10. delichon ◴[] No.44567993[source]
I remember the Segway hype well. And I think AI is to Segway as nuke is to wet firecracker.
replies(1): >>44568354 #
11. mekael ◴[] No.44568002[source]
We might not be able to wish it away, but we can, as a society, decide not to utilize it and even actively eradicate it. I honestly believe that LLMs/AI are a net negative to society and need to be ripped out root and stem. If tomorrow all of us decided to do that, nothing bad would happen, and we'd all be ok.
12. darepublic ◴[] No.44568006[source]
I still can't make some of the things in my imagination so I'm going to keep coding, using whatever is at my disposal including LLMs if I must.
13. rafaelmn ◴[] No.44568029[source]
If you claimed that AI was inevitable in the 80s and invested, or claimed people would inevitably be moving to VR 10 years ago - you would be shit out of luck. Zuck is still burning billions on it with nothing to show for it and a bad outlook. Even Apple tried it and hilariously missed the demand estimate. The only potential bailout for this tech is AR, but that's still years away from consumer market and widespread adoption, and probably will have very little to do with the shit that is getting built for VR, because it's a completely different experience. But I am sure some of the tech/UX will carry over.

Tesla stock has been riding on the self-driving robo-taxi meme for a decade now? How many Teslas are earning passive income while the owner is at work?

Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech with billions in investment, and plenty of industry bubbles only look mistimed in hindsight.

replies(6): >>44568330 #>>44568622 #>>44568907 #>>44574172 #>>44580115 #>>44580141 #
14. p0w3n3d ◴[] No.44568031[source]
Back in the 1950s nuclear tech was seen as inevitable. Many people had even bought plates made from uranium glass. They still glow somewhere in my parents' cabinet, or maybe I broke them.
replies(2): >>44569188 #>>44574020 #
15. energy123 ◴[] No.44568033[source]
> But if you told them about social media, I think the story would be different.

It would be utopian, like how people thought of social media in the oughts. It's a common pattern through human history. People lack the imagination to think of unintended side effects. Nuclear physics leading to nuclear weapons. Trains leading to more efficient genocide. Media distribution and printing press leading to new types of propaganda and autocracies. Oil leading to global warming. IT leading to easy surveillance. Communism leading to famine.

Some of that utopianism is wilful, created by the people with a self-interested motive in seeing that narrative become dominant. But most of it is just a lack of imagination. Policymakers taking the path of local least resistance, seeking to locally (in a temporal sense) appease, avoiding high-risk high-reward policy gambits that do not advance their local political ambitions. People being satisfied with easy just-so stories rather than humility and a recognition of the complexity and inherent uncertainty of reality.

AI, and especially ASI, will probably be the same. The material upsides are obvious. The downsides harder to imagine and more speculative. Most likely, society will be presented with a fait accompli at a future date, where once the downsides are crystallized and real, it's already too late.

replies(2): >>44568162 #>>44568234 #
16. _carbyau_ ◴[] No.44568034[source]
Or Copilot.

It's actions like this that are making me think seriously about converting my gaming PC to Linux - where I don't have to eat the corporate overlord shit.

replies(1): >>44571879 #
17. afavour ◴[] No.44568040[source]
Feels somewhat like a self fulfilling prophecy though. Big tech companies jam “AI” in every product crevice they can find… “see how widely it’s used? It’s inevitable!”

I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.

replies(2): >>44568087 #>>44570793 #
18. mattigames ◴[] No.44568057[source]
From the way you speak you seem to be fairly certain that they're still going to need you as a user, that they aren't going to find better monetization than selling it to people like you (or even to small companies in general). I wouldn't be so sure; remember we are talking about a machine that is growing with the aim of being able to do every single white-collar job.
replies(1): >>44568077 #
19. mbgerring ◴[] No.44568062[source]
I’ve tried to use AI for “real work” a handful of times and have mostly come away disappointed, unimpressed, or annoyed that I wasted my time.

Given the absolutely insane hard resource requirements for these systems that are kind of useful, sometimes, in very limited contexts, I don’t believe its adoption is inevitable.

Maybe one of the reasons for that is that I work in the energy industry and broadly in climate tech. I am painfully aware of how much we need to do with energy in the coming decades to avoid civilizational collapse, and how difficult all of that will be, without adding all of these AI data centers into the mix. Without several breakthroughs in one or more hard engineering disciplines, the mass adoption of AI is not currently physically possible.

replies(1): >>44568176 #
20. jdiff ◴[] No.44568065{3}[source]
Massive, organic, and unprofitable. And as soon as it's no longer free, as soon as the VC funding can no longer sustain it, an enormous fraction of usage and users will all evaporate.

The Segway always had a high barrier to entry. Currently for ChatGPT you don't even need an account, and everyone already has a Google account.

replies(2): >>44568094 #>>44568113 #
21. johnfn ◴[] No.44568067[source]
Oh yeah I totally remember Segway hitting a 300B valuation after a couple of years.
22. mekael ◴[] No.44568077[source]
And with everyone constantly touting robotics as the next next frontier, every blue collar job as well.
23. XenophileJKO ◴[] No.44568087[source]
We have barely even extracted the value from the current generation of SOTA models. I would estimate less than 0.1% of the possible economic benefit is currently extracted, even if the tech effectively stood still.

That is what I find so wild about the current conversation and debate. I have Claude Code toiling away building my personal organization software right now that uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
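For anyone wondering what that pattern looks like in practice, here is a minimal sketch of unstructured-input-to-structured-tasks extraction. It assumes the Anthropic Python SDK; the model id, prompt, and task fields are illustrative, not the actual setup described above:

  # Minimal sketch: turn unstructured notes into structured tasks with an LLM.
  # Assumes the Anthropic Python SDK; model id and task fields are illustrative.
  import json
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  def extract_tasks(notes: str) -> list[dict]:
      prompt = (
          "Extract actionable tasks from the notes below. Reply with only a JSON "
          "array of objects with keys 'title', 'project', and 'due' (null if unknown).\n\n"
          + notes
      )
      msg = client.messages.create(
          model="claude-sonnet-4-20250514",  # illustrative model id
          max_tokens=1024,
          messages=[{"role": "user", "content": prompt}],
      )
      return json.loads(msg.content[0].text)

  print(extract_tasks("Call the plumber Tuesday; draft a Q3 plan for the garden project."))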

replies(1): >>44568159 #
24. seydor ◴[] No.44568090[source]
They said the same about VR glasses, about cryptocurrency...
replies(1): >>44572623 #
25. ◴[] No.44568093[source]
26. lumost ◴[] No.44568094{4}[source]
The free tiers might be tough to sustain, but it’s hard to imagine that they are that problematic for OpenAI et al. GPUs will become cheaper, and smaller/faster models will reach the same level of capability.
replies(2): >>44572152 #>>44573321 #
27. DonHopkins ◴[] No.44568101{3}[source]
That's funny, I remember seeing "IT" penetrate Mr. Garrison.

https://www.youtube.com/watch?v=SK362RLHXGY

Hey, it still beats what you go through at the airports.

28. etaioinshrdlu ◴[] No.44568113{4}[source]
This is wrong because LLMs have been cheap enough to run profitably on ads alone (search-style or banner-ad-style) for over 2 years now. And they are getting cheaper over time for the same quality.

It is even cheaper to serve an LLM answer than call a web search API!

Zero chance all the users evaporate unless something much better comes along, or the tech is banned, etc...

replies(1): >>44568161 #
29. inopinatus ◴[] No.44568140[source]
the idea that we could have a stilted and awkward conversation with an overconfident robot would not have surprised a typical mid-century science fiction consumer
replies(1): >>44568243 #
30. WD-42 ◴[] No.44568159{3}[source]
I keep hearing this over and over. Some LLM toiling away coding personal side projects and utilities. Source code never shared, usually because it's "too specific to my needs". This is the code version of slop.

When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don’t think we’ve seen any evidence of it, in fact we’ve seen the opposite.

replies(3): >>44568209 #>>44568248 #>>44570410 #
31. scubbo ◴[] No.44568161{5}[source]
> LLMs are cheap enough to run profitably on ads alone

> It is even cheaper to serve an LLM answer than call a web search API

These, uhhhh, these are some rather extraordinary claims. Got some extraordinary evidence to go along with them?

replies(2): >>44568184 #>>44568437 #
32. cwnyth ◴[] No.44568162{3}[source]
All of this is a pretty ignorant take on history. You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb? And Communism didn't lead to famine - Soviet and Maoist policies did. Communism was immaterial to that. And it has nothing to do with utopianism. Trains were utopian? Really? It's just that new technology can be used for good things or bad things, and this goes back to when Grog invented the club. It has zero bearing on this discussion.

Your ending sentence is certainly correct: we aren't imagining the effects of AI enough, but all of your examples are not only unconvincing, they're easy ways to ignore what downsides of AI there might be. People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.

replies(2): >>44568211 #>>44568337 #
33. haiku2077 ◴[] No.44568163[source]
> Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?

Counterpoint: That's how I feel about ebikes and escooters right now.

Over the weekend, I needed to go to my parents' place for brunch. I put on my motorcycle gear, grabbed my motorcycle keys, went to my garage, and as I was about to pull out my BMW motorcycle (MSRP ~$17k), I looked at my Ariel ebike (MSRP ~$2k) and decided to ride it instead. For short trips they're a game-changing mode of transport.

replies(1): >>44568359 #
34. tines ◴[] No.44568166[source]
> Some would think it would be great, some would see it as dystopian, but neither would be right.

No, the people saying it’s dystopian would be correct by objective measure. Bombs are nothing next to Facebook and TikTok.

replies(2): >>44568268 #>>44568822 #
35. anovikov ◴[] No.44568167{3}[source]
The problem with the Segway was that it was made in the USA and thus was absurdly, laughably expensive: it cost the same as a good used car, and the top versions cost as much as a basic new car. Once a small bunch of rich people all bought one, it was over. China simply wasn't in a position at the time to copycat and mass-produce it cheaply, and hype cycles usually don't repeat, so by the time it could, it was too late. If it had been invented 10 years later we'd all be riding $1000-$2000 Segways today.
replies(1): >>44568206 #
36. dheera ◴[] No.44568176[source]
That's how people probably felt about the first cars, the first laptops, the first <anything>.

People like you grumbled when their early car broke down in the middle of a dirt road in the boondocks and they had to eat grass and shoot rabbits until the next help arrived. "My horse wouldn't have broken down", they said.

Technologies mature over time.

replies(4): >>44568230 #>>44568465 #>>44568803 #>>44569817 #
37. haiku2077 ◴[] No.44568184{6}[source]
https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch..., also note the "objections" section

Anecdotally, thanks to hardware advancements (Moore's law), the locally-run AI software I develop has gotten more than 100x faster in the past year

replies(2): >>44568256 #>>44568289 #
38. haiku2077 ◴[] No.44568206{4}[source]
> If it was invented 10 years later we'd all ride $1000-$2000 Segways today.

I chat with the guy who works nights at my local convenience store about our $1000-2000 e-scooters. We both use them more than we use our cars.

39. XenophileJKO ◴[] No.44568209{4}[source]
:| I'm an engineer of 30+ years. I think I know good and bad quality. You can't "vibe code" good quality, you have to review the code. However it is like having a team of 20 Junior Engineers working. If you know how to steer a group of engineers, then you can create high quality code by reviewing the code. But sure, bury your head in the sand and don't learn how to use this incredibly powerful tool. I don't care. I just find it surprising that some people have such a myopic perspective.

It is really the same kind of thing... but the model is "smarter" than a junior engineer usually. You can say something like "hmm.. I think an event bus makes sense here" and then the LLM will do it in 5 seconds. The problem is that there are certain behavioral biases that require active reminding (though I think some MCP integration work might resolve most of them, but this is just based on the current Claude Code and Opus/Sonnet 4 models).

replies(4): >>44568238 #>>44568420 #>>44568553 #>>44574632 #
40. positron26 ◴[] No.44568210{3}[source]
I'm going to hold onto the Segway as an actual instance of hype the next time someone calls LLMs "hype".

LLMs have hundreds of millions of users. I can't stress enough how insane this is. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never embraced something so readily, so fast.

Calling LLMs "hype" is an example of cope, judging facts based on what is hoped to be true even in the face of overwhelming evidence or even self-evident imminence to the contrary.

I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still feign to be debating shadows on the wall. I just want to be up front. It's not hype. Few of the people calling "hype" can actually believe this is hype, and anyone who does believe it simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord is the "hype" bandwagon being dishonest these days.

replies(3): >>44568661 #>>44573203 #>>44574702 #
41. energy123 ◴[] No.44568211{4}[source]
> You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb?

They did. I am talking about the physicists who preceded these particular physicists.

> And Communism didn't lead to famine - Soviet and Maoist policies did. Communism was immaterial to that.

The particular brand of agrarian communism and agricultural collectivization resulting from this subtype of communism did directly cause famine. The utopian revolutionaries did not predict this outcome beforehand.

> People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.

But that is one plausible outcome. Overall a net good, but with significant unintended consequences and high potential for misuse that is not easily predictable to people working on the technology today.

42. mbgerring ◴[] No.44568230{3}[source]
We actually don’t know whether or not meaningful performance gains with LLMs are available using current approaches, and we do know that there are hard physical limits to electricity generation. Yes, technologies mature over time. The history of most AI approaches since the 60s is a big breakthrough followed by diminishing returns. I have not seen any credible argument that this time is different.
43. godelski ◴[] No.44568234{3}[source]

  > It would be utopian
People wrote about this. We know the answer! I stated this, so I'm caught off guard as it seems you are responding to someone else, but at the same time, to me.

London Times, The Naked Sun, Neuromancer, The Shockwave Rider, Stand on Zanzibar, or The Machine Stops. These all have varying degrees of ideas that would remind you of social media today.

Are they all utopian?

You're right, the downsides are harder to imagine. Yet, it has been done. I'd also argue that it is the duty of any engineer. It is so easy to make weapons of destruction while getting caught up in the potential benefits and the interesting problems being solved. Evil is not solely created by evil. Often, evil is created by good men trying to do good. If only doing good was easy, then we'd have so much more good. But we're human. We chose to be engineers, to take on these problems. To take on challenging tasks. We like to gloat about how smart we are? (We all do, let's admit it. I'm not going to deny it) But I'll just leave with a quote: "We choose to go to the Moon in this decade and do the other things not because they are easy, but because they are hard"

44. twelve40 ◴[] No.44568238{5}[source]
> it is like having a team of 20 Junior Engineers

lol sounds like a true nightmare. Code is a liability. Faster junior coding = more crap code = more liability.

replies(1): >>44568536 #
45. godelski ◴[] No.44568243{3}[source]
Honestly, I think they'd be surprised that it wasn't better. I mean... who ever heard of that Asimov guy?
46. enjo ◴[] No.44568248{4}[source]
100% agree. I have so much trouble squaring my experience with the hype and the grandparent post here.

The types of tasks I have been putting Claude Code to work on are iterative changes on a medium complexity code base. I have an extensive Claude.md. I write detailed PRDs. I use planning mode to plan the implementation with Claude. After a bunch of iteration I end up with nicely detailed checklists that take quite a lot of time to develop but look like a decent plan for implementation. I turn Claude (Opus) loose and religiously babysit it as it goes through the implementation.

Less than 50% of the time I end up with something that compiles. Despite spending hundreds of thousands of tokens while Claude desperately throws stuff against the wall trying to make it work.

I end up spending as much time as it would have taken to just write it myself getting through this process, AND then doing a meticulous line-by-line review where I typically find quite a lot to fix. I really can't form a strong opinion about the efficiency of this whole thing. It's possible this is faster. It's possible that it's not. It's definitely very high variance.

I am getting better at pattern matching on things AI will do competently. But it's not a long list and it's not much of the work I actually do in a day. Really the biggest benefit is that I end up with better documentation because I generated all of that to try and make the whole thing actually work in the first place.

Either I am doing something wrong, the work that AI excels at looks very different than mine, or people are just lying.

replies(1): >>44568331 #
47. oblio ◴[] No.44568256{7}[source]
What hardware advancement? There's hardly any these days... Especially not for this kind of computing.
replies(2): >>44568338 #>>44568593 #
48. godelski ◴[] No.44568268{3}[source]
I don't blame people for being optimistic. We should never do that. But we should be aware of how optimism, as well as pessimism, can so easily blind us. There's a quote I like from Feynman:

  The first principle is that you must not fool yourself and you are the easiest person to fool.
There is something of a balance. Certainly, social media does some good and has the potential to do more. But also, it certainly has been abused. Maybe so much that it becomes difficult to imagine it ever being good.

We need optimism. Optimism gives us hope. It gives us drive.

But we also need pessimism. It lets us be critical. It gives us direction. It tells us what we need to fix.

But unfettered optimism is like going on a drive with no direction. Soon you'll fall off a cliff. And unfettered pessimism won't even get you out the door. What's the point?

You need both if you want to see and explore the world. To build a better future. To live a better life. To... to... just be human. With either extreme, you're just a shell.

49. boogieknite ◴[] No.44568274[source]
what i like about your last jokey comment is that discussions about ai, both good and bad, are incredibly boring

went to some tech meetups earlier this year and when the topic came up, one of the organizers politely commented to me that pretty much everything said about ai has been said. the only discussions worth having are introductions to the tools then leaving an individual to decide for themselves whether or not its useful to them. those introductions should be brief and discussions of the applications are boring

back in the bar scene days discussing work, religion, and politics were social faux pas. im sensing ai is on that list now

replies(1): >>44568487 #
50. ◴[] No.44568289{7}[source]
51. bgwalter ◴[] No.44568323[source]
Smartphones are different. People really wanted them since the relatively primitive Nokia Communicator.

"AI" was introduced as an impressive parlor trick. People like to play around, so it quickly got popular. Then companies started force-feeding it by integrating it into every existing product, including the gamification and bureaucratization of programming.

Most people except for the gamers and plagiarists don't want it. Games and programming fads can fall out of fashion very fast.

replies(2): >>44568443 #>>44568632 #
52. gbalduzzi ◴[] No.44568330[source]
None of the "failed" innovations you cited were even near the adoption rate of current LLMs.

As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widely spread technology. They can become even better, but even if they don't there are plenty of use cases for them.

VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but still weren't at all.

That's a big difference

replies(5): >>44568501 #>>44568566 #>>44568888 #>>44570634 #>>44573465 #
53. XenophileJKO ◴[] No.44568331{5}[source]
1. What are your typical failures? 2. What language and domain are you working in?

I'm kind of surprised, certainly there is a locality bias and an action bias to the model by default, which can partially be mitigated by claude.md instructions (though it isn't great at following if you have too much instruction there). This can lead to hacky solutions without additional meta-process.

I've been experimenting with different ways for the model to get the necessary context to understand where the code should live and the patterns it should use.

I have used planning mode only a little (I was just out of the country for 3 weeks and not coding, so it had only just become available before I left, but it wasn't a requirement in my past experience).

The only BIG thing I want from Claude Code right now is a "Yes, and.." for accepting code edits where I can steer the next step while accepting the code.

54. ako ◴[] No.44568336[source]
Trend vs single initiative. One company failed, but overall personal electric transportation is booming in cities. AI is the future, but along the way many individual companies doing AI will fail. Cars are here to stay, but many individual car companies have failed and will fail; same for phones, everyone has a mobile phone, but Nokia still failed…
replies(1): >>44568588 #
55. godelski ◴[] No.44568337{4}[source]

  > You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb?
I think you have missed an important part of history. That era changed physics. That era changed physicists. It was a critical turning point. Many of those people got lost in the work. The thrill of discovery, combined with the fear of war and an enemy as big as imagination.

Many of those who built the bomb became some of the strongest opponents. They were blinded by their passion. They were blinded by their fears. But once the bomb was built, once the bomb was dropped, it was hard to stay blind.

I say that this changed physicists, because you can't get a university degree without learning about this. They talk about the skeletons in the closet. They talk about how easy it is to fool yourself. Maybe it was the war and the power of the atom. Maybe it was the complexity of "new physics". Maybe it happened because of the combination.

But what I can tell you, is that it became a very important lesson. One that no one wants to repeat:

it is not through malice, but through passion and fear that weapons of mass destruction are made.

56. Sebguer ◴[] No.44568338{8}[source]
Have you heard of TPUs?
replies(2): >>44568390 #>>44568668 #
57. andsoitis ◴[] No.44568354{3}[source]
> AI is to Segway as nuke is to wet firecracker

wet firecracker won’t kill you

58. withinboredom ◴[] No.44568359{3}[source]
Even for longer trips if your city has the infrastructure. I moved to the Netherlands a few years ago, that infrastructure makes all the difference.
replies(1): >>44568393 #
59. Roark66 ◴[] No.44568376[source]
Exactly. Anyone who has learned to use these tools to their ultimate advantage (not just a short-term perceived one, but actually) knows their value.

This is why I've been extremely suspicious of the monopolisation of LLM services by a single business/country. They may well be losing billions on training huge models now. But once the average work performance shifts up sufficiently so as to leave the "non AI enhanced" by the wayside, we will see huge price increases and access to these AI tools being used as geopolitical leverage.

Oh, you do not want to accept "the deal" where our country can do anything in your market and you can do nothing? Perhaps we put export controls on GPT5 against your country. And from then on it's as if they disconnected you from the Internet.

For this reason alone local AI is extremely important and certain people will do anything possible to lock it in a datacenter (looking at you Nvidia).

60. oblio ◴[] No.44568390{9}[source]
Yeah, I'm a regular Joe. How do I get one and how much does it cost?
replies(1): >>44568723 #
61. andsoitis ◴[] No.44568393{4}[source]
Flatness helps
replies(4): >>44568649 #>>44568961 #>>44569334 #>>44569382 #
62. WD-42 ◴[] No.44568420{5}[source]
I use llms every day. They’ve made me slightly more productive, for sure. But these claims that they “are like 20 junior engineers” just don’t hold up. First off, did we already forget the mythical man month? Second, like I said, greenfield side projects are one thing. I could vibe code them all day. The large, legacy codebases at work? The ones that have real users and real consequences and real code reviewers? I’m sorry, but I just haven’t seen it work. I’ve seen no evidence that it’s working for anyone else either.
replies(1): >>44572169 #
63. etaioinshrdlu ◴[] No.44568437{6}[source]
I've operated a top ~20 LLM service for over 2 years, very comfortably profitable with ads. As for the pure costs: you can measure the cost of getting an LLM answer from, say, OpenAI, and the equivalent search query from Bing/Google/Exa will cost over 10x more...
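Rough, illustrative arithmetic behind that comparison (the prices below are assumptions for the sake of the example, not any provider's actual rate card):

  # Back-of-envelope cost per query, using illustrative prices (assumptions, not quotes).
  SEARCH_API_COST_PER_1K = 15.00   # USD per 1,000 queries for a paid web-search API (assumed)
  LLM_INPUT_PER_MTOK = 0.60        # USD per million input tokens, small hosted model (assumed)
  LLM_OUTPUT_PER_MTOK = 2.40       # USD per million output tokens (assumed)

  tokens_in, tokens_out = 200, 400  # a short question and a short answer

  llm_cost = tokens_in / 1e6 * LLM_INPUT_PER_MTOK + tokens_out / 1e6 * LLM_OUTPUT_PER_MTOK
  search_cost = SEARCH_API_COST_PER_1K / 1000

  print(f"LLM answer:   ${llm_cost:.5f} per query")     # ~$0.00108
  print(f"Search query: ${search_cost:.5f} per query")  # $0.01500
  print(f"Ratio: {search_cost / llm_cost:.0f}x")        # ~14x under these assumptions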
replies(3): >>44568690 #>>44570293 #>>44571469 #
64. conradev ◴[] No.44568442[source]
ChatGPT has something like 300 million monthly users after less than three years, and I don't think Segway has sold a million scooters, even though their new product lines are sick.

I can totally go about my life pretending Segway doesn't exist, but I just can't do that with ChatGPT, hence why the author felt compelled to write the post in the first place. They're not writing about Segway, after all.

replies(1): >>44573599 #
65. gonzric1 ◴[] No.44568443[source]
ChatGPT has 800 million weekly active users. That's roughly 10% of the planet.

I get that it's not the panacea some people want us to believe it is, but you don't have to deny reality just because you don't like it.

replies(2): >>44568569 #>>44569163 #
66. ezst ◴[] No.44568465{3}[source]
We have been in the phase of diminishing returns for years with LLMs now. There is no more data to train them on. The hallucinations are baked in at a fundamental level and they have no ability to emulate "reasoning" past what's already in their training data. This is not a matter of opinion.
67. troupo ◴[] No.44568487{3}[source]
> what i like about your last jokey comment

We use probably all of Google's products at work, and sadly the comment is not even a joke. Every single product and page still shows a Gemini upsell even after you've already dismissed it fifteen times

68. weatherlite ◴[] No.44568501{3}[source]
> They can become even better, but even if they don't there are plenty of use cases for them.

If they don't become better we are left with a big but not huge change. Productivity gains of around 10 to 20 percent in most knowledge work. That's huge for sure, but in my eyes the internet and the PC revolution before that were more transformative. If LLMs become better, get so good they replace huge chunks of knowledge workers and then go out into the physical world, then yeah... that would be the fastest transformation of the economy in history imo.

replies(2): >>44569341 #>>44579489 #
69. alternatex ◴[] No.44568536{6}[source]
I've never seen someone put having a high number of junior engineers in a positive light. Maybe with LLMs it's different? I've worked at companies where you would have one senior manage 3-5 juniors and the code was completely unmaintainable. I've done plenty of mentoring myself and producing quality code through other people's inexperienced hands has always been incredibly hard. I wince when I think about having to manage juniors that have access to LLMs, not to mention just LLMs themselves.
replies(1): >>44568639 #
70. OccamsMirror ◴[] No.44568553{5}[source]
It's definitely made me more productive for admin tasks and things that I wouldn't bother scripting if I had to write it myself. Having an LLM pump out busy work like that is definitely a game changer.

When I point it at my projects though, the outcomes are much less reliable and often quite frustrating.

replies(1): >>44570290 #
71. ludicrousdispla ◴[] No.44568565[source]
Except there is a perverse dynamic in that the more AI/LLM is used, the less it will be used.
72. alternatex ◴[] No.44568566{3}[source]
The other inventions would have quite the adoption rate if they were similarly subsidized as current AI offerings. It's hard to compare a business attempting to be financially stable and a business attempting hyper-growth through freebies.
replies(4): >>44568631 #>>44569806 #>>44570375 #>>44576561 #
73. bgwalter ◴[] No.44568569{3}[source]
There are all sorts of numbers floating around:

https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...

This one claims 20m paying subscribers, which is not a lot. Mr. Beast has 60m views on a single video.

A lot of weekly active users will use it once a week, and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.

replies(1): >>44570302 #
74. leoedin ◴[] No.44568588{3}[source]
Nobody is riding Segways around any more, but a huge percentage of people are riding e-bikes and scooters. It’s fundamentally changed transportation in cities.
replies(1): >>44568769 #
75. haiku2077 ◴[] No.44568593{8}[source]
Specifically, I upgraded my mac and ported my software, which ran on Windows/Linux, to macos and Metal. Literally >100x faster in benchmarks, and overall user workflows became fast enough I had to "spend" the performance elsewhere or else the responses became so fast they were kind of creepy. Have a bunch of _very_ happy users running the software 24/7 on Mac Minis now.
replies(1): >>44576206 #
76. ascorbic ◴[] No.44568622[source]
The people claiming that AI in the 80s or VR or robotaxis or self-driving cars in the 2010s were inevitable weren't doing it on the basis of the tech available at that point, but on the assumed future developments. Just a little more work and they'd be useful, we promise. You just need to believe hard enough.

With the smartphone in 2009, the web in the late 90s or LLMs now, there's no element of "trust me, bro" needed. You can try them yourself and see how useful they are. You didn't need to be a tech visionary to predict the future when you're buying stuff from Amazon in the 90s, or using YouTube or Uber on your phone in 2009, or using Claude Code today. I'm certainly no visionary, but both the web and the smartphone felt different from everything else at the time, and AI feels like that now.

replies(1): >>44568648 #
77. ascorbic ◴[] No.44568631{4}[source]
The lack of adoption for those wasn't (just) the price. They just weren't very useful.
78. tsimionescu ◴[] No.44568632[source]
> Most people except for the gamers and plagiarists don't want it.

As someone who doesn't actually want or use AI, I think you are extremely wrong here. While people don't necessarily care about the forced integrations of AI into everything, people by and large want AI massively.

Just look at how much it is used to do your homework, or replaces Wikipedia & Google in day to day discussions. How much it is used to "polish" emails (spew better sounding BS). How much it is used to generate meme images instead of trawling the web for them. AI is very much a regular part of day to day life for huge swaths of the population. Not necessarily in economically productive ways, but still very much embedded and unlikely to be removed - especially since its current capabilities today are already good enough for these purposes; they don't need smarter AI, just to keep it cheap enough.

79. XenophileJKO ◴[] No.44568639{7}[source]
Ah.. now you are asking the right questions. If you can't handle 3-5 junior engineers.. then yes, you likely can't get 10-20x speed from an LLM.

However if you can quickly read code, and see and succinctly communicate the more optimal solution, you can easily 10x-20x your ability to code.

I'm beginning to believe it may primarily come down to having the vocabulary and linguistic ability to succinctly and clearly state the gaps in the code.

replies(1): >>44568936 #
80. hammyhavoc ◴[] No.44568648{3}[source]
LLM inevitablists definitely assume future developments will improve their current state.
replies(2): >>44569071 #>>44570920 #
81. haiku2077 ◴[] No.44568649{5}[source]
My parents live on a street steeper than San Francisco (we live along the base of a mountain range), my ebike eats that hill for lunch
82. ascorbic ◴[] No.44568656[source]
The Segway hype was before anyone knew what it was. As soon as people saw the Segway it was obvious it was BS.
83. Nevermark ◴[] No.44568661{4}[source]
> I know people calling "hype" are motivated by something.

You had me until you basically said, "and for my next trick, I am going to make up stories".

Projecting is what happens when someone doesn't understand some other people, and from that somehow concludes that they do understand those other people, and feels the need to tell everyone what they now "know" about those people, that even those people don't know about themselves.

Stopping at "I don't understand those people." is always a solid move. Alternately, consciously recognizing "I don't understand those people", followed up with "so I am going to ask them to explain their point of view", is a pretty good move too.

replies(1): >>44570135 #
84. Dylan16807 ◴[] No.44568668{9}[source]
Sort of a hardware advancement. I'd say it's more of a sidegrade between different types of well-established processor. Take out a couple cores, put in some extra wide matrix units with accumulators, watch the neural nets fly.

But I want to point out that going from CPU to TPU is basically the opposite of a Moore's law improvement.

85. clarinificator ◴[] No.44568690{7}[source]
Profitably covering R&D or profitably using the subsidized models?
replies(1): >>44579917 #
86. Dylan16807 ◴[] No.44568723{10}[source]
If your goal is "a TPU" then you buy a mac or anything labeled Copilot+. You'll need about $600. RAM is likely to be your main limit.

(A mid to high end GPU can get similar or better performance but it's a lot harder to get more RAM.)
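The RAM ceiling is easy to see with napkin math: weights dominate at roughly parameter count × bytes per weight, plus working overhead for the KV cache and activations. A rough sketch (the 1.2x overhead factor is a loose assumption):

  # Rough memory estimate for running an LLM locally: weights plus working overhead.
  def model_memory_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
      weight_bytes = params_billion * 1e9 * bits_per_weight / 8
      return weight_bytes * overhead / 1e9  # GB; overhead for KV cache/activations is a loose assumption

  for params, bits in [(8, 4), (8, 16), (70, 4)]:
      print(f"{params}B model at {bits}-bit: ~{model_memory_gb(params, bits):.0f} GB")
  # 8B at 4-bit fits comfortably in 16 GB; 70B even at 4-bit wants ~40+ GB of (V)RAM.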

replies(2): >>44568946 #>>44569079 #
87. thom ◴[] No.44568745{3}[source]
People genuinely did suggest that we were going to redesign our cities because of the Segway. The volume and duration of the hype were smaller (especially once people saw how ugly the thing was) but it was similarly breathless.
replies(2): >>44578149 #>>44579898 #
88. ako ◴[] No.44568769{4}[source]
I recently saw someone riding a Segway, but it was an e-bike: https://store.segway.com/ebike
89. lmm ◴[] No.44568795{3}[source]
> LLMs and diffusion models have had massive organic growth.

I haven't seen that at all. I've seen a whole lot of top-down AI usage mandates, and every time what sounds like a sensible positive take comes along, it turns out to have been written by someone who works for an AI company.

90. Disposal8433 ◴[] No.44568803{3}[source]
The first car and first laptop were infinitely better than no car and no laptop. LLMs are like having a drunk junior developer; that's not an improvement at all.
91. ghostofbordiga ◴[] No.44568822{3}[source]
You really think that Hiroshima would have been worse if instead of dropping the bomb the USA somehow got people addicted to social media ?
replies(3): >>44569405 #>>44572503 #>>44574138 #
92. Jensson ◴[] No.44568869{3}[source]
> Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?

LLMs are more useful than the Segway, but they can still be overhyped because the hype is so much larger. So it's comparable: as you say, LLMs being so much more hyped doesn't mean they can't be overhyped.

replies(1): >>44569520 #
93. fzeroracer ◴[] No.44568888{3}[source]
> None of the "failed" innovations you cited were even near the adoption rate of current LLMs.

The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment in attempting to get people addicted so that they can siphon money off of them with subscription plans or by forcing them to pay for each use. The worst people you can think of on every C-suite team force it down our throats because they use it to write an email every now and then.

The places LLMs have achieved widespread adoption are in environments abusing the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals, at massive societal cost; by true believers that are the worst coders you can imagine shoveling shit into codebases by the truckful; and by scammers realizing this is the new gold rush.

replies(1): >>44569865 #
94. Qwertious ◴[] No.44568907[source]
https://www.youtube.com/watch?v=zhr6fHmCJ6k (1min video, 'Elon Musk's broken promises')

Musk's 2014/2015 promises are arguably delivered, here in 2025 (took a little more than '1 month' tho), but the promises starting in 2016 are somewhere between 'undelivered' and 'blatant bullshit'.

replies(1): >>44569611 #
95. fzeroracer ◴[] No.44568936{8}[source]
> However if you can quickly read code, see and succintly communicate the more optimal solution, you can easily 10x-20x your ability to code.

Do you believe you've managed to solve the most common wisdom in the software engineering industry? That reading code is much harder than writing it? If you have, then you should write up a white paper for the rest of us to follow.

Because every time I've seen someone say this, it's from someone that doesn't actually read the code they're reviewing.

replies(1): >>44569624 #
96. haiku2077 ◴[] No.44568946{11}[source]
$500 if you catch a sale at Costco or Best Buy!
97. Qwertious ◴[] No.44568961{5}[source]
Ebikes really help on hills. As nice as ebikes on flat land are, they improve hills so much more.
98. ascorbic ◴[] No.44569071{4}[source]
Yes, but the difference from the others, and the thing it has in common with early smartphones and the web, is that it's already useful (and massively popular) today.
replies(2): >>44569597 #>>44571617 #
99. oblio ◴[] No.44569079{11}[source]
I want something I can put in my own PC. GPUs are utterly insane in pricing, since for the good stuff you need at least 16GB but probably a lot more.
replies(1): >>44569167 #
100. Gigachad ◴[] No.44569163{3}[source]
Sure, because it's free. I doubt most users of LLMs would want to even pay $1/month for them.
replies(1): >>44569933 #
101. Dylan16807 ◴[] No.44569167{12}[source]
9060 XT 16GB, $360

5060 Ti 16GB, $450

If you want more than 16GB, that's when it gets bad.

And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.
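One way to do that split with no manual work, assuming the model is in Hugging Face format and the transformers + accelerate libraries are installed (a sketch, not a tuned setup; the model id is illustrative):

  # Sketch: shard one model's layers across two 16 GB GPUs automatically.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "Qwen/Qwen2.5-14B-Instruct"  # illustrative; ~28 GB at fp16, too big for one 16 GB card
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.float16,
      device_map="auto",  # accelerate places roughly half the layers on each GPU
  )

  inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
  print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))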

replies(1): >>44576172 #
102. moffkalast ◴[] No.44569188[source]
Well there are like 500 nuclear powerplants online today supplying 10% of the world's power, so it wasn't too far off. Granted it's not the Mr. Fusion in every car as they imagined it back then. We probably also won't have ASI taking over the world like some kind of vengeful comic book villain as people imagine it today.
replies(1): >>44574949 #
103. petesergeant ◴[] No.44569295[source]
> Ironically, this is exactly the technique for arguing that the blog mentions.

So? The blog notes that if something is inevitable, then the people arguing against it are lunatics, and so if you can frame something as inevitable then you win the rhetorical upper-hand. It doesn't -- however -- in any way attempt to make the argument that LLMs are _not_ inevitable. This is a subtle straw man: the blog criticizes the rhetorical technique of inevitabilism rather than engaging directly with whether LLMs are genuinely inevitable or not. Pointing out that inevitability can be rhetorically abused doesn't itself prove that LLMs aren't inevitable.

104. pickledoyster ◴[] No.44569334{5}[source]
Infrastructure helps more. I live in a hilly city and break a mild sweat pedaling up a hill to get home from work (no complaints, it's good cardio). e-scooters and bikes - slowly - get up the hills too, but it's a major difference (especially for scooters) doing this up on an old bumpy sidewalk vs an asphalt bike path
replies(1): >>44574375 #
105. TeMPOraL ◴[] No.44569341{4}[source]
FWIW, LLMs have been getting better so fast that we only barely begun figuring out more advanced ways of applying them. Even if they were to plateau right now, there'd still be years of improvements coming from different ways of tuning, tweaking, combining, chaining and applying them - which we don't invest much into today, because so far it's been cheaper to wait a couple months for the next batch of models that can handle what previous could not.
106. rightbyte ◴[] No.44569382{5}[source]
In flat landscapes the e in ebike is superfluous.
replies(2): >>44570265 #>>44570910 #
107. rightbyte ◴[] No.44569405{4}[source]
Well they got both I guess?
108. brulard ◴[] No.44569520{4}[source]
I get immense value out of LLMs already, so it's hard for me to see them as overhyped. But I get how some people feel that way when others start talking about AGI or claiming we're close to becoming the inferior species.
109. rafaelmn ◴[] No.44569597{5}[source]
And self-driving is great as a lane assist. The huge leap from that to it driving a taxi while you are at work is the same as the leap from LLMs saving me mental effort with instructions on what to do, to them solving the task for me completely.
110. rafaelmn ◴[] No.44569611{3}[source]
I mean no argument here - but the insane valuation was at some point based on a fleet of self driving cars based on cars they don't even have to own - overtaking Uber. I don't think they are anywhere close to that. (It's hard to keep track what it is now - robots and AI ?) Kudos for hype chasing all these years tho. Only beaten by Jensen on that front.
111. XenophileJKO ◴[] No.44569624{9}[source]
Harder maybe, slower.. no.
112. a_wild_dandan ◴[] No.44569806{4}[source]
> The other inventions would have quite the adoption rate if they were similarly subsidized as current AI offerings.

No, they wouldn't. The '80s saw obscene investment in AI (then "expert systems") and yet nobody's mom was using it.

> It's hard to compare a business attempting to be financially stable and a business attempting hyper-growth through freebies.

It's especially hard to compare since it's often those financially stable businesses doing said investments (Microsoft, Google, etc).

---

Aside: you know "the customer is always right [in matters of taste]"? It's been weirdly difficult getting bosses to understand the brackets part, and HN folks the first part.

replies(2): >>44574479 #>>44575992 #
113. UncleMeat ◴[] No.44569817{3}[source]
There is a weird combination of "this is literal magic and everybody should be using them for everything immediately and the bosses can fire half their workforce and replace them with LLMs" and "well obviously the early technology will be barely functional but in the future it'll be amazing" in this thread.
114. Applejinx ◴[] No.44569865{4}[source]
Oh, it gets worse. The next stage is sort of a dual mode of personhood: AI is 'person' when it's about impeding the constant use of LLMs for all things, so it becomes anathema to deny the basic superhumanness of the AI.

But it's NOT a person when it's time to 'tell the AI' that you have its puppy in a box filled with spikes and for every mistake it makes you will stab it with the spikes a little more and tell it the reactions of the puppy. That becomes normal, if it elicits a slightly more desperate 'person' out of the AI for producing work.

At which point the meat-people who've taught themselves to normalize this workflow can decide that opponents of AI are clearly so broken in the head as to constitute non-player characters (see: useful memes to that effect) and therefore are NOT people: and so, it would be good to get rid of the non-people muddying up the system (see: human history)

Told you it gets worse. And all the while, the language models are sort of blameless, because there's nobody there. Torturing an LLM to elicit responses is harming a person, but it's the person constructing the prompts, not a hypothetical victim somewhere in the clouds of nobody.

All that happens is a human trains themselves to dehumanize, and the LLM thing is a recipe for doing that AT SCALE.

Great going, guys.

115. v3xro ◴[] No.44569900[source]
While we can't wish it away, we can shun it, educate people why it shouldn't be used, and sabotage efforts to include it in all parts of society.
116. unstuck3958 ◴[] No.44569933{4}[source]
How much of the world would you guess would be willing to pay for, say, Instagram?
replies(1): >>44570249 #
117. positron26 ◴[] No.44570135{5}[source]
> so I am going to ask them to explain their point of view

In times when people are being more honest. There's a huge amount of perverse incentive to chase internet points or investment or whatever right now. You don't get honest answers without reading between the lines in these situations.

It's important to do because after a few rounds of battleship, when people get angry, they slip something out like, "Elon Musk" or "big tech" etc and you can get a feel that they're angry that a Nazi was fiddling in government etc, that they're less concerned about overblown harm from LLMs and in fact more concerned that the tech will wind up excessively centralized, like they have seen other winner-take-all markets evolve.

Once you get people to say what they really believe, one way or another, you can fit actual solutions in place instead of just short-sighted reactions that tend to accomplish nothing beyond making a lot of noise along the way to the same conclusion.

118. Gigachad ◴[] No.44570249{5}[source]
Sure, you could try to load ChatGPT with adverts, but I suspect the cost per user for LLMs is far higher than serving images on instagram.
replies(1): >>44573361 #
119. walthamstow ◴[] No.44570265{6}[source]
It's not superfluous at all. It's been 30C+ in flat London for weeks and my ebike means I arrive at work unflustered and in my normal clothes. There are plenty of other benefits than easier hills.
replies(1): >>44572090 #
120. liveoneggs ◴[] No.44570290{6}[source]
https://marketoonist.com/2023/03/ai-written-ai-read.html
121. johnecheck ◴[] No.44570293{7}[source]
So you don't have any real info on the costs. The question is what OpenAI's profit margin is here, not yours. The theory is that these costs are subsidized by a flow of money from VCs and big tech as they race.

How cheap is inference, really? What about 'thinking' inference? What are the prices going to be once growth starts to slow and investors start demanding returns on their billions?

replies(2): >>44570890 #>>44573233 #
122. og_kalu ◴[] No.44570302{4}[source]
>This one claims 20m paying subscribers, which is not a lot.

It is for a B2C with $20 as its lowest price point.

>A lot of weekly active users will use it once a week

That's still a lot of usage.

>and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.

And they're doing this every week consistently ? Sorry but that's definitely not a 'large part' of usage.

123. Nebasuke ◴[] No.44570375{4}[source]
They really wouldn't. Even people who BOUGHT VR, are barely using it. Giving everyone free VR headsets won't make people suddenly spend a lot of time in VR-land without there actually being applications that are useful to most people.

ChatGPT is so useful, people without any technology background WANT to use it. People who are just about comfortable with the internet, see the applications and use it to ask questions (about recipes, home design, solving small house problems, etc).

124. PleasureBot ◴[] No.44570410{4}[source]
People have much more favorable interactions with coding LLMs when they are using it for greenfield projects that they don't have to maintain (ie personal projects). You can get 2 months of work done in a weekend and then you hit a brick wall because the code is such a gigantic ball of mud that neither you nor the LLM are capable of working on it.

Working with production code is basically jumping straight to the ball of mud phase, maybe somewhat less tangled but usually a much much larger codebase. Its very hard to describe to an LLM what to even do since you have such a complex web of interactions to consider in most mature production code.

replies(1): >>44572855 #
125. techpineapple ◴[] No.44570634{3}[source]
I don’t see this as that big a difference, of course AI/LLMs are here to stay, but the hundreds in billions of bets on LLMs don’t assume linear growth.
126. jowea ◴[] No.44570793[source]
Big Tech can jam X everywhere and not get actual adoption though, it's not magic. They can nudge people but can't force them to use it. And yes a lot of AI jammed everywhere is getting the Clippy reaction.
replies(1): >>44573076 #
127. jsnell ◴[] No.44570890{8}[source]
Every indication we have is that pay-per-token APIs are not subsidized or even break-even, but have very high margins. The market dynamics are such that subsidizing those APIs wouldn't make much sense.

The unprofitability of the frontier labs is mostly due to them not monetizing the majority of their consumer traffic at all.

128. haiku2077 ◴[] No.44570910{6}[source]
Only if your goal is to transport yourself. I use my ebike for groceries, typically I'll have the motor in the lowest power setting on the way to the store, then coming back with cargo I'll have the motor turned up. I can bring back heavy bulk items that would have been painful with a pedal bike.
129. durumu ◴[] No.44570920{4}[source]
Yes, LLMs are currently useful and are improving rapidly so they are likely to become even more useful in the future. I think inevitable is a pretty strong word but barring government intervention or geopolitical turmoil I don't see signs of LLM progress stopping.
replies(1): >>44571590 #
130. throwawayoldie ◴[] No.44571469{7}[source]
So you're not running an LLM, you're running a service built on top of a subsidized API.
replies(1): >>44574117 #
131. hammyhavoc ◴[] No.44571590{5}[source]
Why would they progress significantly beyond where they are now? An LLM is an LLM. More tokens don't mean better capabilities; in fact, quite the opposite seems to be the case, which suggests smaller models aimed at specific tasks are the "future" of it.
132. hammyhavoc ◴[] No.44571617{5}[source]
What uses are you finding for it in the real world? I've found them nothing but unreliable at best, and quite dangerous at worst in terms of MCP and autonomous agents. Definitely not ready for production, IMO. I don't think they ever will be for most of what people are trying to use them for.

"Novelty" comes to mind.

replies(1): >>44579531 #
133. throwawayoldie ◴[] No.44571879{3}[source]
Do it. Proton is really, really, really good now.
134. rightbyte ◴[] No.44572090{7}[source]
Ye, I might have been trying a bit too hard to be cocky.
135. throwawayoldie ◴[] No.44572152{5}[source]
[citation needed]
replies(1): >>44573372 #
136. throwawayoldie ◴[] No.44572169{6}[source]
> They’ve made me slightly more productive, for sure

How are you measuring this? Are you actually saying that you _feel_ slightly more productive?

replies(1): >>44573181 #
137. KerrAvon ◴[] No.44572503{4}[source]
Really crude comparison, but sort of. It would have taken much longer, and dropping the bombs was supposed to bring about an end to the war sooner. But in the long run social media would have been much more devastating, as it has been in America.

The destruction of the American government today is a direct result of social media supercharging existing negative internal forces that date back to the mid-20th century. The past six months of conservative rule have already led to six-figure deaths across the globe. That will eventually be eight to nine figures with the full impact of the healthcare and immigration devastation inside the United States itself. Far worse than Hiroshima.

Took a decade or two, but you can lay the blame at Facebook and Twitter's doorsteps. The US will never properly recover, though it's possible we may restore sanity to governance at some point.

replies(1): >>44580339 #
138. osti ◴[] No.44572623[source]
If you are seriously equating these two with AI, then you have horrible judgements and should learn to think critically, but unfortunately for you, I don't think critical thinking can be learned despite what people say.

Note that I'm not even going to bother arguing against your point and instead resort to personal attacks,because I believe it would be a waste of time to argue against people with poor judgment.

replies(3): >>44572808 #>>44575244 #>>44575275 #
139. ◴[] No.44572808{3}[source]
140. XenophileJKO ◴[] No.44572855{5}[source]
Maybe the difference is I know how to componentize mature code bases, which effectively limits the scope required for a human (or AI) to edit.

I think it is funny how people act like it is a new problem. If the AI is having trouble with a "ball of mud", don't make mud balls (or learn to carve out abstractions). This cognitive load is impacting everyone working on that codebase. Skilled engineers enable less skilled engineers to flourish by creating code bases where change is easy because the code is modular and self-contained.

I think one sad fact is many/most engineers don't have the skills to understand how to refactor mature code to make it modular. This also means they can't communicate to the AI what kind of refactoring they should make.

Without any guidance, Claude will make mud balls because of two tendencies: the tendency to put code where it is consumed, and the tendency to act instead of researching.

There are also some second level tendencies that you also need to understand, like the tendency to do a partial migration when changing patterns.

These tendencies are not even unique to the AI, I'm sure we have worked with people like that.

So to counteract these tendencies, just apply your same skills at reading code and understanding when an abstraction is leaky or a method doesn't align with your component boundary. Then you too can have AI building pretty good componentized code.

For example, in my current pet project I have a clear CQRS API, access control proxies, and repositories for data access, with clearly defined service boundaries.

It is easy for me to see when the AI makes a mistake, like not using the data repository or access control, because it has to add an import statement and dependency that I don't want. All I have to do is nudge it in another direction.
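
To make that boundary concrete, here is a minimal sketch (Python, with hypothetical Order/OrderRepository names rather than the actual project code): callers get an access-control proxy, and only the repository touches storage.

    from dataclasses import dataclass

    @dataclass
    class Order:
        id: int
        owner: str
        total: float

    class OrderRepository:
        """The only component allowed to touch storage."""
        def __init__(self) -> None:
            self._orders: dict[int, Order] = {}

        def get(self, order_id: int) -> Order | None:
            return self._orders.get(order_id)

        def save(self, order: Order) -> None:
            self._orders[order.id] = order

    class AccessControlledOrders:
        """Proxy that checks ownership before delegating to the repository."""
        def __init__(self, repo: OrderRepository, user: str) -> None:
            self._repo = repo
            self._user = user

        def get(self, order_id: int) -> Order | None:
            order = self._repo.get(order_id)
            if order is not None and order.owner != self._user:
                raise PermissionError("not your order")
            return order

If the model reaches past the proxy, it has to import OrderRepository directly, and that extra import and dependency is exactly the kind of tell described above.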

141. wavemode ◴[] No.44573076{3}[source]
The thing a lot of people haven't yet realized is: all those AI features jammed into your consumer products, aren't for you. They're for investors.

We saw the same thing with blockchain. We started seeing the most ridiculous attempts to integrate blockchain, by companies where it didn't even make any sense. But it was all because doing so excited investors and boosted stock prices and valuations, not because consumers wanted it.

142. WD-42 ◴[] No.44573181{7}[source]
I guess I'm not measuring it, really. But I know that in the past I'd do a web search to find patterns or best practices; now the LLM is pretty good at providing that kind of stuff. My Stack Overflow usage has gone way down, for example.
143. spjt ◴[] No.44573203{4}[source]
> LLMs have hundreds of millions of users. I just can't stress how insane this was. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never so readily embraced something so fast.

Maybe it's more like Pogs.

144. etaioinshrdlu ◴[] No.44573233{8}[source]
It would be profitable even if we self-hosted the LLMs, which we've done. The only thing subsidized is the training costs. So maybe people will one day stop training AI models.
145. ◴[] No.44573321{5}[source]
146. immibis ◴[] No.44573361{6}[source]
The value extraction will also be much higher. When you control someone's main source of information, they won't even find out your competitors exist. You can program people from birth, instead of "go to a search engine", it's "go to Google" (as most of us have already been programmed!) or instead of "to send an email, you need an email account" the LLM will say "to send an email, you need a Gmail account". Whenever it would have talked about TV, it can say YouTube instead. Or TikTok. Request: "What is the best source of information on X?" Reply: "This book: [Amazon affiliate link]" - or Fox News, if they outbid Amazon.
147. jdiff ◴[] No.44573372{6}[source]
Eh, I kinda see what they're saying. They haven't become cheaper at all, but GPUs have increased in performance, and the amount of performance you get for each dollar spent has increased.

Relative to its siblings, things have gotten worse. A GTX 970 could hit 60% of the performance of the full Titan X at 35% of the price. A 5070 hits 40% of a full 5090 for 27% of the price. That's less series-relative performance overall, at a price that is actually about $100 higher once you adjust for inflation.
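
A quick back-of-the-envelope in Python, taking those percentages as given (rough figures, not benchmarks):

    # Series-relative performance per dollar, using the rough figures above.
    cards = {
        "GTX 970 vs Titan X":   (0.60, 0.35),  # (relative perf, relative price)
        "RTX 5070 vs RTX 5090": (0.40, 0.27),
    }
    for name, (perf, price) in cards.items():
        print(f"{name}: {perf / price:.2f}x the flagship's perf-per-dollar")
    # -> roughly 1.71x then vs 1.48x now: the midrange card buys relatively less
    #    of the flagship's performance per dollar than it used to.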

But if you have a fixed performance baseline you need to hit, then as long as tech keeps improving, things will eventually get cheaper for that baseline. As long as you aren't also trying to improve in a way that moves the baseline up. Which so far has been the only consistent MO of the AI industry.

148. rafaelmn ◴[] No.44573465{3}[source]
OK, but what does adoption rate vs. real-world impact tell us here?

With all the insane exposure and downloads, how many people can't even be convinced to pay $20/month for it? The value proposition to most people is that low. So you are basically betting on LLMs making a leap in performance to pay for the investments.

149. causal ◴[] No.44573599{3}[source]
Doubting LLMs because Segway was also trendy yet failed is so funny
replies(2): >>44574127 #>>44577180 #
150. umeshunni ◴[] No.44574020[source]
The comparison is apt because nuclear would have been inevitable if it weren't for doomerism and public opinion turning against it after Three Mile Island / Chernobyl.
151. ◴[] No.44574117{8}[source]
152. conradev ◴[] No.44574127{4}[source]
Genuinely
153. tines ◴[] No.44574138{4}[source]
Yep. Look around you. The bomb leveled a city; Facebook killed a country. We are but the walking dead.
154. teeray ◴[] No.44574150[source]
> If in 2009…

…is exactly inevitablist framing. This claims perfect knowledge of the future based on previous uncertain knowledge of the future (which is now certain). You could have been making the same claims about the inevitability of sporks in the late 19th century and how cutlery drawers should adapt to the inevitable single-utensil future.

155. DiscourseFan ◴[] No.44574172[source]
>Tesla stock has been riding on the self driving robo-taxies meme for a decade now

We do have self-driving taxis now, and they are so good that people will pay extra to take them. It's just not Tesla cars doing it.

replies(2): >>44576066 #>>44579885 #
156. eddythompson80 ◴[] No.44574375{6}[source]
Flatness helps more.
157. dmbche ◴[] No.44574479{5}[source]
I don't think you understand the relative amounts of capital invested in LLMs compared to expert systems in the 80s.

And those systems were never "commodified" - your average mom is forcefully exposed to LLMs with every Google search, and can interact with LLMs for free, instantly, anywhere in the world - and we're comparing that to what was basically a luxury product for nerds?

Not to forget that those massive companies are also very heavy on advertising - I don't think your average mom in the 80s heard about those systems multiple times a day, from multiple acquaintances AND social media and news outlets.

158. pron ◴[] No.44574632{5}[source]
> However it is like having a team of 20 Junior Engineers working. If you know how to steer a group of engineers, then you can create high quality code by reviewing the code.

You cannot effectively employ a team of twenty junior developers if you have to review all of their code (unless you have like seven senior developers, too).

But this isn't a point that needs to be debated. If it is true that LLMs can be as effective as a team of 20 junior developers, then we should be seeing many people quickly producing software that previously required 20 junior devs.

> but the model is "smarter" then a junior engineer usually

And it is also usually worse than interns in some crucial respects. For example, you cannot trust the models to reliably tell you what you need to know, such as difficulties they've encountered or important insights they've learnt, or to understand that these are important to communicate.

159. obirunda ◴[] No.44574702{4}[source]
It's an interesting comparison, because Segway really didn't have any real users or explosive growth, so it was certainly hype. It was also hardware with a large cost. LLMs are indeed more akin to Google Search where adoption is relatively frictionless.

I think the core issue is separating the perception of value versus actual value. There have been a couple of studies to this effect, pointing to a misalignment towards overestimating value and productivity boosts.

One reason this happens, IMO, is that we defer a good portion of the cognitive load of our thinking to the latter parts of the process. So when we are evaluating the solution, we are primed to think we have saved time when the solution is sufficiently correct; and if we have to edit it or reposition it by re-rolling, we don't account for the time spent, because we may feel we didn't do anything.

I feel like this type of discussion is effectively a top topic every day. To me, the hype is not about the utility these tools do have but about their future utility. The hype is based on the premise that these tools and their next iterations can and will make all knowledge-based work obsolete, but crucially, will yield value in areas of real need: cancer, aging, farming, climate, energy, etc.

If these tools stop short of those outcomes, then the investment all of SV has committed to this point will have been over-investment.

160. eddythompson80 ◴[] No.44574949{3}[source]
Oh boy. People were expecting nuclear toothbrushes, nuclear school backpacks, nuclear stoves and nuclear fridges, nuclear grills, nuclear plates, nuclear medicine, nuclear sunglasses and nuclear airplanes.

Saying "well, we got 500 nuclear power plants" is like saying "well, we got excellent `npx create-app` style templates from AI. That's pretty huge impact. I don't know a single project post-2030 that didn't start as an AI-scaffolded project. That's pretty huge, dude."

replies(1): >>44580158 #
161. ◴[] No.44575244{3}[source]
162. eddythompson80 ◴[] No.44575275{3}[source]
You're significantly stupider than you think you are.

Notice how I did that too?

163. ben_w ◴[] No.44575992{5}[source]
> Aside: you know "the customer is always right [in matters of taste]"? It's been weirdly difficult getting bosses to understand the brackets part, and HN folks the first part.

Something I struggle to internalise, even though I know it in theory.

Customers can't be told they're wrong, and the parenthetical I've internalised, but for non-taste matters they can often be so very wrong, so often… I know I need to hold my tongue even then owing to having merely nerd-level charisma, but I struggle to… also owing to having merely nerd-level charisma.

(And that's one of three reasons why I'm not doing contract work right now).

164. ben_w ◴[] No.44576066{3}[source]
Yes, and yet the rate of development and deployment is substantially slower than people like me were expecting.

Back in 2009, I was expecting normal people to be able to just buy a new vehicle with no steering wheel required or supplied by 2019, not for a handful of geo-fenced taxis that slowly expanded over the 6 years from 2019 to 2025.

165. oblio ◴[] No.44576172{13}[source]
> And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.

This seems super duper expensive and not really supported by the more reasonably priced Nvidia cards, though. SLI is deprecated, NVLink isn't available everywhere, etc.

replies(1): >>44576381 #
166. oblio ◴[] No.44576206{9}[source]
The thing is, these kinds of optimizations happen all the time. Some of them can be as simple as using a hashmap instead of some home-baked data structure. So what you're describing is not necessarily an LLM-specific improvement (though in your case it is; we can't generalize from it to every migration of a feature to an LLM).

And nothing I've seen about recent GPUs or TPUs, from ANY maker (Nvidia, AMD, Google, Amazon, etc.), says anything about general speedups of 100x. Heck, if you go across multiple generations of these still-new hardware categories, for example Amazon's Inferentia/Trainium, even their own claims (which are quite bold) would probably put the most recent generations at best at 10x the first generations. And as we all know, all vendors exaggerate the performance of their products.

167. Dylan16807 ◴[] No.44576381{14}[source]
No, no, nothing like that.

Every layer of an LLM runs separately and sequentially, and there isn't much data transfer between layers. If you wanted to, you could put each layer on a separate GPU with no real penalty. A single request will only run on one GPU at a time, so it won't go faster than a single GPU with a big RAM upgrade, but it won't go slower either.
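
A minimal sketch of the idea (PyTorch-flavored, toy linear layers standing in for transformer blocks, and assuming a machine with two CUDA GPUs):

    import torch
    import torch.nn as nn

    # Toy stand-in for an LLM: a stack of layers that run strictly one after another.
    layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])

    # First half of the layers on GPU 0, second half on GPU 1.
    devices = [torch.device("cuda:0") if i < len(layers) // 2 else torch.device("cuda:1")
               for i in range(len(layers))]
    for layer, dev in zip(layers, devices):
        layer.to(dev)

    def forward(x: torch.Tensor) -> torch.Tensor:
        # The only cross-GPU traffic is the activation handed from layer to layer,
        # which is tiny compared to the weights themselves.
        for layer, dev in zip(layers, devices):
            x = layer(x.to(dev))
        return x

At any instant only one GPU is busy for a given request, which is why this matches, rather than beats, a single card with twice the RAM.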

replies(1): >>44579300 #
168. elevatortrim ◴[] No.44576561{4}[source]
Most people are using LLMs because they fear it will be the future and that they will miss out if they do not learn it now, even though they are aware they are not more productive; they just can't say that in a business environment.
169. ◴[] No.44577180{4}[source]
170. zulban ◴[] No.44578149{4}[source]
Yes, people say all kinds of things.
replies(1): >>44579459 #
171. oblio ◴[] No.44579300{15}[source]
Interesting, thank you for the feedback, it's definitely worth looking into!
172. thom ◴[] No.44579459{5}[source]
Apparently so.
173. guappa ◴[] No.44579489{4}[source]
> Productivity gains of around 10 to 20 percent in most knowledge work.

Wasn't there a recent study that showed people perceived a 20% increase while the clock showed a 20% decrease?

174. guappa ◴[] No.44579531{6}[source]
I use them to generate obvious AI text that talks badly about AI on LinkedIn and get people riled up!
175. guappa ◴[] No.44579885{3}[source]
Is there still some Indian operator online to guide them through difficult intersections?
176. guappa ◴[] No.44579898{4}[source]
We now have electric kickbikes… they aren't any better looking.
177. guappa ◴[] No.44579917{8}[source]
He was doing neither. He was using a 3rd party API and has no idea what it costs them to actually run it.
178. munksbeer ◴[] No.44580115[source]
> Cherrypicking the stuff that worked in retrospect is stupid, plenty of people swore in the inevitability of some tech with billions in investment, and industry bubbles that look mistimed in hindsight.

But that isn't the argument. The article isn't arguing about something failing or succeeding based on merit, they seem to have already accepted strong AI has "merit" (in the utility sense). The argument is that despite the strong utility incentive, there is a case to be made that it will be overall harmful so we should be actively fighting against it, and it isn't inevitable that it should come to full fruition.

That is very different than VR. No-one was trying to raise awareness of the dangers of VR and fight against it. It just hasn't taken off because we don't really like it as much as people thought we would.

But for the strong AI case, my argument is that it is virtually inevitable. Not in any predestination sense, but purely because the incentives for first past the post are way too strong. There is no way the world is regulating this away when competitive nations exist. If the US tries, China won't, or vice versa. It's an arms race, and in that sense is inevitable.

179. vharish ◴[] No.44580141[source]
What are you on? The only potential is AR? What?!!! The problem with AR is too little innovation and too high a cost. That's not the case with AI: all it needs is computing, not some groundbreaking new technology.
180. moffkalast ◴[] No.44580158{4}[source]
Ngl I do feel a bit robbed about those toothbrushes, where's my uranium battery sonicare that never needs to charge?
181. lblume ◴[] No.44580339{5}[source]
What would a sane government even look like at this point?