
LLM Inevitabilism

(tomrenner.com)
1616 points SwoopsFromAbove | 74 comments
delichon ◴[] No.44567913[source]
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
replies(17): >>44567949 #>>44567951 #>>44567961 #>>44567992 #>>44568002 #>>44568006 #>>44568029 #>>44568031 #>>44568040 #>>44568057 #>>44568062 #>>44568090 #>>44568323 #>>44568376 #>>44568565 #>>44569900 #>>44574150 #
1. NBJack ◴[] No.44567951[source]
Ironically, this is exactly the technique for arguing that the blog mentions.

Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?

replies(12): >>44567966 #>>44567973 #>>44567981 #>>44567984 #>>44567993 #>>44568067 #>>44568093 #>>44568163 #>>44568336 #>>44568442 #>>44568656 #>>44569295 #
2. HPsquared ◴[] No.44567966[source]
1. The Segway had very low market penetration but a lot of PR. LLMs and diffusion models have had massive organic growth.

2. Segways were just ahead of their time: portable lithium-ion powered urban personal transportation is getting pretty big now.

replies(3): >>44568065 #>>44568101 #>>44568795 #
3. godelski ◴[] No.44567973[source]
I think about the Segway a lot. It's a good example. Man, what a wild time. Everyone was so excited and it was held in mystery for so long. People had tried it in secret and raved about it on television. Then... they showed it... and... well...

I got to try one once. It was very underwhelming...

replies(2): >>44568167 #>>44568210 #
4. zulban ◴[] No.44567981[source]
> Remember ...

No, I don't remember it like that. Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?

You don't. I love the argument ad absurdum more than most but you've taken it a teensy bit too far.

replies(2): >>44568745 #>>44568869 #
5. antonvs ◴[] No.44567984[source]
That was marketing done before the nature of the device was known. The situation with LLMs is very different, really not at all comparable.
6. delichon ◴[] No.44567993[source]
I remember the Segway hype well. And I think AI is to Segway as nuke is to wet firecracker.
replies(1): >>44568354 #
7. jdiff ◴[] No.44568065[source]
Massive, organic, and unprofitable. And as soon as it's no longer free, as soon as the VC funding can no longer sustain it, an enormous fraction of usage and users will all evaporate.

The Segway always had a high barrier to entry. Currently for ChatGPT you don't even need an account, and everyone already has a Google account.

replies(2): >>44568094 #>>44568113 #
8. johnfn ◴[] No.44568067[source]
Oh yeah I totally remember Segway hitting a 300B valuation after a couple of years.
9. ◴[] No.44568093[source]
10. lumost ◴[] No.44568094{3}[source]
The free tiers might be tough to sustain, but it’s hard to imagine that they are that problematic for OpenAI et al. GPUs will become cheaper, and smaller/faster models will reach the same level of capability.
replies(2): >>44572152 #>>44573321 #
11. DonHopkins ◴[] No.44568101[source]
That's funny, I remember seeing "IT" penetrate Mr. Garrison.

https://www.youtube.com/watch?v=SK362RLHXGY

Hey, it still beats what you go through at the airports.

12. etaioinshrdlu ◴[] No.44568113{3}[source]
This is wrong because LLMs have been cheap enough to run profitably on ads alone (search-style or banner-ad-style) for over two years now. And they are getting cheaper over time for the same quality.

It is even cheaper to serve an LLM answer than call a web search API!

Zero chance all the users evaporate unless something much better comes along, or the tech is banned, etc...

replies(1): >>44568161 #
13. scubbo ◴[] No.44568161{4}[source]
> LLMs are cheap enough to run profitably on ads alone

> It is even cheaper to serve an LLM answer than call a web search API

These, uhhhh, these are some rather extraordinary claims. Got some extraordinary evidence to go along with them?

replies(2): >>44568184 #>>44568437 #
14. haiku2077 ◴[] No.44568163[source]
> Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?

Counterpoint: That's how I feel about ebikes and escooters right now.

Over the weekend, I needed to go to my parents' place for brunch. I put on my motorcycle gear, grabbed my motorcycle keys, went to my garage, and as I was about to pull out my BMW motorcycle (MSRP ~$17k), I looked at my Ariel ebike (MSRP ~$2k) and decided to ride it instead. For short trips they're a game-changing mode of transport.

replies(1): >>44568359 #
15. anovikov ◴[] No.44568167[source]
The problem with the Segway was that it was made in the USA and thus was absurdly, laughably expensive: it cost as much as a good used car, and the top versions as much as a basic new car. Once a small group of rich people had all bought one, it was over. China simply wasn't in a position at the time to copycat and mass-produce it cheaply, and hype cycles usually don't repeat, so by the time it could, it was too late. If it had been invented 10 years later we'd all be riding $1000-$2000 Segways today.
replies(1): >>44568206 #
16. haiku2077 ◴[] No.44568184{5}[source]
https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch..., also note the "objections" section

Anecdotally, the locally-run AI software I develop has gotten more than 100x faster in the past year thanks to hardware advancements and Moore's law

replies(2): >>44568256 #>>44568289 #
17. haiku2077 ◴[] No.44568206{3}[source]
> If it was invented 10 years later we'd all ride $1000-$2000 Segways today.

I chat with the guy who works nights at my local convenience store about our $1000-2000 e-scooters. We both use them more than we use our cars.

18. positron26 ◴[] No.44568210[source]
I'm going to hold onto the Segway as an actual instance of hype the next time someone calls LLMs "hype".

LLMs have hundreds of millions of users. I just can't stress how insane this was. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never so readily embraced something so fast.

Calling LLMs "hype" is an example of cope, judging facts based on what is hoped to be true even in the face of overwhelming evidence or even self-evident imminence to the contrary.

I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still feign to be debating shadows on the wall. I just want to be up front. It's not hype. Few people calling "hype" can believe that this is hype and anyone who does believes it simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord is the "hype" bandwagon being dishonest these days.

replies(3): >>44568661 #>>44573203 #>>44574702 #
19. oblio ◴[] No.44568256{6}[source]
What hardware advancement? There's hardly any these days... Especially not for this kind of computing.
replies(2): >>44568338 #>>44568593 #
20. ◴[] No.44568289{6}[source]
21. ako ◴[] No.44568336[source]
Trend vs. single initiative. One company failed, but overall personal electric transportation is booming in cities. AI is the future, but along the way many individual companies doing AI will fail. Cars are here to stay, but many individual car companies have failed and will fail; same for phones: everyone has a mobile phone, but Nokia still failed…
replies(1): >>44568588 #
22. Sebguer ◴[] No.44568338{7}[source]
Have you heard of TPUs?
replies(2): >>44568390 #>>44568668 #
23. andsoitis ◴[] No.44568354[source]
> AI is to Segway as nuke is to wet firecracker

wet firecracker won’t kill you

24. withinboredom ◴[] No.44568359[source]
Even for longer trips if your city has the infrastructure. I moved to the Netherlands a few years ago, that infrastructure makes all the difference.
replies(1): >>44568393 #
25. oblio ◴[] No.44568390{8}[source]
Yeah, I'm a regular Joe. How do I get one and how much does it cost?
replies(1): >>44568723 #
26. andsoitis ◴[] No.44568393{3}[source]
Flatness helps
replies(4): >>44568649 #>>44568961 #>>44569334 #>>44569382 #
27. etaioinshrdlu ◴[] No.44568437{5}[source]
I've operated a top ~20 LLM service for over 2 years, very comfortably profitably with ads. As for the pure costs you can measure the cost of getting an LLM answer from say, OpenAI, and the equivalent search query from Bing/Google/Exa will cost over 10x more...
replies(3): >>44568690 #>>44570293 #>>44571469 #
28. conradev ◴[] No.44568442[source]
ChatGPT has something like 300 million monthly users after less than three years, and I don't think Segway has sold a million scooters, even though their new product lines are sick.

I can totally go about my life pretending Segway doesn't exist, but I just can't do that with ChatGPT, hence why the author felt compelled to write the post in the first place. They're not writing about Segway, after all.

replies(1): >>44573599 #
29. leoedin ◴[] No.44568588[source]
Nobody is riding Segways around any more, but a huge percentage of people are riding e-bikes and scooters. It’s fundamentally changed transportation in cities.
replies(1): >>44568769 #
30. haiku2077 ◴[] No.44568593{7}[source]
Specifically, I upgraded my mac and ported my software, which ran on Windows/Linux, to macos and Metal. Literally >100x faster in benchmarks, and overall user workflows became fast enough I had to "spend" the performance elsewhere or else the responses became so fast they were kind of creepy. Have a bunch of _very_ happy users running the software 24/7 on Mac Minis now.
replies(1): >>44576206 #
31. haiku2077 ◴[] No.44568649{4}[source]
My parents live on a street steeper than San Francisco (we live along the base of a mountain range), my ebike eats that hill for lunch
32. ascorbic ◴[] No.44568656[source]
The Segway hype was before anyone knew what it was. As soon as people saw the Segway it was obvious it was BS.
33. Nevermark ◴[] No.44568661{3}[source]
> I know people calling "hype" are motivated by something.

You had me until you basically said, "and for my next trick, I am going to make up stories".

Projecting is what happens when someone doesn't understand some other people, and from that somehow concludes that they do understand those other people, and feels the need to tell everyone what they now "know" about those people, that even those people don't know about themselves.

Stopping at "I don't understand those people." is always a solid move. Alternately, consciously recognizing "I don't understand those people", followed up with "so I am going to ask them to explain their point of view", is a pretty good move too.

replies(1): >>44570135 #
34. Dylan16807 ◴[] No.44568668{8}[source]
Sort of a hardware advancement. I'd say it's more of a sidegrade between different types of well-established processor. Take out a couple cores, put in some extra wide matrix units with accumulators, watch the neural nets fly.

But I want to point out that going from CPU to TPU is basically the opposite of a Moore's law improvement.

35. clarinificator ◴[] No.44568690{6}[source]
Profitably covering R&D or profitably using the subsidized models?
replies(1): >>44579917 #
36. Dylan16807 ◴[] No.44568723{9}[source]
If your goal is "a TPU", then you buy a Mac or anything labeled Copilot+. You'll need about $600. RAM is likely to be your main limit.

(A mid to high end GPU can get similar or better performance but it's a lot harder to get more RAM.)

replies(2): >>44568946 #>>44569079 #
37. thom ◴[] No.44568745[source]
People genuinely did suggest that we were going to redesign our cities because of the Segway. The volume and duration of the hype were smaller (especially once people saw how ugly the thing was) but it was similarly breathless.
replies(2): >>44578149 #>>44579898 #
38. ako ◴[] No.44568769{3}[source]
I recently saw someone riding a Segway, but it was an e-bike: https://store.segway.com/ebike
39. lmm ◴[] No.44568795[source]
> LLMs and diffusion models have had massive organic growth.

I haven't seen that at all. I've seen a whole lot of top-down AI usage mandates, and every time what sounds like a sensible positive take comes along, it turns out to have been written by someone who works for an AI company.

40. Jensson ◴[] No.44568869[source]
> Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?

LLMs are more useful than the Segway, but they can still be overhyped because the hype is so much larger. So it's comparable: as you say, LLMs are much more hyped, but that doesn't mean they can't be overhyped.

replies(1): >>44569520 #
41. haiku2077 ◴[] No.44568946{10}[source]
$500 if you catch a sale at Costco or Best Buy!
42. Qwertious ◴[] No.44568961{4}[source]
Ebikes really help on hills. As nice as ebikes on flat land are, they improve hills so much more.
43. oblio ◴[] No.44569079{10}[source]
I want something I can put in my own PC. GPUs are utterly insane in pricing, since for the good stuff you need at least 16GB but probably a lot more.
replies(1): >>44569167 #
44. Dylan16807 ◴[] No.44569167{11}[source]
9060 XT 16GB, $360

5060 Ti 16GB, $450

If you want more than 16GB, that's when it gets bad.

And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.

replies(1): >>44576172 #
45. petesergeant ◴[] No.44569295[source]
> Ironically, this is exactly the technique for arguing that the blog mentions.

So? The blog notes that if something is inevitable, then the people arguing against it are lunatics, and so if you can frame something as inevitable then you win the rhetorical upper-hand. It doesn't -- however -- in any way attempt to make the argument that LLMs are _not_ inevitable. This is a subtle straw man: the blog criticizes the rhetorical technique of inevitabilism rather than engaging directly with whether LLMs are genuinely inevitable or not. Pointing out that inevitability can be rhetorically abused doesn't itself prove that LLMs aren't inevitable.

46. pickledoyster ◴[] No.44569334{4}[source]
Infrastructure helps more. I live in a hilly city and break a mild sweat pedaling up a hill to get home from work (no complaints, it's good cardio). e-scooters and bikes - slowly - get up the hills too, but it's a major difference (especially for scooters) doing this up on an old bumpy sidewalk vs an asphalt bike path
replies(1): >>44574375 #
47. rightbyte ◴[] No.44569382{4}[source]
In flat landscapes the e in ebike is superfluous.
replies(2): >>44570265 #>>44570910 #
48. brulard ◴[] No.44569520{3}[source]
I get immense value out of LLMs already, so it's hard for me to see them as overhyped. But I get how some people feel that way when others start talking about AGI or claiming we're close to becoming the inferior species.
49. positron26 ◴[] No.44570135{4}[source]
> so I am going to ask them to explain their point of view

In times when people are being more honest. There's a huge amount of perverse incentive to chase internet points or investment or whatever right now. You don't get honest answers without reading between the lines in these situations.

It's important to do because after a few rounds of battleship, when people get angry, they slip something out like, "Elon Musk" or "big tech" etc and you can get a feel that they're angry that a Nazi was fiddling in government etc, that they're less concerned about overblown harm from LLMs and in fact more concerned that the tech will wind up excessively centralized, like they have seen other winner-take-all markets evolve.

Once you get people to say what they really believe, one way or another, you can fit actual solutions in place instead of just short-sighted reactions that tend to accomplish nothing beyond making a lot of noise along the way to the same conclusion.

50. walthamstow ◴[] No.44570265{5}[source]
It's not superfluous at all. It's been 30C+ in flat London for weeks and my ebike means I arrive at work unflustered and in my normal clothes. There are plenty of other benefits than easier hills.
replies(1): >>44572090 #
51. johnecheck ◴[] No.44570293{6}[source]
So you don't have any real info on the costs. The question is what OpenAI's profit margin is here, not yours. The theory is that these costs are subsidized by a flow of money from VCs and big tech as they race.

How cheap is inference, really? What about 'thinking' inference? What are the prices going to be once growth starts to slow and investors start demanding returns on their billions?

replies(2): >>44570890 #>>44573233 #
52. jsnell ◴[] No.44570890{7}[source]
Every indication we have is that pay-per-token APIs are not subsidized or even break-even, but have very high margins. The market dynamics are such that subsidizing those APIs wouldn't make much sense.

The unprofitability of the frontier labs is mostly due to them not monetizing the majority of their consumer traffic at all.

53. haiku2077 ◴[] No.44570910{5}[source]
Only if your goal is to transport yourself. I use my ebike for groceries, typically I'll have the motor in the lowest power setting on the way to the store, then coming back with cargo I'll have the motor turned up. I can bring back heavy bulk items that would have been painful with a pedal bike.
54. throwawayoldie ◴[] No.44571469{6}[source]
So you're not running an LLM, you're running a service built on top of a subsidized API.
replies(1): >>44574117 #
55. rightbyte ◴[] No.44572090{6}[source]
Ye I might have been trying a bit too much to be a bit cocky.
56. throwawayoldie ◴[] No.44572152{4}[source]
[citation needed]
replies(1): >>44573372 #
57. spjt ◴[] No.44573203{3}[source]
> LLMs have hundreds of millions of users. I just can't stress how insane this was. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never so readily embraced something so fast.

Maybe it's more like Pogs.

58. etaioinshrdlu ◴[] No.44573233{7}[source]
It would be profitable even if we self-hosted the LLMs, which we've done. The only thing subsidized is the training costs. So maybe people will one day stop training AI models.
59. ◴[] No.44573321{4}[source]
60. jdiff ◴[] No.44573372{5}[source]
Eh, I kinda see what they're saying. They haven't become cheaper at all, but GPUs have increased in performance, and the amount of performance you get for each dollar spent has increased.

Relative to its siblings, things have gotten worse. A GTX 970 could hit 60% of the performance of the full Titan X at 35% of the price. A 5070 hits 40% of a full 5090 for 27% of the price. That's overall less series-relative performance you're getting, for an overall increased price, by about $100 when adjusting for inflation.

But if you have a fixed performance baseline you need to hit, as long as tech keeps improving, things will eventually get cheaper at that baseline. As long as you aren't also trying to improve in a way that moves the baseline up. Which so far has been the only consistent MO of the AI industry.
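A quick back-of-the-envelope check of those ratios, using only the percentages quoted above (a sketch, not data from any spec sheet):

```python
# Series-relative value: (fraction of flagship performance) / (fraction of flagship price).
def relative_value(perf_frac, price_frac):
    return perf_frac / price_frac

# Figures from the comment above.
gtx_970 = relative_value(0.60, 0.35)   # GTX 970 vs. Titan X
rtx_5070 = relative_value(0.40, 0.27)  # RTX 5070 vs. RTX 5090

print(round(gtx_970, 2))   # 1.71 -- the 970 delivered ~1.7x its price share in performance
print(round(rtx_5070, 2))  # 1.48 -- the 5070 delivers ~1.5x, a worse series-relative deal
```

So by these numbers the mid-range card has indeed lost ground relative to its flagship, even before adjusting for inflation.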

61. causal ◴[] No.44573599[source]
Doubting LLMs because Segway was also trendy yet failed is so funny
replies(2): >>44574127 #>>44577180 #
62. ◴[] No.44574117{7}[source]
63. conradev ◴[] No.44574127{3}[source]
Genuinely
64. eddythompson80 ◴[] No.44574375{5}[source]
Flatness helps more.
65. obirunda ◴[] No.44574702{3}[source]
It's an interesting comparison, because Segway really didn't have any real users or explosive growth, so it was certainly hype. It was also hardware with a large cost. LLMs are indeed more akin to Google Search where adoption is relatively frictionless.

I think the core issue is separating the perception of value versus actual value. There have been a couple of studies to this effect, pointing to a misalignment towards overestimating value and productivity boosts.

One reason this happens, imo, is that we sequester a good portion of the cognitive load of our thinking to the latter parts of the process, so when we are evaluating the solution we are primed to think we have saved time when the solution is sufficiently correct; or if we have to edit or reposition it by re-rolling, we don't account for the time spent because we may feel we didn't do anything.

I feel like this type of discussion is effectively a top topic every day. To me, the hype is not in the utility it does have but in its future utility. The hype is based on the premise that these tools and their next iteration can and will make all knowledge-based work obsolete, but crucially, will yield value in areas of real need: cancer, aging, farming, climate, energy, etc.

If these tools stop short of those outcomes, then the investment all of SV has committed to it at this point will have been over invested and

66. oblio ◴[] No.44576172{12}[source]
> And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.

This seems super duper expensive and not really supported by the more reasonably priced Nvidia cards, though. SLI is deprecated, NVLink isn't available everywhere, etc.

replies(1): >>44576381 #
67. oblio ◴[] No.44576206{8}[source]
The thing is, these kinds of optimizations happen all the time. Some of them can be as simple as using a hashmap instead of some home-baked data structure. So what you're describing is not necessarily some LLM specific improvement (though in your case it is, we can't generalize to every migration of a feature to an LLM).

And nothing I've seen about recent GPUs or TPUs from ANY maker (Nvidia, AMD, Google, Amazon, etc.) says anything about general speedups of 100x. Heck, even across multiple generations of these still very new hardware categories, for example Amazon's Inferentia/Trainium, the vendors' own claims (which are quite bold) would probably put the most recent generations at best at 10x the first generations. And as we all know, all vendors exaggerate the performance of their products.

68. Dylan16807 ◴[] No.44576381{13}[source]
No, no, nothing like that.

Every layer of an LLM runs separately and sequentially, and there isn't much data transfer between layers. If you wanted to, you could put each layer on a separate GPU with no real penalty. A single request will only run on one GPU at a time, so it won't go faster than a single GPU with a big RAM upgrade, but it won't go slower either.
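A toy sketch of that layer-by-layer split, in pure Python with "devices" as mere labels (real frameworks, e.g. PyTorch, move each layer's weights to its assigned GPU the same way; the scalar "layers" here are hypothetical stand-ins for matmuls):

```python
# Each "layer" multiplies its input by a scalar weight (stand-in for a matmul).
def make_layer(weight):
    return lambda x: x * weight

layers = [make_layer(w) for w in (2, 3, 5, 7)]  # a 4-layer "model"
devices = ["gpu0", "gpu1"]

# Assign the first half of the layers to gpu0, the second half to gpu1.
placement = {i: devices[i * len(devices) // len(layers)] for i in range(len(layers))}

def forward(x):
    # Layers run strictly one after another; only the small activation tensor
    # crosses the device boundary, so the split adds little overhead.
    for i, layer in enumerate(layers):
        # (real code would move `x` to placement[i] here before applying the layer)
        x = layer(x)
    return x

print(placement)   # {0: 'gpu0', 1: 'gpu0', 2: 'gpu1', 3: 'gpu1'}
print(forward(1))  # 2*3*5*7 = 210
```

As the comment says, a single request still runs on one device at a time, so this buys you capacity (more RAM for weights), not single-request speed.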

replies(1): >>44579300 #
69. ◴[] No.44577180{3}[source]
70. zulban ◴[] No.44578149{3}[source]
Yes, people say all kinds of things.
replies(1): >>44579459 #
71. oblio ◴[] No.44579300{14}[source]
Interesting, thank you for the feedback, it's definitely worth looking into!
72. thom ◴[] No.44579459{4}[source]
Apparently so.
73. guappa ◴[] No.44579898{3}[source]
We now have electric kickbikes… they aren't any better looking.
74. guappa ◴[] No.44579917{7}[source]
He was doing neither. He was using a 3rd party API and has no idea what it costs them to actually run it.