True; but how is that not expected?
We have more, and more efficient, communication than at any point in history, and this is a software solution with a very low bar to entry in terms of building blocks and theory.
Software should be expected to move faster and faster.
I’m not sure who is wishing it away. No one wanted to wish away search engines, or dictionaries or advice from people who repeat things they read.
It’s panic top to bottom on this topic. Surely there are some adults around that can just look at a new thing for what it is now and not what it could turn into in a fantasy future?
Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? The one that turned out to be the self-balancing scooter known as the Segway?
But if you told them about social media, I think the story would be different. Some would think it would be great, some would see it as dystopian, but neither would be right.
We don't have to imagine, though. All three of these things have captured people's imaginations since before the 50's. It's just... AI has always been closer to imagined concepts of social media than to highly advanced communication devices.
2. Segways were just ahead of their time: portable lithium-ion powered urban personal transportation is getting pretty big now.
I got to try one once. It was very underwhelming...
No, I don't remember it like that. Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?
You don't. I love the argument ad absurdum more than most but you've taken it a teensy bit too far.
--- start quote ---
Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.
--- end quote ---
Mass adoption is not inevitable. Everyone will drop this "faster harder" tech like a hot potato when (not if) it fails to result in meaningful profits.
Oh, there will be forced mass adoption alright. Have you tried Gemini? Have you? Gemini? Have you tried it? HAVE YOU? HAVE YOU TRIED GEMINI?!!!
Tesla stock has been riding on the self-driving robo-taxi meme for a decade now? How many Teslas are earning passive income while the owner is at work?
Cherry-picking the stuff that worked in retrospect is stupid. Plenty of people swore by the inevitability of technologies backed by billions in investment, and plenty of industry bubbles only look mistimed in hindsight.
It would be utopian, like how people thought of social media in the oughts. It's a common pattern through human history. People lack the imagination to think of unintended side effects. Nuclear physics leading to nuclear weapons. Trains leading to more efficient genocide. Media distribution and printing press leading to new types of propaganda and autocracies. Oil leading to global warming. IT leading to easy surveillance. Communism leading to famine.
Some of that utopianism is wilful, created by the people with a self-interested motive in seeing that narrative become dominant. But most of it is just a lack of imagination. Policymakers taking the path of local least resistance, seeking to locally (in a temporal sense) appease, avoiding high-risk high-reward policy gambits that do not advance their local political ambitions. People being satisfied with easy just-so stories rather than humility and a recognition of the complexity and inherent uncertainty of reality.
AI, and especially ASI, will probably be the same. The material upsides are obvious. The downsides are harder to imagine and more speculative. Most likely, society will be presented with a fait accompli at a future date, where once the downsides are crystallized and real, it's already too late.
I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.
Given the absolutely insane hard resource requirements for these systems that are kind of useful, sometimes, in very limited contexts, I don’t believe its adoption is inevitable.
Maybe one of the reasons for that is that I work in the energy industry and broadly in climate tech. I am painfully aware of how much we need to do with energy in the coming decades to avoid civilizational collapse, and how difficult all of that will be, without adding all of these AI data centers into the mix. Without several breakthroughs in one or more hard engineering disciplines, the mass adoption of AI is not currently physically possible.
The Segway always had a high barrier to entry. Currently for ChatGPT you don't even need an account, and everyone already has a Google account.
That is what I find so wild about the current conversation and debate. I have Claude Code toiling away building my personal organization software right now that uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
https://www.youtube.com/watch?v=SK362RLHXGY
Hey, it still beats what you go through at the airports.
It is even cheaper to serve an LLM answer than call a web search API!
Zero chance all the users evaporate unless something much better comes along, or the tech is banned, etc...
When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don’t think we’ve seen any evidence of it, in fact we’ve seen the opposite.
> It is even cheaper to serve an LLM answer than call a web search API
These, uhhhh, these are some rather extraordinary claims. Got some extraordinary evidence to go along with them?
Your ending sentence is certainly correct: we aren't imagining the effects of AI enough, but all of your examples are not only unconvincing, they're easy ways to ignore what downsides of AI there might be. People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.
Counterpoint: That's how I feel about ebikes and escooters right now.
Over the weekend, I needed to go to my parents' place for brunch. I put on my motorcycle gear, grabbed my motorcycle keys, went to my garage, and as I was about to pull out my BMW motorcycle (MSRP ~$17k), I looked at my Ariel ebike (MSRP ~$2k) and decided to ride it instead. For short trips they're a game-changing mode of transport.
People like you grumbled when their early car broke down in the middle of a dirt road in the boondocks and they had to eat grass and shoot rabbits until the next help arrived. "My horse wouldn't have broken down", they said.
Technologies mature over time.
Anecdotally, the locally-run AI software I develop has gotten more than 100x faster in the past year thanks to hardware advancements and Moore's law.
It is really the same kind of thing... but the model is "smarter" than a junior engineer usually. You can say something like "hmm.. I think an event bus makes sense here" and the LLM will do it in 5 seconds. The problem is that there are certain behavioral biases that require active reminding (though I think some MCP integration work might resolve most of them, but this is just based on the current Claude Code and Opus/Sonnet 4 models).
LLMs have hundreds of millions of users. I can't stress enough how insane this is. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never so readily embraced something so fast.
Calling LLMs "hype" is an example of cope: judging facts based on what one hopes to be true, even in the face of overwhelming evidence, or self-evident imminence, to the contrary.
I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still feign to be debating shadows on the wall. I just want to be up front. It's not hype. Few of the people calling "hype" can actually believe that this is hype, and anyone who genuinely does simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord is the "hype" bandwagon being dishonest these days.
They did. I am talking about the physicists who preceded these particular physicists.
> And Communism didn't lead to famine - Soviet and Maoist policies did. Communism was immaterial to that.
The particular brand of agrarian communism and agricultural collectivization resulting from this subtype of communism did directly cause famine. The utopian revolutionaries did not predict this outcome beforehand.
> People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.
But that is one plausible outcome. Overall a net good, but with significant unintended consequences and high potential for misuse that is not easily predictable to people working on the technology today.
> It would be utopian
People wrote about this. We know the answer! I stated this, so I'm caught off guard as it seems you are responding to someone else, but at the same time, to me. London Times, The Naked Sun, Neuromancer, The Shockwave Rider, Stand on Zanzibar, or The Machine Stops. These all have varying degrees of ideas that would remind you of social media today.
Are they all utopian?
You're right, the downsides are harder to imagine. Yet, it has been done. I'd also argue that it is the duty of any engineer. It is so easy to make weapons of destruction while getting caught up in the potential benefits and the interesting problems being solved. Evil is not solely created by evil. Often, evil is created by good men trying to do good. If only doing good was easy, then we'd have so much more good. But we're human. We chose to be engineers, to take on these problems. To take on challenging tasks. We like to gloat about how smart we are? (We all do, let's admit it. I'm not going to deny it) But I'll just leave with a quote: "We choose to go to the Moon in this decade and do the other things not because they are easy, but because they are hard"
The types of tasks I have been putting Claude Code to work on are iterative changes on a medium complexity code base. I have an extensive Claude.md. I write detailed PRDs. I use planning mode to plan the implementation with Claude. After a bunch of iteration I end up with nicely detailed checklists that take quite a lot of time to develop but look like a decent plan for implementation. I turn Claude (Opus) loose and religiously babysit it as it goes through the implementation.
Less than 50% of the time I end up with something that compiles. Despite spending hundreds of thousands of tokens while Claude desperately throws stuff against the wall trying to make it work.
I end up spending as much time getting through this process as it would have taken to just write it myself, AND then I do a meticulous line-by-line review where I typically find quite a lot to fix. I really can't form a strong opinion about the efficiency of this whole thing. It's possible this is faster. It's possible that it's not. It's definitely very high variance.
I am getting better at pattern matching on things AI will do competently. But it's not a long list and it's not much of the work I actually do in a day. Really the biggest benefit is that I end up with better documentation because I generated all of that to try and make the whole thing actually work in the first place.
Either I am doing something wrong, the work that AI excels at looks very different than mine, or people are just lying.
The first principle is that you must not fool yourself and you are the easiest person to fool.
There is something of a balance. Certainly, Social Media does some good and has the potential to do more. But also, it certainly has been abused. Maybe so much that it has become difficult to imagine it ever being good. We need optimism. Optimism gives us hope. It gives us drive.
But we also need pessimism. It lets us be critical. It gives us direction. It tells us what we need to fix.
But unfettered optimism is like going on a drive with no direction. Soon you'll fall off a cliff. And unfettered pessimism won't even get you out the door. What's the point?
You need both if you want to see and explore the world. To build a better future. To live a better life. To... to... just be human. With either extreme, you're just a shell.
went to some tech meetups earlier this year and when the topic came up, one of the organizers politely commented to me that pretty much everything said about ai has been said. the only discussions worth having are introductions to the tools then leaving an individual to decide for themselves whether or not its useful to them. those introductions should be brief and discussions of the applications are boring
back in the bar scene days discussing work, religion, and politics were social faux pas. im sensing ai is on that list now
"AI" was introduced as an impressive parlor trick. People like to play around, so it quickly got popular. Then companies started force-feeding it by integrating it into every existing product, including the gamification and bureaucratization of programming.
Most people except for the gamers and plagiarists don't want it. Games and programming fads can fall out of fashion very fast.
As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widely spread technology. They can become even better, but even if they don't there are plenty of use cases for them.
VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but still weren't at all.
That's a big difference
I'm kind of surprised. Certainly there is a locality bias and an action bias in the model by default, which can be partially mitigated by claude.md instructions (though it isn't great at following them if you have too much instruction there). This can lead to hacky solutions without additional meta-process.
I've been experimenting with different ways for the model to get the necessary context to understand where the code should live and the patterns it should use.
I have used planning mode only a little (I was just out of the country for 3 weeks and not coding, and it had only just become available before I left, so it wasn't a requirement in my past experience).
The only BIG thing I want from Claude Code right now is a "Yes, and.." for accepting code edits where I can steer the next step while accepting the code.
> You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb?
I think you have missed an important part of history. That era changed physics. That era changed physicists. It was a critical turning point. Many of those people got lost in the work. The thrill of discovery, combined with the fear of war and an enemy as big as imagination. Many of those who built the bomb became some of the strongest opponents. They were blinded by their passion. They were blinded by their fears. But once the bomb was built, once the bomb was dropped, it was hard to stay blind.
I say that this changed physicists, because you can't get a university degree without learning about this. They talk about the skeletons in the closet. They talk about how easy it is to fool yourself. Maybe it was the war and the power of the atom. Maybe it was the complexity of "new physics". Maybe it happened because of the combination.
But what I can tell you, is that it became a very important lesson. One that no one wants to repeat:
it is not through malice, but through passion and fear that weapons of mass destruction are made.
This is why I've been extremely suspicious of the monopolisation of LLM services by a single business/country. They may well be losing billions on training huge models now. But once the average work performance shifts up sufficiently so as to leave the "non AI enhanced" by the wayside, we will see huge price increases and access to these AI tools being used as geopolitical leverage.
Oh, you do not want to accept "the deal" where our country can do anything in your market and you can do nothing? Perhaps we put export controls on GPT5 against your country. And from then on it's as if they disconnected you from the Internet.
For this reason alone local AI is extremely important and certain people will do anything possible to lock it in a datacenter (looking at you Nvidia).
I can totally go about my life pretending Segway doesn't exist, but I just can't do that with ChatGPT, hence why the author felt compelled to write the post in the first place. They're not writing about Segway, after all.
We use probably all of Google's products at work, and sadly the comment is not even a joke. Every single product and page still shows a Gemini upsell even after you've already dismissed it fifteen times
If they don't become better we are left with a big but not huge change. Productivity gains of around 10 to 20 percent in most knowledge work. That's huge for sure, but in my eyes the internet and the PC revolution before it were more transformative than that. If LLMs become better, get so good they replace huge chunks of knowledge workers, and then go out into the physical world, then yeah... that would be the fastest transformation of the economy in history, imo.
When I point it at my projects though, the outcomes are much less reliable and often quite frustrating.
https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
This one claims 20m paying subscribers, which is not a lot. Mr. Beast has 60m views on a single video.
A lot of weekly active users will use it once a week, and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.
With the smartphone in 2009, the web in the late 90s or LLMs now, there's no element of "trust me, bro" needed. You can try them yourself and see how useful they are. You didn't need to be a tech visionary to predict the future when you're buying stuff from Amazon in the 90s, or using YouTube or Uber on your phone in 2009, or using Claude Code today. I'm certainly no visionary, but both the web and the smartphone felt different from everything else at the time, and AI feels like that now.
As someone who doesn't actually want or use AI, I think you are extremely wrong here. While people don't necessarily care about the forced integrations of AI into everything, people by and large want AI massively.
Just look at how much it is used to do your homework, or replaces Wikipedia & Google in day-to-day discussions. How much it is used to "polish" emails (spew better-sounding BS). How much it is used to generate meme images instead of trawling the web for them. AI is very much a regular part of day-to-day life for huge swaths of the population. Not necessarily in economically productive ways, but still very much embedded and unlikely to be removed - especially since its current capabilities are already good enough for these purposes; they don't need smarter AI, just to keep it cheap enough.
However, if you can quickly read code, see and succinctly communicate the more optimal solution, you can easily 10x-20x your ability to code.
I'm beginning to believe it may primarily come down to having the vocabulary and linguistic ability to succinctly and clearly state the gaps in the code.
You had me until you basically said, "and for my next trick, I am going to make up stories".
Projecting is what happens when someone doesn't understand some other people, and from that somehow concludes that they do understand those other people, and feels the need to tell everyone what they now "know" about those people, that even those people don't know about themselves.
Stopping at "I don't understand those people." is always a solid move. Alternately, consciously recognizing "I don't understand those people", followed up with "so I am going to ask them to explain their point of view", is a pretty good move too.
But I want to point out that going from CPU to TPU is basically the opposite of a Moore's law improvement.
(A mid to high end GPU can get similar or better performance but it's a lot harder to get more RAM.)
I haven't seen that at all. I've seen a whole lot of top-down AI usage mandates, and every time what sounds like a sensible positive take comes along, it turns out to have been written by someone who works for an AI company.
LLMs are more useful than the Segway, but they can still be overhyped because the hype is so much larger. So it's comparable: LLMs being, as you say, so much more hyped doesn't mean they can't be overhyped.
The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment in attempting to get people addicted so that they can siphon money off of them with subscription plans or forcing them to pay for each use. The worst people you can think of on every C-suite team force-push it down our throats because they use it to write an email every now and then.
The places LLMs have achieved widespread adoption are in environments abusing the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals at massive societal cost, by true believers who are the worst coders you can imagine shoveling shit into codebases by the truckful, and by scammers realizing this is the new gold rush.
Musk's 2014/2015 promises are arguably delivered, here in 2025 (took a little more than '1 month' tho), but the promises starting in 2016 are somewhere between 'undelivered' and 'blatant bullshit'.
Do you believe you've managed to solve the most common wisdom in the software engineering industry? That reading code is much harder than writing it? If you have, then you should write up a white paper for the rest of us to follow.
Because every time I've seen someone say this, it's from someone that doesn't actually read the code they're reviewing.
5060 Ti 16GB, $450
If you want more than 16GB, that's when it gets bad.
And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.
So? The blog notes that if something is inevitable, then the people arguing against it are lunatics, and so if you can frame something as inevitable then you win the rhetorical upper-hand. It doesn't -- however -- in any way attempt to make the argument that LLMs are _not_ inevitable. This is a subtle straw man: the blog criticizes the rhetorical technique of inevitabilism rather than engaging directly with whether LLMs are genuinely inevitable or not. Pointing out that inevitability can be rhetorically abused doesn't itself prove that LLMs aren't inevitable.
No, they wouldn't. The '80s saw obscene investment in AI (then "expert systems") and yet nobody's mom was using it.
> It's hard to compare a business attempting to be financially stable and a business attempting hyper-growth through freebies.
It's especially hard to compare since it's often those financially stable businesses doing said investments (Microsoft, Google, etc).
---
Aside: you know "the customer is always right [in matters of taste]"? It's been weirdly difficult getting bosses to understand the brackets part, and HN folks the first part.
But it's NOT a person when it's time to 'tell the AI' that you have its puppy in a box filled with spikes and for every mistake it makes you will stab it with the spikes a little more and tell it the reactions of the puppy. That becomes normal, if it elicits a slightly more desperate 'person' out of the AI for producing work.
At which point the meat-people who've taught themselves to normalize this workflow can decide that opponents of AI are clearly so broken in the head as to constitute non-player characters (see: useful memes to that effect) and therefore are NOT people: and so, it would be good to get rid of the non-people muddying up the system (see: human history)
Told you it gets worse. And all the while, the language models are sort of blameless, because there's nobody there. Torturing an LLM to elicit responses is harming a person, but it's the person constructing the prompts, not a hypothetical victim somewhere in the clouds of nobody.
All that happens is a human trains themselves to dehumanize, and the LLM thing is a recipe for doing that AT SCALE.
Great going, guys.
In times when people are being more honest. There's a huge amount of perverse incentive to chase internet points or investment or whatever right now. You don't get honest answers without reading between the lines in these situations.
It's important to do because after a few rounds of battleship, when people get angry, they slip something out like, "Elon Musk" or "big tech" etc and you can get a feel that they're angry that a Nazi was fiddling in government etc, that they're less concerned about overblown harm from LLMs and in fact more concerned that the tech will wind up excessively centralized, like they have seen other winner-take-all markets evolve.
Once you get people to say what they really believe, one way or another, you can fit actual solutions in place instead of just short-sighted reactions that tend to accomplish nothing beyond making a lot of noise along the way to the same conclusion.
How cheap is inference, really? What about 'thinking' inference? What are the prices going to be once growth starts to slow and investors start demanding returns on their billions?
It is for a B2C with $20 as its lowest price point.
>A lot of weekly active users will use it once a week
That's still a lot of usage.
>and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.
And they're doing this every week, consistently? Sorry, but that's definitely not a 'large part' of usage.
ChatGPT is so useful, people without any technology background WANT to use it. People who are just about comfortable with the internet, see the applications and use it to ask questions (about recipes, home design, solving small house problems, etc).
Working with production code is basically jumping straight to the ball-of-mud phase, maybe somewhat less tangled but usually a much, much larger codebase. It's very hard to describe to an LLM what to even do, since you have such a complex web of interactions to consider in most mature production code.
The unprofitability of the frontier labs is mostly due to them not monetizing the majority of their consumer traffic at all.
"Novelty" comes to mind.
How are you measuring this? Are you actually saying that you _feel_ slightly more productive?
The destruction of the American government today is a direct result of social media supercharging existing negative internal forces that date back to the mid 20th century. The past six months of conservative rule have already led to six-figure deaths across the globe. That will eventually be eight to nine figures with the full impact of the healthcare and immigration devastation inside the United States itself. Far worse than Hiroshima.
Took a decade or two, but you can lay the blame at Facebook and Twitter's doorsteps. The US will never properly recover, though it's possible we may restore sanity to governance at some point.
Note that I'm not even going to bother arguing against your point and instead resort to personal attacks, because I believe it would be a waste of time to argue against people with poor judgment.
I think it is funny how people act like it is a new problem. If the AI is having trouble with a "ball of mud", don't make mud balls (or learn to carve out abstractions). This cognitive load is impacting everyone working on that codebase. Skilled engineers enable less skilled engineers to flourish by creating code bases where change is easy because the code is modular and self-contained.
I think one sad fact is many/most engineers don't have the skills to understand how to refactor mature code to make it modular. This also means they can't communicate to the AI what kind of refactoring they should make.
Without any guidance Claude will make mud balls because of two tendencies, the tendency to put code where it is consumed and the tendency to act instead of researching.
There are also some second level tendencies that you also need to understand, like the tendency to do a partial migration when changing patterns.
These tendencies are not even unique to the AI, I'm sure we have worked with people like that.
So to counteract these tendencies, just apply your same skills at reading code and understanding when an abstraction is leaky or a method doesn't align with your component boundary. Then you too can have AI building pretty good componentized code.
For example, in my current pet project I have a clear CQRS API, access-control proxies, and repositories for data access. Clearly defined service boundaries.
It is easy for me to see when the AI for example makes a mistake like not using the data repository or access control because it has to add an import statement and dependency that I don't want. All I have to do is nudge it in another direction.
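To make that concrete, here is a minimal sketch of the kind of layering being described; the names (Task, TaskRepository, AccessControlledTasks, the handlers) are hypothetical illustrations, not the commenter's actual code:

    # Hypothetical sketch: a repository for data access, an access-control proxy
    # in front of it, and separate command/query handlers (CQRS-style).
    from dataclasses import dataclass

    @dataclass
    class Task:
        id: int
        owner: str
        title: str

    class TaskRepository:
        """Data access only; no business rules, no auth checks."""
        def __init__(self) -> None:
            self._rows: dict[int, Task] = {}
            self._next_id = 1

        def add(self, owner: str, title: str) -> Task:
            task = Task(self._next_id, owner, title)
            self._rows[task.id] = task
            self._next_id += 1
            return task

        def get(self, task_id: int) -> Task:
            return self._rows[task_id]

    class AccessControlledTasks:
        """Access-control proxy: the only dependency handlers should import."""
        def __init__(self, repo: TaskRepository, current_user: str) -> None:
            self._repo, self._user = repo, current_user

        def create(self, title: str) -> Task:
            return self._repo.add(owner=self._user, title=title)

        def get(self, task_id: int) -> Task:
            task = self._repo.get(task_id)
            if task.owner != self._user:
                raise PermissionError("not your task")
            return task

    class CreateTask:  # command side of the CQRS split
        def __init__(self, tasks: AccessControlledTasks) -> None:
            self._tasks = tasks
        def __call__(self, title: str) -> int:
            return self._tasks.create(title).id

    class GetTask:  # query side
        def __init__(self, tasks: AccessControlledTasks) -> None:
            self._tasks = tasks
        def __call__(self, task_id: int) -> Task:
            return self._tasks.get(task_id)

    # A generated change that imports TaskRepository directly into a handler is
    # exactly the unwanted import/dependency that gives the mistake away.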
We saw the same thing with blockchain. We started seeing the most ridiculous attempts to integrate blockchain, by companies where it didn't even make any sense. But it was all because doing so excited investors and boosted stock prices and valuations, not because consumers wanted it.
Maybe it's more like Pogs.
Relative to its siblings, things have gotten worse. A GTX 970 could hit 60% of the performance of the full Titan X at 35% of the price. A 5070 hits 40% of a full 5090 for 27% of the price. That's overall less series-relative performance you're getting, for an overall increased price, by about $100 when adjusting for inflation.
But if you have a fixed performance baseline you need to hit, then as long as tech keeps improving, things will eventually get cheaper for that baseline. As long as you aren't also trying to improve in a way that moves the baseline up. Which so far has been the only consistent MO of the AI industry.
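For back-of-the-envelope arithmetic only (using the percentages quoted above, not verified benchmarks), the series-relative value works out roughly like this:

    # Series-relative performance per series-relative dollar,
    # using the percentages quoted above (not measured benchmarks).
    def relative_value(perf_fraction: float, price_fraction: float) -> float:
        return perf_fraction / price_fraction

    print(f"GTX 970 vs Titan X:   {relative_value(0.60, 0.35):.2f}")  # ~1.71
    print(f"RTX 5070 vs RTX 5090: {relative_value(0.40, 0.27):.2f}")  # ~1.48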
With all the insane exposure and downloads, how many people can't even be convinced to pay $20/month for it? The value proposition to most people is that low. So you are basically betting on LLMs making a leap in performance to pay for the investments.
…is exactly inevitablist framing. This claims perfect knowledge of the future based on previous uncertain knowledge of the future (which is now certain). You could have been making the same claims about the inevitability of sporks in the late 19th century and how cutlery drawers should adapt to the inevitable single-utensil future.
We do have self-driving taxis now, and they are so good that people will pay extra to take them. It's just not Tesla cars doing it.
And those systems were never "commodified" - your average mom is forcefully exposed to LLMs with every google search, can interact with LLMs for free instantly anywhere in the world - and we're comparing to a luxury product for nerds basically?
Not to forget that those massive companies are also very heavy in advertising - I don't think your average mom in the 80s heard of those systems multiple times a day, from multiple acquaintances AND social media and news outlets.
You cannot effectively employ a team of twenty junior developers if you have to review all of their code (unless you have like seven senior developers, too).
But this isn't a point that needs to be debated. If it is true that LLMs can be as effective as a team of 20 junior developers, then we should be seeing many people quickly producing software that previously required 20 junior devs.
> but the model is "smarter" than a junior engineer usually
And it is also usually worse than interns in some crucial respects. For example, you cannot trust the models to reliably tell you what you need to know, such as difficulties they've encountered or important insights they've learned and should know to communicate.
I think the core issue is separating the perception of value versus actual value. There have been a couple of studies to this effect, pointing to a misalignment towards overestimating value and productivity boosts.
One reason this happens imo, is because we sequester a good portion of the cognitive load of our thinking to the latter parts of the process so when we are evaluating the solution we are primed to think we have saved time when the solution is sufficiently correct, or if we have to edit or reposition it by re-rolling, we don't account for the time spent because we may feel we didn't do anything.
I feel like this type of discussion is effectively a top topic every day. To me, the hype is not in the utility it does have but in its future utility. The hype is based on the premise that these tools and their next iterations can and will make all knowledge-based work obsolete, but crucially, will yield value in areas of real need: cancer, aging, farming, climate, energy, etc.
If these tools stop short of those outcomes, then all the investment SV has committed to them at this point will have been over-investment.
Saying "well, we got 500 nuclear power plants" is like saying "well, we got excellent `npx create-app` style templates from AI. That's pretty huge impact. I don't know a single project post-2030 that didn't start as an AI-scaffolded project. That's pretty huge dude."
Notice how I did that too?
Something I struggle to internalise, even though I know it in theory.
Customers can't be told they're wrong, and the parenthetical I've internalised, but for non-taste matters they can often be so very wrong, so often… I know I need to hold my tongue even then owing to having merely nerd-level charisma, but I struggle to… also owing to having merely nerd-level charisma.
(And that's one of three reasons why I'm not doing contract work right now).
Back in 2009, I was expecting normal people to be able to just buy a new vehicle with no steering wheel required or supplied by 2019, not for a handful of geo-fenced taxis that slowly expanded over the 6 years from 2019 to 2025.
This seems super duper expensive and not really supported by the more reasonably priced Nvidia cards, though. SLI is deprecated, NVLink isn't available everywhere, etc.
And nothing I've seen about recent GPUs or TPUs, from ANY maker (Nvidia, AMD, Google, Amazon, etc) say anything about general speedups of 100x. Heck, if you go across multiple generations of what are still these very new types of hardware categories, for example for Amazon's Inferentia/Trainium, even their claims (which are quite bold), would probably put the most recent generations at best at 10x the first generations. And as we all know, all vendors exaggerate the performance of their products.
Every layer of an LLM runs separately and sequentially, and there isn't much data transfer between layers. If you wanted to, you could put each layer on a separate GPU with no real penalty. A single request will only run on one GPU at a time, so it won't go faster than a single GPU with a big RAM upgrade, but it won't go slower either.
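As a rough sketch of what that looks like in practice (assuming a Hugging Face Transformers + Accelerate setup; the model name below is just a placeholder), device_map="auto" will shard the layers across whatever GPUs are visible:

    # Hypothetical sketch: split one model's layers across two GPUs.
    # Requires: pip install torch transformers accelerate
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-13b-hf"  # placeholder; any causal LM works

    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # device_map="auto" assigns contiguous blocks of layers to cuda:0 and cuda:1,
    # so each card holds roughly half the weights. During generation, only the
    # hidden state is handed from one GPU to the next, which is why this runs at
    # roughly the speed of a single card that had all the RAM.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto",
    )

    inputs = tokenizer("Why split layers across GPUs?", return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))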
But that isn't the argument. The article isn't arguing about something failing or succeeding based on merit, they seem to have already accepted strong AI has "merit" (in the utility sense). The argument is that despite the strong utility incentive, there is a case to be made that it will be overall harmful so we should be actively fighting against it, and it isn't inevitable that it should come to full fruition.
That is very different from VR. No one was trying to raise awareness of the dangers of VR and fight against it. It just hasn't taken off because we don't really like it as much as people thought we would.
But for the strong AI case, my argument is that it is virtually inevitable. Not in any predestination sense, but purely because the incentives for first past the post are way too strong. There is no way the world is regulating this away when competitive nations exist. If the US tries, China won't, or vice versa. It's an arms race, and in that sense is inevitable.