If you destroy the GPU, you can write it off as a loss, which reduces your taxable income.
It's possible you could come out ahead by selling everything off, but then you'd have to pay expensive people to manage the sell-off, logistics, etc. What a mess. Easier to just destroy everything and take the write-off.
> From 2013 to 2020, cloud infrastructure capex rose methodically—from $32 billion to $119 billion. That's significant, but manageable. Post-2020? The curve steepens. By 2024, we hit $285 billion. And in 2025 alone, the top 11 cloud providers are forecasted to deploy a staggering $392 billion—MORE than the entire previous two years combined.
https://www.wisdomtree.com/investments/blog/2025/05/21/this-...
"What do you mean the women in this game have proportions roughly equivalent to what's actually possible in nature?!?!"
It is a giant pain to sell off this gear if you are using in-house folks to do so. Usually it's not worth it, which is why things end up trashed, as you state. If I have a dozen 10-year-old servers to get rid of, it's usually not worth anyone's time or energy to list them for $200 on eBay and figure out shipping logistics.
However, at scale the situation and numbers change - you can call in an equipment liquidator who can wheel out 500 racks full of gear at a time, and you get paid for the disposal on top of it. Usually a win/win situation, since you no longer have expensive people trying to figure out whom to call to get rid of it, how to do data destruction properly, etc. In almost all cases I've seen, this helps the bottom line, on top of saving internal man-hours.
If you're in "failed startup being liquidated for asset value" territory, then the receiver/those in charge typically have a fiduciary duty to find the best reasonable outcome for the investors. It's rarely throwing gear with residual value in the trash. See: used Aeron chair market.
https://en.wikipedia.org/wiki/Fermi_paradox#Hypothetical_exp...
"Alien species may isolate themselves in virtual worlds"
To me, this all sounds like an “end-of-the-world” nihilistic wet dream, and I don’t buy the hype.
Is it just me?
We would have to 100x medical research spending before it was clearly overdone.
Because the only thing that gets the executive class hornier than new iPhone-tier products is getting to lay off tons of staff. It sends the stock price through the roof.
It follows from there that an iPhone-tier product that also lets them lay off tons of staff would be like fucking catnip to them.
I'm paid about 16x what an electronics engineer makes. Salaries in IT are completely unrelated to the person's effort compared to other white-collar jobs. It would take some manager an entire career to reach what I made after 5 years. I may be 140 IQ, but I'm also a dumbass in social terms!
I had the same thought you did back then. If I could build a company with 3 people pulling a couple million of revenue per year, what did that mean to society when the average before that was maybe a couple dozen folks?
Technology concentrates gains to those that can deploy it - either through knowledge, skill, or pure brute force deployment of capital.
whereas my experience describing my problem and actually asking the AI is much, much smoother.
I'm not convinced the "LLM+scaffolding" paradigm will work all that well. Sanity degrades with context length, and even the models with huge context windows don't seem to use it all that effectively. RAG searches often give lackluster results. The models fundamentally seem to do poorly with using commands to accomplish tasks.
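For anyone unfamiliar with the retrieval step being criticized, here's a toy sketch in Python (a bag-of-words ranker standing in for the learned-embedding search real RAG systems use; the names and corpus are made up):

    # Toy stand-in for a RAG retrieval step: rank document chunks against
    # a query by bag-of-words cosine similarity. Real systems use learned
    # embeddings, but the shallow-match failure mode is similar.
    from collections import Counter
    from math import sqrt

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
        q = Counter(query.lower().split())
        return sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())),
                      reverse=True)[:k]

    chunks = ["the build uses cmake", "tests run under pytest", "deploy with terraform"]
    print(top_k("how do I run the tests", chunks))  # surfaces the pytest chunk first

The retrieved chunks get pasted into the model's context, which is exactly where the context-length degradation above starts to bite.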
I think fundamental model advances are needed to make most things more than superficially automatable: better planning/goal-directed behavior, a more organic connection to RAG context, automatic gym synthesis, and RL-based fine-tuning (that holds up to distribution shift).
I think that will come, but I think if LLMs plateau here they won't have much more impact than Google Search did in the '90s.
For the same reason people are obsessed with replacing all blue-collar jobs. Every cent that a company doesn't have to spend on its employees is another cent that can enrich the company's owners.
You're not going to fix lifestyle diseases with drugs, and lifestyle diseases are the leading cause of death.
It's difficult to have much empathy for the "learn to code" crowd, who seemingly almost got a sense of joy out of watching those jobs and lifestyles get destroyed. Almost some kind of high-school revenge fantasy - the nerd finally gets one up on the prom king. Otherwise I'm not sure where the vitriol came from. There were way too many private conversations and overheard discussions in the office for me to think these were isolated opinions.
That said, it's not everyone in tech. Just a much larger percentage than I ever thought, which is depressing to think about.
It's certainly been interesting to watch some folks who a decade ago were all about "only skills matter, if you can be outcompeted by a robot you deserve to lose your job" make a 180 on the whole topic.
What's easier, educate your people and feed them well to build a strong and healthy nation OR let them rot and shovel billions to pharma corps in the hope of finding a magic cure?
There's no such thing as taking people's jobs; nobody and nothing is going to take your job except Jay Powell, and productivity improvements cause employment to increase, not decrease.
Humans have so far completely failed to develop any drug with minimal side effects that cures lifestyle diseases; it's magical thinking to assume AI can definitely do it.
That is what enabled our current lifestyles. It is a good thing. Now it is just coming to the next area.
E.g., if OpenAI is responsible for any damages caused by ChatGPT, then the service shuts down until you waive liability, and then it's back up. Similarly, if companies are responsible for the chat bots they deploy, then they can buy insurance, put up guard rails around the chat bot, or not use it.
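To make "guard rails" concrete, a minimal sketch (hypothetical topic list and wording, not any real vendor's API) of the kind of wrapper a deployer might put around a chat bot to limit liability:

    # Toy guard rail: refuse and escalate instead of letting the bot answer
    # on high-risk topics. The topic list and handoff message are made up.
    BLOCKED_TOPICS = ("medical advice", "legal advice", "refund")

    def guarded_reply(user_message: str, model_reply: str) -> str:
        text = (user_message + " " + model_reply).lower()
        if any(topic in text for topic in BLOCKED_TOPICS):
            return "I can't help with that here - let me connect you to a human agent."
        return model_reply

Insurance plus something like this is the "internalize the liability" path; shutting the bot off is the other.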
> shovel billions to pharma corps in the hope of finding a magic cure?
What do you mean finding? We already found it (GLP-1 agonists). Ozempic is even owned by a nonprofit (Novo Nordisk). See, everything's fine.
Can't believe I have to state the obvious and say that this is only a potential gain if the power/cooling is from renewable sources. But I do.
Oh, in this case GP seems to be including sunscreen as a treatment for lifestyle diseases. Pretty sure those don't have side effects, but Americans don't get the good ones.
With AI it is white collar work.
You're correct. But it doesn't matter. Remember the San Francisco protests against tech? People will kill a golden goose if it's shinier than their own.
If you want to understand our current moment, I would urge you to study that history.
Maybe it's my post-communist background, though, and not relevant for the rest of the world.
I'd give building with Sonnet 4 a fair shot. It's really good - not accurate all the time, but pretty good.
Like you have a brilliant idea, but unfortunately don't have any hard skills. Now you don't have to pay enormous sums of money to geeks and have to suffer them to make it come true. Truly a dream!
And I was explaining that I work in tech, so I live in the future to some degree, but that ultimately, even with HIPAA and other regulations, there's too much of a gain here for it not to be deployed eventually. And those people, in their time, are going to be used differently when that happens. I was speculating that it could be used for interviews as well, but I think I'm less confident there.
Just looking at what happened with chess, go, strategy games, protein folding etc, it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms etc - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
The reason to be excited economically for this is if it happens it will be massively deflationary. Pretending CEOs are just going to pocket the money is economically stupid.
Being able to use a superintelligence has been a long-time dream too.
What is depressing is the amount of tech workers who have no interest in technological advancement.
It's self-defeating but predictable. (Hence why the protests were tolerated, or even backed, by NIMBY interests.)
My point is the same nonsense can be applied to someone not earning a tech wage celebrating tech workers getting replaced by AI. It makes them poorer, ceteris paribus. But they may not understand that. And the few that do may not care (or may have a way to profit off it, directly or indirectly, such that it's acceptable).
I don't mind if software jobs move from writing software to verifying software either if it makes the whole process more efficient and the software becomes better as a result. Again, not what is happening here.
What is happening, at least in AI optimist CEO minds is "disruption". Drop the quality while cutting costs dramatically.
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
A number of them seem to have skyrocketed with quality of life and personal wealth. I suspect my ancestors were skinny not because they were educated on eating well but because they lacked the same access to food we have in modern society, especially super-caloric foods. I don't super want to go back to an ice-cream-scarce world. Things like meat consumption are linked to colon cancer, and most folk are unwilling to give that up or practice meat-light diets.

People generally like smoking! Education campaigns got that down briefly, but it was generally not because people didn't want to smoke - it's because they didn't want cancer. Vaping is clearly popular nowadays. Alcohol, too! The WHO says there is no safe amount of alcohol consumption and attributes lots of cancer to even light drinking. I suspect people would enjoy being able to regularly have a glass of wine or beer and not have it cost them their life.
But the next step is obviously increased formalism via formal methods, deterministic simulators, etc. - basically so that one could define an environment for an RL agent.
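A minimal sketch of what "define an environment" means here - a gym-style interface around a deterministic, cheaply verifiable toy task (the task itself is a placeholder):

    # Toy gym-style environment: deterministic, with a verifiable reward,
    # which is exactly what RL training needs. Task: guess a hidden number.
    class GuessEnv:
        def __init__(self, target: int = 7, limit: int = 10):
            self.target, self.limit, self.steps = target, limit, 0

        def reset(self) -> int:
            self.steps = 0
            return 0  # initial observation

        def step(self, action: int):
            self.steps += 1
            reward = 1.0 if action == self.target else 0.0
            done = reward > 0 or self.steps >= self.limit
            return action, reward, done  # observation, reward, terminated

Formal methods and simulators matter because they make the reward verifiable rather than vibes-based.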
Unless GPUs are like post-Covid used cars, you're going to sell them at a loss, which can be written off. Write-offs don't have to involve destroying the asset. I don't know where you got that idea.
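Toy arithmetic (made-up numbers, assumed flat 21% tax rate) showing why selling at a loss beats destruction:

    TAX_RATE = 0.21
    book_value = 10_000   # remaining book value of the GPU
    sale_price = 3_000    # what a liquidator might pay

    destroy = TAX_RATE * book_value                           # deduction only
    sell = sale_price + TAX_RATE * (book_value - sale_price)  # cash + deduction on the loss
    print(f"destroy: ${destroy:,.0f}, sell: ${sell:,.0f}")    # $2,100 vs $4,470

The deduction only offsets part of the loss; cash in hand is cash in hand.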
We're all far closer to poor than we are to having enough capital to live off of efficiency increases. AI is the last thing the capitalist class requires to finally throw off the shackles of humanity, of keeping around the filthy masses for their labor.
Dota, League, hell - Roblox, Twitch, Discord - have some of the most data on how angry humans are when they play vidya.
Producing things cheaper sounds great, but just because it's produced cheaper doesn't mean it is cheaper for people to buy.
And it doesn't matter if things are cheap if a massive number of people don't have incomes at all (or even a reasonable way to find an income - what exactly are white collar professionals supposed to do when their profession is automated away, if all the other professions are also being automated away?)
Sidenote btw, but I do think it's funny that the investor class doesn't think AI will come for their role...
To me the silver lining is that I don't think most of this comes to pass, because I don't think current approaches to AGI are good enough. But it sure shows some massive structural issues we will eventually face.
> They seem to be totally convinced that this will happen.
The two groups of people are not the same. I, for example, belong to the 2nd but not the 1st. If you have used the current-gen LLM coding tools, you will realize they have gotten scary good.
After the dotcom crash, much of this infrastructure became distressed assets that could be picked up for peanuts. This fueled a large number of new startups in the aftermath that built business models figuring out how to effectively leverage all of this dead fiber when you don't have to pay the huge capital costs of building it out yourself. At the time, you could essentially build a nationwide fiber network for a few million dollars if you were clever, and people did.
These new data centers will find a use, even if it ends up being by some startup who picks it up for nothing after a crash. This has been a pattern in US tech for a long time. The carcass of the previous boom's whale becomes cheap fuel for the next generation of companies.
And since when do business executives NOT pocket the money? Pretty much the only exception is when they reinvest the savings into the business, for more growth, but that reinvestment and growth usually is only something the rest of us care about if it involves hiring.
Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently. At that point I'd rather get the money anyway and spend the day at the beach.
It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need that much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, and neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs.
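For the curious, roughly what such a stdlib-only scraper looks like (the URL is a placeholder):

    # Minimal scraper with no third-party dependencies: fetch a page and
    # print its links using only urllib and html.parser from the stdlib.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    print(parser.links)

Nothing here needs deep knowledge, but before LLMs you still had to know these modules existed.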
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore - maybe they're not even questions anymore - and almost from one day to the next.
When, a few years ago, I moved from Eastern Europe (where I had 1 Gbit/s to my apartment for years) to the UK, I was surprised that "the best" internet connection I could get was an approximately 40 Mbit/s phone line. But it's a small town, and over the past few years even we have gotten fiber, up to 2 Gbit/s now.
I'm surprised the US still has the issues you mentioned. Have you considered Starlink (fuck Musk, but the product is decent) or alternatives?
https://sloanreview.mit.edu/article/the-multiplier-effect-of...
There's this paper [1] you should read; it sparked an entire new AI dawn and might answer your question.
If you replace lawyers with AI, poor people will be able to take big companies to court and defend themselves against frivolous lawsuits, instead of giving in and settling. If you replace doctors, the cost of medicine will go down dramatically, and so will waiting times. If you replace financial advisors, everybody will have their money managed in an optimal way, making them richer and less likely to make bad financial decisions. If you replace creative workers, everybody will have access to the exact kind of music, books, movies and video games they want, instead of having to settle for what is available. If you automate away delivery and drivers (particularly with drones), the price of prepared food will fall dramatically.
This doesn't even require any "conspiracy" among CEOs, just people with a vested interest in AI hype who act in that interest, shaping the type of content their organizations will produce. We saw something lesser with the "return to office" frenzy, just because many CEOs realized a large chunk of their investment portfolio was in commercial real estate. That was only less hyped because, I suspect, there were larger numbers of CEOs with an interest in remaining remote.
Outside of the tech scene, AI is far less hyped and in places where CEOs tend to have little impact on the media it tends to be resisted rather than hyped.
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
Many of us have been through previous hype-cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing bust (reinforcement learning). A few claims in your note like "it's only a matter of time before we have domain-specific ASI" are jarring - as you are "assuming the sale". LLMs are great as a tool for some usecases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.
One is, of course, the size of the country, but that's hardly an "excuse." It does contribute though.
The other big reason is lack of competition in the ISP space, and this is compounded by a distinctly American captured system where the owners/operators of the "public" utility poles shut out new entrants and have no incentive to improve the situation.
Meanwhile the nationwide regulatory agencies have been stripped down and courts have de-toothed them, reducing likelihood of top-down reform, and often these sorts of problems inevitably end up running into the local and state government vs national government split that is far more severe in the US.
So it's one of those problems that is surprising to some degree, but when you read about things like public utility telephone poles captured by corporate interests, it's also distinctly ridiculous and American, and not surprising at all.
Believe it or not, most SWEs and white-collar workers in general don't get these perks, especially outside the US, where most firms have made sure tech workers in general are paid "standard wages" even if they are "good".
Investors don't perform work (labour); they take capital risk. An AI does not own capital, and thus cannot "take" that role.
If you're asking about the role of an investment manager, that's not an investor - that's just a worker, which can and will be automated eventually. Robo-advisors are already quite common. The owner of capital can use AI to "think" for them in choosing what capital risk to take.
And as for the massive number of people who don't have income - I don't think that will come to pass either (just as you don't think AGI will come to pass). Mostly because the pace of this automation will decline, as it's not that trivial - the low-hanging fruit will have been picked ASAP, and the difficult cases left will take ages to automate.
For one entire rented or owned house, it's just a call and a drill away.
I think they have the capability to do it, yes. Maybe it's not the best tool you can use- too expensive, or too flexible to focus with high accuracy on that single task- but yes you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text I'm sure they can do jobs that BERT can't.
> they still can't fit anywhere in production
Not sure what you mean by "production", but there's an enormous number of people using them for work.
La ti da. My 50Mbps in an urban area doesn't even provide 10Mbps up.
> In an urban area too.
Funnily enough, my farmland has gigabit service.
But I, unfortunately, don't live there. Maybe some day I'll figure out how to afford to build a house on that land. But, until then, shitty urban internet it is, I guess.
Why would it play like the average? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training data set.
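A toy sketch of the distinction (made-up scores, not any real model's decoder): greedy decoding picks the single top-scoring token, while sampling draws from the whole softmax distribution, so lower-probability moves still get played.

    import math
    import random

    logits = {"pawn": 2.1, "knight": 1.9, "resign": -1.0}  # made-up token scores

    def softmax(scores, temperature=1.0):
        exps = {t: math.exp(s / temperature) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: v / total for t, v in exps.items()}

    greedy = max(logits, key=logits.get)  # always "pawn"
    probs = softmax(logits, temperature=0.8)
    sampled = random.choices(list(probs), weights=list(probs.values()))[0]
    print(greedy, sampled)  # sampled is sometimes "knight", occasionally "resign"

Either way, "the most common word in the training data" isn't what's being picked.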
I've said the same thing as you, that there is a LOT left to be done with current AI capabilities, and we've barely scratched the surface.
Imagine the reception that studies of female aggression get.
At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
It can already be "cheaply verified" in the sense that if you write a proof in, say, Lean, the compiler will tell you if it's valid. The hard part is coming up with the proof.
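To make the asymmetry concrete, a toy example in Lean 4 (checking is mechanical; writing the proof term is the part that takes thought):

    -- The compiler either accepts this proof or rejects it; verification is
    -- cheap and automatic. Coming up with the proof term is the hard part.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b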
It may be possible that some sort of AI at some stage becomes as good, or even better than, research mathematicians in coming up with novel proofs. But so far it doesn't look like it - LLMs seem to be able to help a little bit with finding theorems (e.g. stuff like https://leansearch.net/), but to my understanding they are rather poor beyond that.
That's true for many jobs. The only reason many people have a job is because of a variety of regulations preventing that job from being outsourced.
> At that point I'd rather get the money anyway and spend the day at the beach.
You won't get the money and spend the day at the beach; you'll starve to death.
If the questions were given as-is (without a human formalizing them), and the LLM didn't need domain solvers, and the LLM was not trained on them already (which happened with FrontierMath) - I would be impressed.
Based on the past history with FrontierMath [1][2] I remain skeptical. The skeptic in me says that this happens prior to big announcements (GPT-5) to create the hype.
Finally, this article shows that LLMs were just bluffing on USAMO 2025 [3].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
Based on the past history with FrontierMath & AIME 2025 [1][2] I would not trust announcements which can't be independently verified. I am excited to try it out, though.
Also, the performance of LLMs was not even at bronze level [3].
Finally, this article shows that LLMs were just mostly bluffing [4].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
In any case, there's also a difference between two ideas. One is that it could be me or another person doing the same job - maybe that person can be paid less because of their lower cost of living, but in the end they will put in the same effort as I do. The other is that a tool can do the job effortlessly, and the only reason I have to suffer over it is to justify a salary that has no reason to exist. Then again, just force the company to pay me while allowing them to use whatever tool they want to get the job done.
Calculators didn't replace mathematicians, they replaced Computers (as an occupation). To the point that most people don't even know it used to be a job for people.
I say calculators but there is a blurry line between early electronic computers and calculators. Portable electronic calculators also replaced the slide rule, around the late 1970s, which had been the instrument of choice for engineers for around 350 years!
In fact, the first programmers were mainly women, because they came from a Computer background.