This is the weirdest technology market that I’ve seen. Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word and now that gets rewarded with billions of dollars in valuation.
You must not have lived through the dot com boom. Almost everything under the sun was being sold under a website that started with an "e": ePets, ePlants, eStamps, eUnderwear, eStocks, eCards, eInvites...
>> away from long-term research toward commercial AI products and large language models (LLMs)
This feels more like what I see every day: the people in charge desperately looking for some way - any way - to capitalize on the frenzy. They're not looking to fund research; they just want to get even richer. It's pets.ai this time.
I don't know if that's indicative of the market as a whole though. Zuck just seems really gutted they fell behind with Llama 4.
I’ve worked for multiple startups and I’ve watched startup job boards most of my career.
A lot of VC backed startups have a founder with a research background and are focused on proving out some hypothesis. I don't see anything uncommon about this arrangement.
If you live near a University that does a lot of research it’s very common to encounter VC backed startups that are trying to prove out and commercialize some researcher’s experiment. It’s also common for those founders to spend some time at a FAANG or similar firm before getting VC funded.
From what I recall there were some biotech stocks in that era that do fit the bill.
After that, VC had become more like PE, investing in stuff that was working already but needed money to scale.
Why are these so different?
I wonder what changed. Does AI look like a safe bet? Or does every other bet seem to not have any reasonable return?
Biotech has been a YC darling. Was Ginkgo Bioworks not doing science experiments?
Clean energy was a big YC fad roughly 15 years ago. Billions were invested towards scientific research into biofuels, solar, etc.
The second reason is how much better it's going to be in the end. Fusion has to compete with hydro, nuclear, solar, and wind. It makes exactly the same energy, so the upside is already capped, unlike with AI, which brings something disruptive.
If consumption of slop turns out to be a novelty that goes away and enough time goes by without a leap to truly useful intelligence, the AI investment will go down.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic, I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and in a way LLMs just don't seem well suited for.
If you think about Theranos, Magic Leap, OpenAI, and Anthropic, they are all the same: one idea that's kinda plausible (well, if you don't look too closely), a slick demo, and well-connected founders.
Much as a lot of people dislike LeCun (just look at the Blind posts about him), he did set up and run a very successful team inside Meta, well, nominally at least.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
Please keep the feedback coming!
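For readers unfamiliar with the mechanism being described: here is a rough sketch of the redraw-in-place technique terminal apps like this use (a toy illustration only, not Ink's or Claude Code's actual code; the function name and frame contents are made up):

```python
import sys

def render_frame(lines, prev_height):
    r"""Redraw in place: move the cursor up over the previous frame and
    reprint every line. \x1b[{n}A moves the cursor up n lines; \x1b[2K
    erases the current line (standard ANSI/ECMA-48 escapes)."""
    out = []
    if prev_height:
        out.append(f"\x1b[{prev_height}A")   # jump back to the top of the old frame
    for line in lines:
        out.append("\x1b[2K" + line + "\n")  # erase, then rewrite the line
    sys.stdout.write("".join(out))
    sys.stdout.flush()
    return len(lines)                        # height to rewind past next time

# Two successive frames; the second overwrites the first in place.
h = render_frame(["step 1/3", "working..."], 0)
h = render_frame(["step 2/3", "still working..."], h)
```

The tradeoff in the comment falls out of this: once a frame grows taller than the viewport, lines spill into scrollback, where the cursor can no longer reach them. The app then either rewrites the whole region (causing scrolling and flicker) or leaves scrollback stale (causing garbled history when you scroll up).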
Capital always chases the highest rate of return as well, and margins on energy production are tight. Margins on performing labor are huge.
Nobody had a way to do silicon transistor manufacturing at scale until the traitorous eight flipped Shockley the bird and took a $1.4M seed investment from Sherman Fairchild.
Big bets on uncertain technology are what tech is supposed to be about.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
Manager: Now you "have" AI, release 10 features instead of 1 in the next month.
Devs: Spending 50% more working hours to make AI code "work" and deliver 10.
They're trying desperately to find profit in what so far has been the biggest boondoggle of all time.
There are trillions of labor dollars that can be replaced by software. The US alone has almost $12 trillion of labor annually.
If an AI company has a 10% shot of developing a product that can replace 10% of it, they are worth $120 billion in expected value. (These numbers are obviously just for illustration).
The unprecedented numbers are a simple function of the unprecedented market size. Nobody has ever had a chance of creating trillions of dollars of economic value in a handful of years before.
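The arithmetic above, spelled out (all figures are the commenter's illustrative numbers, not data):

```python
labor_market = 12e12     # ~$12T of annual US labor spend (illustrative figure from above)
p_success = 0.10         # chance the company builds a product that works at all
share_replaced = 0.10    # fraction of that labor the product could replace
expected_value = p_success * share_replaced * labor_market
print(f"${expected_value / 1e9:.0f}B")  # prints "$120B"
```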
that's not how profits work. Companies don't get paid for the value they create but for the value they can capture, otherwise the ffmpeg people would already be trillionaires.
If you have a dozen companies making the same general purpose technology, not product, your only hope is being able to slap ads on top of it, which is why they're so keen on targeting consumers rather than trying to automate jobs.
The first ventures were funding voyages to a New World thousands of miles away, essentially a different planet as far as the people then were concerned.
Venture capital for a new B2B application is playing it safe as far as risk capital goes
The phenomenon you're seeing is well described here: "The Perfect AI Startup" (https://www.bloomberg.com/opinion/newsletters/2025-09-29/the...)
“It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”
Despite that vagueness, Murati raised $2 billion in funding...
Other terminal apps make different tradeoffs: for example Vim virtualizes scrolling, which has tradeoffs like the scroll physics feeling non-native and lines getting fully clipped. Other apps do what Claude Code does but don’t re-render scrollback, which avoids flickering but means the UI is often garbled if you scroll up.
Tech debt isn't something that even experienced large teams are immune to. I'm not a huge TypeScript fan, so their choice to run the app on Node struck me as a trade-off: development speed, given the experience the team had, at the expense of long-term growth and performance. I regularly experience pretty intense flickering, rendering issues, high CPU usage, and even crashes, but that doesn't stop me from finding the product incredibly useful.
Developing good software, especially in a format that is relatively revolutionary, takes time to get right, and I'm sure whatever efforts they have internally to push forward a refactor will be worth it. But, just like in any software development, refactors are prone to timeline slips and scope creep. A company having tons of money doesn't change the nature of problem-solving in software development.
So it's made it easier for people to be taken advantage of at the grocery store etc.
When the bubble pops, and it's very close to popping, there's going to be a lot of burning piles of cash with no viable path to recover that money.
I'd argue it's a failure of education or general lack of intelligence. The existence of a tool to speed the process up doesn't preclude people understanding the process.
I don't think this relates as closely to AI as you seem to. I'm simply better at building things, and doing things, with AI than without. Not just faster, better. If that's not true for you, you're either using it wrong or maybe you already knew how to do everything already - if so, good for you!
It gets lost on people in techcentric fields because Claude's at the forefront of things we care about, but Anthropic is basically unknown among the wider populace.
Last I'd looked a few months ago, Anthropic's brand awareness was in the middle single digits; OpenAI/ChatGPT was somewhere around 80% for comparison. MS/Copilot and Gemini were somewhere between the two, but closer to OpenAI than Anthropic.
tl;dr - Anthropic has a lot more to gain from awareness campaigns than the other major model providers do.
* Sign in with Apple on the website
* Buy subscriptions from iOS In App Purchases
* Remove our payment info from our account before the inevitable data breach
* Give paying subscribers an easy way to get actual support
As a frequent traveller I'm not sure if some of those features are gated by region, since some people said they can do some of those things; but if that is true, it still makes the UX worse than the competitors'.
However, I speak with a small subset of our most experienced engineers and they all love Claude Sonnet 4.5. Who knows if this lead will last.
I don't see what the basis for this is that wouldn't be equally true for OpenAI.
Anthropic's edge is that they very arguably have some of the best technology available right now, despite operating at a fraction of the scale of their direct competitors. They have to start building mind and marketshare if they're going to hold that position, though, which is the point of advertising.
That's because you're trying to make sense of it as a technology market. It's not. It's a resource extraction market, and the VCs are the ones running the logging operation. Their sole mission is to find a dependable way to strip a forest bare, and they've been using the same playbook for decades.
Those "science experiments" you're talking about? They aren't the product. They're the story, the sizzle. They are the disposable lighter used to start the fire; the VCs have no intention of keeping it lit forever. The real tool is the chainsaw, and the "science experiment" is the brand name printed on the side.
Think of it as clear-cutting. The dot-com bubble was one forest. The story then was that a company losing millions selling pet food online was a "new economy" giant because it had "eyeballs." That was the sales pitch for the chainsaw. VCs funded hundreds of these operations, created a frenzy, and took the most plausible-sounding ones public. The IPO wasn't a milestone; it was the moment they sold the timber and exited the forest, leaving the stumps and worthless pulp for the pension funds and retail investors.
The "long-term" part of their strategy isn't about the health of any single tree or company. It's about finding the next forest to clear-cut. After dot-coms, it was social media. Now, it's the AI forest. They aren't betting on AI; they're betting on their ability to sell the world on the idea that this particular forest is magical and will grow forever.
So you're right, what you're seeing is weird. But it's not a new kind of weirdness. It's the oldest story in finance. A bubble being inflated so the smart money can cash out, leaving everyone else to marvel at the fancy new chainsaw after the forest is already gone.
Maybe investing in all well-connected AI startups is safer than trying to pick the winners and losers?
This is the reason they haven't bothered to provide an image generator yet - because Chat users are not their focus.
It's not like this isn't following exactly the same hype cycle as every other technological transformation.
Maybe it's cheap insurance to invest in, say, LeCun just in case JEPA or the animal intelligence approach takes off, but if it does show significant signs of progress there'd also be opportunity to invest later, or in one of the dozen copycats that will emerge. In the end it'll be the giants like Google and Microsoft that will win.
Between inflation, fiscal capture, and the inane plethora of ridiculous financial vehicles that are used to move capital around these days, the argument could be made that the money was already funny. This is just the drop of the final veil, saying "well it's not like these numbers mean anything anymore. I do have enough yachts. Fuck it, see what you can do with it".
That's not all that new. Commercial fusion power startups are an example. I think the first one was General Fusion, founded in 2002. Today, there are around 50 of them. Every single one of those "remains a science experiment", and probably has much lower chance of success than some of the AI science experiments.
Of course, fusion startups have apparently "only" received about $10 bn in funding to date, so pale in comparison to the overall AI market. But if you just look at the AI "science experiments", it's possible the amounts would be comparable.
This is VCs FOMOing as global-economy-threatening levels of leverage are being bet on an AI transformation that, by even the most optimistic estimates, cannot achieve a tiny portion of the required ROI in the required time.
Of course OpenAI has tons of money and can branch off in all kind of directions (image, video, n8n clone, now RAG as a service).
In the end I think they will all be good enough, and both Anthropic's and OpenAI's leads will evaporate.
Google will be left to win because they already have all the customers with GSuite, and OpenAI will be incorporated at a massive loss into Microsoft, which is already selling to all the Azure customers.
Magic Leap was an honest if overhyped effort that didn't achieve product-market fit.
Meanwhile, products from OpenAI and Anthropic have both done useful work for me this week.