The challenge with the bubble/not bubble framing is the question of long term value.
If the labs stopped spending money today, they would recoup their costs. Quickly.
There are possible risks (could prices go to zero because of a loss leader?), but I think Anthropic and OpenAI are both sufficiently differentiated that they would be profitable, extremely successful companies by all accounts if they stopped spending today.
So the question is: at what point does any of this stop being true?
If that is the case, at some point the music is going to stop and they will either perish or have to crank up their subscription costs.
I use Claude Code exclusively for the initial version of all new features, then I review and iterate. With the Max plan I can have many of these loops going concurrently in git worktrees. I even built a little script to make the workflow better: http://github.com/jarredkenny/cf
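For anyone curious, the worktree side of it is nothing fancy; something roughly like this (paths and branch names here are just illustrative, not what the script actually does):

```
git worktree add -b feature/auth ../myapp-auth
git worktree add -b feature/billing ../myapp-billing
# one terminal tab per worktree, with `claude` running in each
```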
As I said above, I don’t think a single AI company is remotely in the black yet. They are driven by speculation and investment and they need to figure out real quick how they’re going to survive when that money dries up. People are not going to fork out $24k a year for these tools. I don’t think they’ll spend even $10k. People scoff at paying $70+ for internet, a thing we all use basically all the time.
I have found it rather odd that they have targeted individual consumers for the most part. These all seem like enterprise solutions that need to charge large sums and target large companies tbh. My guess is a lot of them think it will get cheaper and easier to provide the same level of service and that they won’t have to make such dramatic increases in their pricing. Time will tell, but I’m skeptical.
Maybe. But that would probably be temporary. The market is sufficiently dynamic that any advantage they have right now probably isn't stable or defensible longer term. Hence the need to keep spending. But what do I know? I'm not a VC.
My assessment so far is that it is well worth it, but only if you're invested in using the tool correctly. It can cause as much harm as it can increase productivity, and I'm quite fearful of how we'll handle this at my day job.
I also think it's worth saying that, IMO, this is a very different fear from what drives "butts in seats" arguments. I.e. I'm not worried that $Company won't get their value out of the Engineer because the bot is doing the work for them. I'm concerned that the Engineer will use the tool poorly and create more work for reviewers having to deal with high-LOC changes.
Reviews are difficult and "AI" provides a quick path to slop. I've found my $200 well worth it, but the #1 difficulty I've had is not getting features to work, but getting the output to be scalable and maintainable code.
Sidenote: one of the things I've found most productive is deterministic tooling wrapping the LLM. E.g. robust linters like Rust Clippy, set to automatically run after Claude Code (via hooks), help bend the LLM away from many bad patterns. It's far from perfect of course, but it's the thing I think we need most atm. Determinism around the spaghetti-chaos-monkeys.
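To make that concrete: hooks are just commands Claude Code runs on certain events, so the "deterministic tooling" can be as dumb as a tiny gate that runs Clippy and fails loudly. A rough sketch of the kind of thing I mean (illustrative, not my exact setup):

```rust
use std::process::{exit, Command};

fn main() {
    // Run Clippy over the whole workspace with warnings promoted to errors.
    let status = Command::new("cargo")
        .args(["clippy", "--all-targets", "--", "-D", "warnings"])
        .status()
        .expect("failed to run cargo clippy");

    // Propagate failure so the hook (and therefore the agent) can't ignore it.
    if !status.success() {
        eprintln!("clippy found issues; fix them before moving on");
        exit(status.code().unwrap_or(1));
    }
}
```

The value isn't in the wrapper itself, it's that the check is deterministic: the same sloppy pattern gets flagged every single time, no matter what mood the model is in.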
If you discuss a plan with CC well upfront, covering all the integration points where things might go off the rails, and perhaps checkpoint the plan in a file and then start a fresh CC session for coding, then CC will usually one-shot a 2k-LoC feature uninterrupted, which is very token-efficient.
If the plan is not crystal clear, people end up arguing with CC over this and that. Token usage will be bad.
Now I just find myself exasperated at its choices and constant forgetfulness.
The only answer that matters is the one to the question "how much more are you making per month from your $200/m spend?"
I'm just worried that I'm doing it wrong.
Claude 3.7 Sonnet supposedly cost "a few tens of millions of dollars"[1], and they recently hit $4B ARR[2].
Those numbers seem to give a fair bit of room for salaries, and it would be surprising if there wasn't a sustainable business in there.
[1] https://techcrunch.com/2025/02/25/anthropics-latest-flagship...
[2] https://www.theinformation.com/articles/anthropic-revenue-hi...
Which puts the current valuations I've heard pretty much in the right ballpark. Crazy, but it could make sense.
But with productivity software in general, only a few large companies seem to be able to get away with it: the Office suite, CRMs such as Salesforce.
In the graphics world, Maya and 3DS Max. Adobe has been holding on.
If you need to repeatedly remind it to do something, though, you can store it in CLAUDE.md so that it is part of every chat. For example, in mine I have asked it not to invoke git commit, but to review the git commit message with me before committing, since I usually need to change it.
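For what it's worth, the entry in mine is just a couple of plain sentences; from memory (not the exact wording of the file) it's something like:

```
## Git
- Never run `git commit` yourself.
- Draft the commit message, show it to me, and wait for my approval before committing.
```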
There may be a maximum amount of complexity it can handle. I haven't reached that limit yet, but I can see how it could exist.
I've found, though, that if you can steer it in the right direction it usually works out okay. It's not particularly good at design, but it's good at writing code, so one thing you can do is write the classes yourself with some empty methods marked // TODO Claude: implement, then ask it to implement the methods marked TODO Claude in file foo. This way you get the structure that you want, but without having to implement all the details.
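In Rust terms (names made up, just to show the shape of the skeleton I hand over):

```rust
// I write the structure and the method signatures I want,
// then ask Claude to fill in the bodies marked TODO.
pub struct ReportBuilder {
    entries: Vec<String>,
}

impl ReportBuilder {
    pub fn new() -> Self {
        Self { entries: Vec::new() }
    }

    pub fn add_entry(&mut self, raw_line: &str) {
        // TODO Claude: implement (parse and validate the line, then store it)
        todo!()
    }

    pub fn render_markdown(&self) -> String {
        // TODO Claude: implement
        todo!()
    }
}
```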
What kind of things are you having issues with?
The goal for investors is to be able to exit their investment for more than they put in.
That doesn't mean the company needs to be profitable at all.
Broadly speaking, investors look for sustainable growth. Think Amazon, when they were spending as much money as possible in the early 2000s to build their distribution network and software and doing anything they possibly could to avoid becoming profitable.
Most of the time companies (and investors) don't look for profits. Profits are just a way of paying more tax. Instead the ideal outcome is growing revenue that more than covers costs (i.e., the company could turn a profit if it wanted to), with the excess money invested in growing more.
Note that this doesn't mean the company is raising money from external sources. Not being profitable doesn't imply that.