Ok, so they knew where Claude went wrong and could correct for it.
And none of these AI companies are profitable. Imagine how much it will cost or how much it will be enshittified when the investors come looking for their returns.
If everyone whose code was illegally used to train these models won a copyright lawsuit against Claude, it would suddenly not be so good at writing Swift code.
Do we really want to bet on Disney losing their AI lawsuit?
Honestly, I realize my comment is not adding much to the discussion, but the AI fatigue is real. At this point I think HN and other tech forums would do well to ban posts like this on the topic.
Imagine if we were upvoting stories about how people are getting lots of coding done easier with Google and StackOverflow. It would be rightfully ridiculed as vapid content. Ultimately that’s what this blog post is. I really don’t care to hear how yet another dingus “programmer” is using AI to do their hobby/job for them.
Agreed, but I wish I had had it as a teacher while learning. The amount of help my interns need from me has dropped by at least 50%, and what remains is the non-trivial stuff, which is completely worth my time spent coaching and mentoring them.
Indeed, but one potential saving grace is the open-sourcing of good models (e.g. Meta's Llama). If they continue to open source competitive models, we might be able to stave that off.
You learn far far faster from reading code and writing tests than you do just writing code alone.
I've long suspected that the bottleneck in software development is code generation not keeping up with idea generation.
I think they add to expertise honestly.
Also, I code all day and have yet to hit the $10/month cap on Claude that the JetBrains plugin offers.
I also haven't had much luck getting LLMs to generate useful code. I'm sure part of that is that the stack I'm using (Elixir) is much less popular than many others, but I have tried everything, even the new phoenix.new, and it is only about an 80 to 90% solution; the remaining percentage is full of bugs or terrible design patterns that will absolutely bite in the future. In nearly everything I've tried, it introduces bugs, and hunting those down is worse to me than if I had just done the work manually in the first place. I have spent hours trying to coach the AI through a particular task, only to have the end solution need to be thrown away and started from scratch.
Speaking personally, my skills are atrophying the more I use the AI tools. It still feels like a worthwhile trade-off in many situations, but a trade-off it is.
So, firstly, the limits on the $100/month plan are reasonably high in my experience. I do hit them, but it takes quite a bit.
> I have trouble convincing myself to give Autodesk $50/month and I need that software for my primary hobby.
Before I used Claude Code, I absolutely would have agreed with you: $100–$200 every month is just a ridiculous amount for any software. After using Claude Code... yeah, that's just how darn good it is!
> Imagine if we were upvoting stories about how people are getting lots of coding done easier with Google and StackOverflow.
You know, something I've thought about before (as in, before LLMs) is just how hard (impossible?) it would be for me to program without an internet connection. I'm constantly Googling stuff.
I can absolutely imagine that if I had been a programmer before widespread internet use, and the internet had launched with things like Google and lots of resources out of the gate... yeah, that would be revelatory! I'm not saying AI is necessarily the same, but it's something to think about.
This is not about syntax but about learning how to create solutions.
When you read solutions you merely memorise existing ones. You don't learn how to come up with your own.
That Claude Code costs $200/month really isn't something that needs to be "revealed."
Yes, I know of https://xkcd.com/1053/
Where I’ve found them best is for generating highly focused examples of specific APIs or concepts. They’re much better at that, though hallucinations still show up from time to time.
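To make that concrete, here's the kind of narrow, single-concept snippet I mean (my own hand-written illustration of Python's itertools.groupby, not actual LLM output):

    import itertools

    words = ["apple", "avocado", "banana", "blueberry", "cherry"]

    # groupby only groups *consecutive* items, so sort by the key first
    for letter, group in itertools.groupby(sorted(words), key=lambda w: w[0]):
        print(letter, list(group))

Small, self-contained, and easy to verify by just running it; that's where they've been most reliable for me.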
I started with the pay-as-you go plan; I'm currently using the Claude Pro plan at $20/month which is great for my use case.
> And none of these AI companies are profitable. Imagine how much it will cost or how much it will be enshittified when the investors come looking for their returns.
I suspect investors will give AI companies a lot of runway. OpenAI went from $0 to over $10 billion in revenue in less than 3 years. I know that's not profit, but it bodes well for the future.
(As an aside, it took Microsoft 22 years to reach $10 billion in revenue.)
Anthropic went from $0 in 2021 to over $4 billion in about 3 years.
In comparison, it took Twitter about eleven years after its founding in 2006 to become profitable in 2017. And for much of that time, Twitter didn't have a viable business model.
I don't think investors are concerned.
Regarding lawsuits, I'm sure the AI companies will win some and lose some. Either way, after all is said and done, there will be settlements and agreements. Nothing is going to stop this train.
But—chalk one up for Anthropic for winning their case, Meta getting a favorable ruling and all the rest [1].
> won a copyright lawsuit against Claude it would suddenly not be so good at writing swift code.
I suspect Claude is going to get really good at writing Swift--Anthropic is working with Apple [2].
> …but the AI fatigue is real
You might want to get off the internet if you're already suffering from AI fatigue; things are just getting started.
[1]: "AI companies start winning the copyright fight" -- https://www.theguardian.com/technology/2025/jun/30/ai-techsc...
[2]: "Apple Partners With Anthropic for Claude-Powered AI Coding Platform" -- https://www.macrumors.com/2025/05/02/apple-anthropic-ai-codi...
You can't solve programming problems without reading code.
Period.
The more code you read, the better you get at solving problems, because your internal knowledge base grows.
No, we are going to have a haves and have-nots situation where the poor kid who wants to get a better job and get out of poverty is competing with the dude who has a $100-200/month AI subscription writing code for him.
You need to adjust this figure for inflation. Microsoft became huge decades ago, when money was worth more; its first $10 billion year came around 1997, and $10 billion in 1997 dollars is roughly $20 billion today.
It’s not being on the Internet that’s giving me AI fatigue. My employer is forcing me to use it. It’s being used at the drive-thru window even. “Touch grass” isn’t a valid argument here.
This idea that AI will improve after the AI companies' inevitable exits is one that isn't backed by history. I'd like someone to name one exited unicorn company that offers better value and/or lower cost now than in its pre-exit state. YouTube? Netflix? Facebook? Uber? MongoDB? Atlassian? Slack?
It is legitimately hard to find a tech company that is a more desirable company to patronize post-exit than pre-exit.
We are so obviously in the customer-acquisition phase of AI. People in this thread are talking about spending hundreds per month on AI services, and I think all of those prices will double soon after large private players like OpenAI go public or get acquired. It happened with Netflix, it happened with YouTube (more ads, YouTube Premium), it happened with Uber; the list goes on and on.
I do feel AI is impeding my own learning in this way, and it's probably the aspect I'm most concerned about.
If AI is truly progressing at the amazing rate that everyone is claiming then we should be continually getting better capability at the same price.
But I'm also not enjoying the idea of software engineering requiring a subscription to a service provider, in much the same way creative professionals get boxed into Adobe Creative Cloud, because up until this point most developer tools have been free or very cheap.
It then does a bad job, which, like, makes sense if it's using the flash model.
I find paying for Claude significantly less offensive than paying for e.g. Creative Cloud, because I'm largely paying for the compute resources.
In the real world, away from those whose salary depends on marketing these agentic tools, an LLM is a context shredder. It provides plausible code snippets that are globally incoherent and don't fit style. CONVENTIONS and RULES files are a kludge, a sloppy hack.
These tools flatten the deep, interconnected knowledge required to work on complex systems into a series of shallow, transactional loops that pretend to satisfy the user.
The skill being diminished is not the ability to write a single-page utility or single-purpose script. It is the ability to build and maintain a mental model of a complex machine. The ability to churn out a hundred disparate toy tools is not evidence of a superior learning method, it is evidence of a tool that excels at tasks with no deep interconnected context.
I wrote about my process for non-vibe-coded projects here: https://simonwillison.net/2025/Mar/11/using-llms-for-code/
> The skill being diminished is not the ability to write a single-page utility or single-purpose script. It is the ability to build and maintain a mental model of a complex machine.
That's the thing that LLMs help me with 90% of the time. It's also why I don't think non-programmers armed with LLMs are a threat to my career.
I guess it remains to be seen, but the cost per token is only going to decrease, not increase. Also, remember that OpenAI and Anthropic get a lot of revenue from large companies licensing and otherwise paying to use these models.
E.g., a job interviewer in the world of AI will be asking you to perform more complex example work under the assumption that you have access to AI.
“It’ll only get cheaper” until it doesn’t.
Moore's law is already dead in the context of the underlying hardware manufacturing technology that powers AI. Once neural processor design reaches an end state, we can't count on any significant lithography breakthroughs to drive a reduction in cost.
For example, graphics cards aren't getting cheaper, and if they are, certainly not very quickly. A lot of consumer electronics categories key to prior tech booms, like laptops and workstations, aren't getting cheaper or faster very quickly either.
Heck, I know a web developer who was using a Raspberry Pi as their main workstation.
Prior to AI the person with a used ThinkPad wasn’t really at a disadvantage when it came to most development tasks.