Ok, so they knew where Claude went wrong and could correct for it.
So unless you have 15+ years of experience, you'd better add more reps. You can always switch to LLM code assist in a blink; there is no barrier to entry at all.
Agreed, but I wish I had it as a teacher while learning. The amount of help my interns need from me has dropped by at least 50%, and what remains is the non-trivial stuff, which is completely worth my time to coach and mentor them on.
You learn far far faster from reading code and writing tests than you do just writing code alone.
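A concrete version of that advice: when reading unfamiliar code, pin its behavior down with a quick characterization test. A minimal sketch in Python (the `slugify` function here is a made-up stand-in for whatever code you're studying):

```python
def slugify(title: str) -> str:
    """Hypothetical function under study: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Writing tests like these forces you to read the code closely
# and state what you believe it does -- then the test tells you
# whether your mental model was right.
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"
print("all assertions passed")
```

The point isn't the test suite itself; it's that writing the assertions makes you engage with the code far more actively than skimming it does.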
I've long suspected that the bottleneck to software development is code generation not keeping up with idea generation.
I think they add to expertise honestly.
I also haven't had much luck getting LLMs to generate useful code. I'm sure part of that is that the stack I'm using (Elixir) is much less popular than many others, but I have tried everything, even the new phoenix.new, and it is only about an 80 to 90% solution; that remaining percentage is full of bugs or terrible design patterns that will absolutely bite in the future. In nearly everything I've tried, it introduces bugs, and hunting those down is worse to me than if I had just done the work manually in the first place. I have spent hours trying to coach the AI through a particular task, only to have the end solution need to be thrown away and started from scratch.
Speaking personally, my skills are atrophying the more I use the AI tools. It still feels like a worthwhile trade-off in many situations, but a trade-off it is.
This is not about syntax but about learning how to create solutions.
When you read solutions you merely memorise existing ones. You don't learn how to come up with your own.
Where I’ve found them best is for generating highly focused examples of specific APIs or concepts. They’re much better at that, though hallucinations still show up from time to time.
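To illustrate the kind of "highly focused example" that works well: a prompt like "show me how `itertools.groupby` handles non-consecutive keys" (a hypothetical prompt, using Python for illustration) tends to produce a tight, verifiable snippet like this:

```python
from itertools import groupby

# groupby only groups *consecutive* items, so sort by the key first --
# this is exactly the kind of gotcha a focused example surfaces.
words = ["banana", "apple", "cherry", "avocado", "citrus"]
words.sort(key=lambda w: w[0])

groups = {letter: list(ws) for letter, ws in groupby(words, key=lambda w: w[0])}
print(groups)
# {'a': ['apple', 'avocado'], 'b': ['banana'], 'c': ['cherry', 'citrus']}
```

Snippets this small are easy to run and check immediately, which is also why hallucinations in them are cheap to catch.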
You can't solve programming problems without reading code.
Period.
The more code you read the better you get at solving problems because your internal knowledgebase grows.
In the real world, away from those whose salary depends on marketing these agentic tools, an LLM is a context shredder. It provides plausible code snippets that are globally incoherent and don't fit style. CONVENTIONS and RULES files are a kludge, a sloppy hack.
These tools flatten the deep, interconnected knowledge required to work on complex systems into a series of shallow, transactional loops that pretend to satisfy the user.
The skill being diminished is not the ability to write a single-page utility or single-purpose script. It is the ability to build and maintain a mental model of a complex machine. The ability to churn out a hundred disparate toy tools is not evidence of a superior learning method, it is evidence of a tool that excels at tasks with no deep interconnected context.
I wrote about my process for non-vibe-coded projects here: https://simonwillison.net/2025/Mar/11/using-llms-for-code/
> The skill being diminished is not the ability to write a single-page utility or single-purpose script. It is the ability to build and maintain a mental model of a complex machine.
That's the thing that LLMs help me with 90% of the time. It's also why I don't think non-programmers armed with LLMs are a threat to my career.