I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.
That is what I find so wild about the current conversation and debate. I have Claude Code toiling away building my personal organization software right now that uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
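The extraction step can be as small as a single prompt that asks the model for structured JSON. A minimal sketch, assuming the Anthropic Python SDK; the model name and task schema here are illustrative placeholders, not the actual setup described above:

    import json
    import anthropic  # assumes the official Anthropic Python SDK is installed

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def notes_to_tasks(notes: str) -> list[dict]:
        """Turn a blob of unstructured notes into a list of task dicts."""
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            system=(
                "Extract actionable tasks from the user's notes. "
                "Reply with only a JSON array of objects with keys "
                "'title', 'project', and 'due' (ISO date or null)."
            ),
            messages=[{"role": "user", "content": notes}],
        )
        # Parse the model's JSON reply into plain Python data.
        return json.loads(message.content[0].text)

    print(notes_to_tasks("Dentist sometime next week; ship the blog redesign by Friday."))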
When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don't think we've seen any evidence of it; in fact, we've seen the opposite.
It is really the same kind of thing, but the model is usually "smarter" than a junior engineer. You can say something like "hmm, I think an event bus makes sense here" and the LLM will do it in 5 seconds. The problem is that there are certain behavioral biases that require active reminding (though I think some MCP integration work might resolve most of them; this is just based on the current Claude Code and Opus/Sonnet 4 models).
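For context on the kind of change being asked for there, a bare-bones in-process event bus is only a few lines. This is an illustrative sketch, not the code the agent would necessarily produce:

    from collections import defaultdict
    from typing import Callable

    class EventBus:
        """Minimal publish/subscribe bus: handlers register per event name."""

        def __init__(self) -> None:
            self._handlers: dict[str, list[Callable]] = defaultdict(list)

        def subscribe(self, event: str, handler: Callable) -> None:
            self._handlers[event].append(handler)

        def publish(self, event: str, payload: dict) -> None:
            for handler in self._handlers[event]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("task_created", lambda p: print("notify:", p["title"]))
    bus.publish("task_created", {"title": "ship the blog redesign"})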
lol sounds like a true nightmare. Code is a liability. Faster junior coding = more crap code = more liability.
However, if you can quickly read code, then see and succinctly communicate a more optimal solution, you can easily 10x-20x your ability to code.
I'm beginning to believe it may primarily come down to having the vocabulary and linguistic ability to succinctly and clearly state the gaps in the code.
Do you believe you've managed to overturn one of the most common pieces of wisdom in the software engineering industry, that reading code is much harder than writing it? If you have, then you should write up a white paper for the rest of us to follow.
Because every time I've seen someone say this, it's from someone who doesn't actually read the code they're reviewing.