Face it, the only reason you can do a decent review is years of hard-won lessons, not years of reading code without writing any.
> Hand it off. Delegate the implementation to an AI agent, a teammate, or even your future self with comprehensive notes.
The AI agent just feels like a way to create tech debt on a massive scale while not being able to identify it as tech debt.
The benefit you might gain from LLMs hinges on being able to discern good output from bad.
Once that's lost, the output of these tools becomes a complete gamble.
I'd compare it to gym work: some exercises work best until they don't, and then you switch to a less effective exercise to get you out of your plateau. Same with code and AI. If you're already good (because of years of hard-won lessons), it can push you that extra bit.
But yeah, default to the better exercise and just code yourself, at least on the project's core.
doubt intensifies
1. Learn how to describe what you want in an unambiguous dialect of natural language.
2. Submit it to a program that takes a long time to transform that input into a computer language.
3. Review the output for errors.
Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.
In many cases it falls on the developer to talk the PM out of the bad idea and then into a better solution. Agents aren’t equipped to do any of that.
For any non-trivial problem, a PM with the same problem and 2 different dev teams will produce drastically different solutions 99 times out of 100.
This is the bit I am having problems with: if you are rarely looking at the code, you will never have the skills to actually debug that significant escalation event.
Unless you are writing some shitty code for a random product that will be used for a demo and then trashed, code boils down to a simple thing:
Code is a way to move ideas into the real world through a keyboard
So, reading that the future is using a random machine with averaged output (by design), and that this average-quality output will be good enough because the same random machine will generate tests of the same quality: this is ridiculous.
Tests are probably the last thing you should build randomly; you should put a lot of thought into them: do they make sense? Does your code make sense? With tests, you are forced to use your own code, sometimes as your users will.
Writing tests is a good way to force yourself to be empathic with your users
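For instance, a minimal sketch of a test written from the user's point of view; `parse_duration` is a made-up stand-in, defined inline only so the example runs:

```python
import re


def parse_duration(text: str) -> int:
    """Parse strings like '1h30m' into seconds (illustrative stand-in)."""
    total = 0
    for amount, unit in re.findall(r"(\d+)([hms])", text):
        total += int(amount) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total


def test_parse_duration_reads_like_a_user_would_write_it():
    # A user types "1h30m" and expects it to mean ninety minutes,
    # regardless of how the parser is implemented internally.
    assert parse_duration("1h30m") == 5400
    assert parse_duration("45s") == 45
```

Writing the test by hand forces you to type exactly what a user would type, which is the empathy being argued for here.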
People who are coding through AI are the equivalent of the pre-2015-era system administrators who renewed TLS certificates manually. They are people who can be (and are being) replaced by bash scripts. I don't miss them and I won't miss this new kind.
The 2nd time it will likely be pretty different because they’ll use what they learned to build it better. The 3rd time will be better still, but each time after that it will essentially be the same product.
An LLM will never converge. It definitely won’t learn from each subsequent iteration.
Human devs are also a lot more resilient to slight changes in requirements and wording. A slight change in language that wouldn’t impact a human at all will cause an LLM to produce completely different output.
Regarding that string search, you really have to fight Claude to get it to use tree-sitter consistently; I have to do a search through my codebase to build an audit list for this stuff.
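A rough sketch of what such an audit pass could look like: walk the repo and flag files that appear to fall back to plain string matching on source code instead of going through tree-sitter. The patterns and paths here are assumptions for illustration, not what the commenter actually runs:

```python
import pathlib
import re

# Heuristic patterns for "parsing source with string hacks"; these are
# illustrative guesses and would need tuning to the agent's actual output.
SUSPECT_PATTERNS = [
    re.compile(r"""\.find\(\s*["']def """),             # naive string search for defs
    re.compile(r"""re\.(search|findall)\(.*class"""),   # regexing over source text
]


def audit(root: str = "src") -> list[str]:
    """Return files that look like they bypass tree-sitter."""
    flagged = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if "tree_sitter" in text:
            continue  # already going through the parser
        if any(p.search(text) for p in SUSPECT_PATTERNS):
            flagged.append(str(path))
    return flagged


if __name__ == "__main__":
    for path in audit():
        print("review:", path)
```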
Of course there are times when you need someone extremely skilled at a particular language. But from my experience I would MUCH prefer to see how someone builds out a problem in natural language and have some assurance of its success. I've been in too many interviews where candidates trip over syntax, pick the wrong language, or are just not good at memorization and don't want to look dumb looking things up. I usually prefer pair programming interviews where I tailor my assistance to the expectations of the position. AI can essentially do that for us.
Humans are very non-deterministic: if you ask me to solve a problem today, the solution will be different from last week, last year or 10 years ago. We’ve learnt to deal with it, and we can also control the non-determinism of LLMs.
And humans are also very prone to hallucinations: remember those 3000+ gods that we’ve created to explain the world, or those many religions that are completely incompatible? Even if some are true, most of them must be hallucinations just by being incompatible with the others.
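On the earlier point about controlling LLM non-determinism, a minimal sketch assuming the OpenAI Python client; the model name is an assumption, and `seed` is best-effort rather than a hard guarantee of identical output:

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "Summarize RFC 2119 in one line."}],
    temperature=0,        # remove sampling randomness
    seed=42,              # ask for reproducible sampling where supported
)
print(resp.choices[0].message.content)
```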
If you are very experienced, you won’t solve the problem differently day to day. You probably would with a 10 year difference, but you won’t ever be running the next model 10 years out (even if the technology matures), so there’s no point in doing that comparison. Solving the same problem with the same constraints in radically different ways day to day comes from inexperience (unless you’re exploring and doing it on purpose).
Calling what LLMs do hallucinations and comparing it to human mythology is stretching the analogy into absurdity.
I believe the author was trying to specifically distinguish their workflow from that: they are prompting for changes in terms of the code itself, and reviewing the code that is generated (perhaps also describing the functionality and testing it).