I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.
Perhaps LLMs can be modified to step outside the circle, but as of today, it would be akin to monkeys typing.
I’m getting maybe a 10-20% productivity boost using AI on mature codebases. Nice but not life changing.
But I can't quite articulate why I believe LLMs never step outside the circle, given that they are seeded with some random noise via the sampling temperature. I could just be wrong.
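For context, the "random noise via temperature" point refers to how LLMs sample tokens: logits are divided by a temperature before the softmax, so higher temperatures flatten the distribution and make unlikely tokens more probable. A minimal sketch of that mechanism, using a toy logit vector (all names here are illustrative, not from any real LLM API):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    # Higher temperature flattens the distribution (more randomness);
    # lower temperature sharpens it toward the argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the resulting distribution.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

At a very low temperature the sampler almost always picks the highest-logit token, so the "noise" the comment mentions is real but bounded: temperature reshuffles probability mass among tokens the model already assigns weight to, which is one way to read the claim that it never steps outside the circle.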
10-20% productivity boosts have happened regularly over the course of my career. They are usually either squandered by inefficient processes, or we start building more complex systems.
When Rails was released, for certain types of projects, you could move 3 or 4x faster almost overnight.