The reality right now is that current LLMs still often produce stuff that costs me more time to fix than it would take to do it myself. So I still write a lot of code myself. It is very impressive that I can even think about no longer writing code myself. But my job as a software developer is very, very secure.
LLMs are still unable to build maintainable software. They don't understand what humans want or what the codebase needs. The stuff they build is good-looking garbage. One example I saw yesterday: a dev committed code where the LLM had generated 50 lines of React, complete with all those useless comments and, for good measure, a setTimeout(), for something that should have been one HTML div with two Tailwind classes. They can't write idiomatic code, because they write the code they were prompted for.
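To make that concrete, here is roughly what I mean. I'm paraphrasing from memory, so the hook usage and class names are hypothetical, not a quote from the actual commit. The committed version looked something like this:

    // Track whether the banner should be shown yet
    const [visible, setVisible] = useState(false);

    useEffect(() => {
      // Delay rendering to avoid layout flicker
      const id = setTimeout(() => setVisible(true), 100);
      // Clean up the timer on unmount
      return () => clearTimeout(id);
    }, []);

    return visible ? <div className="banner">{children}</div> : null;

when all that was actually needed was something like:

    <div className="flex items-center">{children}</div>

The state, the effect, the timer, and the comments add nothing; they just make the diff look like work was done.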
Almost daily I get code, commit messages, and even issue discussions that are clearly AI-generated. Dealing with good-looking but useless content costs me time.
To be honest, I hope LLMs get better soon, because right now we are in an annoying phase where software developers bog me down with AI-generated stuff. It looks good but doesn't help with writing usable software that can be deployed in production.
To get to that point, LLMs need to get maybe a hundred times faster, maybe a thousand or ten thousand times. They need a much bigger context window. Then they could have an inner dialogue in which they really "understand" how a given feature should be built in a given codebase. That would be very useful. But it would also use so much energy that I doubt it will be cheaper to let an LLM run those "thinking" passes over and over again than to pay a human to build the software. Perhaps this will be feasible in five or eight years. But not in two.
And this won't be AGI. This will still be a very, very fast stochastic parrot.
So the question is: do you think the current road leads to AGI? And how far down the road is it? As far as I can see, there is no "status quo bias" answer to those questions.
Compare the automobile. Cars today are a lot nicer than they were 50 years ago, and a lot more efficient. Does that mean cars that never need fuel or recharging are coming soon, just because the trend has been toward higher efficiency? No, because the fundamental physics of drag still limits efficiency. Moreover, it turns out that building 100% efficient engines with 100% efficient regenerative brakes is really hard, and "just throw more research at it" isn't a silver bullet. That's not to say there won't be many future improvements, but those improvements probably won't be any bigger than the jump from GPT-3 to o1, which does not extrapolate to what OP claims their models will do in 2027.
AI in 2027 might be the metaphorical brand-new Lexus to today's beat-up Kia. That doesn't mean it will drive ten times faster or use ten times less fuel. And even if high-end cars can be significantly more efficient than what average people drive, that doesn't mean the extra expense is actually worth it.