AI would need to (1) perform better than a person in a particular role, (2) do so for less than that person's total cost, and (3) do so with fewer mistakes and reduced liability.
Humans are objectively quite cheap. In fact, for the output of a single human, we're the cheapest we've ever been in history (particularly relative to the cost of the investment in AI and the kinds of roles AI would be 'replacing').
If there are any economic shifts, they will be increases in per-person efficiency, requiring a smaller workforce. I don't see that changing significantly in the next 5-10 years.
I think the flaws are going to be solved, and if that happens, what do you think? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.
I believe this is denial. The statement that the best AI can't be reliable enough to do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? You get the naysayers that say, "Well, the scaling laws don't apply," but there's a lot of people who think they do apply.
Anyway, I appreciate the response. I don't know how old you are, but I'm kind of old. And I've noticed that I've become much more cynical and pessimistic, not necessarily for any good reasons. So maybe it's just that.
It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
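To make that failure mode concrete, here is a stripped-down sketch of the kind of bug I mean. It is not the actual code: the table names, the amounts, and the PostgreSQL/psycopg2 setup are all stand-ins. Each method updates the same two rows inside a transaction, but in opposite order, so two simultaneous calls for the same invoice can each take one row lock and then wait forever for the other.

    # Hypothetical sketch, not the real code: two methods lock the same rows
    # in opposite order, so simultaneous calls can deadlock in the database.
    import psycopg2  # assumes PostgreSQL; any database with row-level locks behaves similarly

    def settle_invoice(dsn: str, invoice_id: int) -> None:
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            # Locks the invoice row first, then the account row.
            cur.execute("UPDATE invoices SET status = 'paid' WHERE id = %s", (invoice_id,))
            cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")

    def refund_invoice(dsn: str, invoice_id: int) -> None:
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            # Opposite order: the account row first, then the invoice row.
            # If this runs while settle_invoice() holds the invoice lock,
            # each transaction ends up waiting on the other: deadlock.
            cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 1")
            cur.execute("UPDATE invoices SET status = 'refunded' WHERE id = %s", (invoice_id,))

The fix is boring and well known (take locks in one consistent order, or keep transactions short and retry on deadlock errors), but you only know to apply it if you know both methods will be called at the same time, which is exactly the context the AI didn't have.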
In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.
But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.
I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?
We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?
I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.
The Navy was far-sighted in that decision. Tech companies are not nearly so forward thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?
I disagree with that statement when it comes to software developers. They are actually quite expensive. They typically enter the workforce with 16 years of education (assuming they have a college degree), and may also have a family and a mortgage. They have relatively high salaries, plus health insurance, and they can't work when they're sleeping, sick or on vacation.
I once worked for a software consultancy where the owner said, "The worst thing about owning this kind of company is that all my capital walks out the door at six p.m."
AI won't do that. It'll work round the clock if you pay for it.
We do still need a human in the loop with AI. In part, that's to check and verify its work. In part, it's so the corporate overlords have someone to fire when things go wrong. From the looks of things right now, AI will never be "responsible" for its own work.
- talking to people to understand how to leverage their platform and to get them to build what I need
- working in closed-source codebases. I know where the traps and the foot guns are; Claude doesn’t
- telling people no, that’s a bad idea, don’t do that. This is often more useful than a “you’re absolutely right” followed by the perfect solution to the wrong problem
In short, I can think and I can learn. LLMs can’t.
That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I not only have to compete against every other candidate, I also need to compete against the ethereal promise of what AI might bring in the near future.
You’re right that it wouldn’t replace everyone, but businesses will need fewer people for maintenance.
I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people. But I doubt that.
The two things I hear are:
1. You'll always need a human in the loop
2. AI isn't any good at writing code
The first one sounds more plausible, but it means fewer programmers over time.
This one is huge. I’ve personally witnessed many situations where a multi-million dollar mistake was avoided by a domain expert shutting down a bad idea. Good leadership recognizes this value. Bad leadership just looks at how much code you ship.
So no, I'm not worried.
What’s your plan when today’s AI functionality costs 10,000x more?
Before this I was a JavaScript developer. I can absolutely see AI replacing most JavaScript developers. It felt really autistic with most people completely terrified to write original code. Everything had to be a React template with a ton of copy/paste. Watch the emotional apocalypse when you take React away.
And I think you're right. Being cross-functional is super important. That's why I think the next consolidation is going to roll up into product development. Basically, the product developers who can use AI and manage the full stack are going to be successful, but I don't know how long that will last.
What's even more unsettling to me is it's probably going to end up being completely different in a way that nobody can predict. Your predictions, my predictions might be completely wrong, which is par for the course.
The codebases I work on, I can pretty much delegate more and more to AI as time goes on. There's no doubt about that. They're not big unwieldy codebases that have lots of technical debt necessarily but maybe those get quickly replaced over time.
I just don't see the argument that the AI will always make mistakes holding up over time. I would love to be proven wrong, though, with a counter-argument that jibes.
It's interesting you mention the loss of knowledge. I've heard that China has adopted AI in their classrooms to teach students at a much faster pace than western countries. Right now I'm using it to teach me how to write a reverb plug-in because I don't know anything about DSP and it's doing a pretty good job at that.
So maybe there has to be some form of understanding. I need to understand how reverb works and how DSP works in order to make decisions about it, not necessarily to do the implementation myself. And some things are hard enough just to understand; maybe that's where the differentiation comes in.
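For a flavor of what I mean, here is a toy feedback comb filter, one of the classic building blocks of a Schroeder reverb. The constants are arbitrary illustration values, not anything from my plug-in; the point is that the code is trivial once you understand the recurrence, and the understanding is the part I can't outsource.

    # Toy feedback comb filter (a Schroeder reverb building block).
    # Delay length and feedback gain are arbitrary illustration values.
    import numpy as np

    def comb_filter(x: np.ndarray, delay: int = 1116, feedback: float = 0.84) -> np.ndarray:
        """y[n] = x[n] + feedback * y[n - delay]: a train of geometrically decaying echoes."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            echo = y[n - delay] if n >= delay else 0.0
            y[n] = x[n] + feedback * echo
        return y

A few of these in parallel plus a couple of allpass filters in series gives you the classic Schroeder topology. Whether the AI or I type that in matters much less than knowing why the feedback gain has to stay below 1.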
I have a few coworkers who are deep into the current AI trends. I also have the pleasure of reviewing their code. The garbage that gets pushed is insane. I feel I can’t comment on a lot of the issues I see because there’s just so much slop and garbage that hasn’t been thought through that commenting would mean rewriting half of their PR. Maybe it speaks more to their coding ability that they accept that stuff. I see comments that are clearly AI-written and pushed as if they haven’t been reviewed by a human. I guard public-facing infrastructure and apps as much as I can for fear of this having preventable impacts on customers.
I think this is just more indicative that AI assistants can be powerful, but only in the hands of an already decent developer.
I kind of lost respect for these developers deep into the AI ecosystem who clearly have no idea what’s being spat out and are just looking to get 8 hours of productivity in the span of 2 or 3.
99% of the work I've ever received from humans has been error-riddled garbage. <1% of the work I've received from a machine has been error-riddled garbage. Granted, I don't work in code; I work in a field that's more difficult for a machine.
Maybe I am terrible at prompting. But I am using AI all the time in the form of chat (rather than having it code directly) and find it very useful for that. I have also used code gen in other contexts (making websites/apps) and been able to generate lots of great stuff quickly. I also compare my style of prompting to people who are claiming it writes all their code for them and don't see much difference. So while it's possible my prompts aren't perfect in my scenarios at work, it doesn't seem likely that they are so bad that I could ever improve them enough to change the output from literally useless to taking my job.
To give some context to this, the core of my job that I am referring to here is basically taking documentation about Bluetooth and other wireless protocols and implementing code that parses all of that and shows it in a GUI that runs on desktop and Android.
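To give a heavily simplified flavor of that (this is a toy, not my actual code, and the example bytes are made up): a BLE advertising payload is a sequence of AD structures, each one a length byte, an AD type byte, and then length minus one bytes of data.

    # Toy parser for BLE advertising data: [length][AD type][length - 1 bytes of data]...
    def parse_ad_structures(payload: bytes) -> list[tuple[int, bytes]]:
        """Split an advertising payload into (ad_type, data) pairs."""
        fields = []
        i = 0
        while i < len(payload):
            length = payload[i]
            if length == 0:  # a zero-length structure ends the payload
                break
            ad_type = payload[i + 1]
            fields.append((ad_type, payload[i + 2 : i + 1 + length]))
            i += 1 + length
        return fields

    # Flags (0x01) followed by a complete local name (0x09).
    print(parse_ad_structures(bytes.fromhex("020106") + bytes([6, 0x09]) + b"hello"))

The parsing itself is usually the easy part; figuring out what the fields are supposed to mean, and under which conditions they show up, is where the time goes.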
There are a lot of immediate barriers for gen AI. First, half of my debugging involves stuff like repeatedly pairing Bluetooth speakers to my phone or physically plugging things in, and AI just can't do that.
Second, the documentation about how these protocols work is gnarly. Giving AI the raw PDFs and asking basic questions about them yields very poor results. I assume there aren't enough of these kinds of documents in their training data. Also, a lot of the information is not text-based and is instead contained in diagrams, which I don't think the AI parses at all. This is all assuming there is an actual document to work with. For the latest Bluetooth features, what you actually have is a bunch of Word documents with people arguing half in the tracked changes and the other half of the argument in the email chain.
Maybe I could take all that information and condense it into a form that the AI can parse? Not really. The information is already very complex and specific, and I don't see how I could explain it to an AI in a way that would be any less ambiguous than just writing the code myself. That would also assume I actually understand the specification, which I never do until I write the code.
Maybe I could just choose one specific little feature I need and get AI to do it? That works, but the feature was probably only 5 lines of code anyway, so I spent more time writing the prompt than I would have spent writing the code. It was the last 2 hours of reading the spec that would actually have been useful to automate. Maybe AI could have written all the code in my 1k-line PR, but when the PR took 4 months and required me to literally fly to another country to test it with other hardware, writing the code is not the bottleneck.
Maybe the AI models will get better and be able to do all this. But that isn't just a case of AI models continuing to get the kinds of incremental improvements we have been seeing. They would need a leap forward to something people might call AGI to be able to do all this. Maybe that will happen tomorrow, but it seems just as likely to happen in 5 years or 5 decades. I don't see anyone right now with an idea of how to get there.
And if I'm wrong and AI can do all this by next year? I'll just switch to writing FPGA code which my company desperately needs more people to do and which AI is another order of magnitude more useless at doing.
I predicted commoditization happening back in 2016 when I saw no matter what I learned, it was going to be impossible to stand out from the crowd on the enterprise dev side of the market or demand decent top of market raises.[1]
I knew back then that the answer was going to be filling in the gaps with soft skills, managing larger more complex problems, being closer to determining business outcomes, etc.
I pivoted into customer facing cloud consulting specializing in application development (“application modernization”). No I am not saying “learn cloud”.
But focusing on the commoditization angle: when I was looking for a job in late 2023, after being Amazoned, I submitted literally hundreds of applications as a Plan B. Each open req had hundreds of applicants, and my application, let alone my resume, was viewed maybe 5 times (LinkedIn shows you).
My plan A of using my network and targeted outreach did result in 3 offers within three weeks.
The same pattern emerged in 2024 when I was out looking again.
I’m in the interviewer pool at my current company; our application-to-offer rate is 0.4%.
[1] I am referring to the enterprise dev market where most developers in the US work
However, that worry is replaced by the fear that so many people could lose their jobs that a consequence could be a complete collapse of the social safety net that is my only income source.
I just don’t see OpenAI being viable long term.
If I were their tech lead I would make them do exactly that. Over and over until they got the point or they got a bad performance review and got PIPed out.
The quality bar is the bar, and it's there for a reason (in my case, I work on security and safety critical stuff, so management has my back (usually)).
I'm glad when people can use AI to help themselves. If it speeds them up, great. I don't care if it writes 100% of their code. They still are responsible for making sure it holds the quality bar. If they constantly submit code that doesn't meet the quality bar they have a performance problem regardless of the tools they used.
Yes. And we will have millions of systems thus millions of employed developers (or sysadmins).
1) Nearly all the job losses I've dealt with were when a company ran low on money, because it cost too much or took too long to build a product or get it to market.
2) LLMs are in the sweet spot of doing the things I don't want to do (writing flawless algorithms from known patterns, sifting through 2000-line logs) while not touching the things I'm good at (business cases, feature prioritization, juice). Engineering work now involves more fact checking and "data sheet reading" than it used to, which I'm happy to do.
3) Should programming jobs be killed, there will be more things to sell, and more roles for business/product owners. I'm not at all opposed to selling the things that the AI is making.
4) Also Gustafson's Law. All the cloud stuff led to things like Facebook and Twitch, which created a ton more jobs. I don't believe we'll see things like "vibe code fixer". But we'll probably see things like robotics running on a low latency LLM brain which unlocks a different host of engineering challenges. In 10 years, it might be the norm to create household bots and people might be coding apps based on how they vacuum the house and wipe the windows.
5) I don't take a high salary. The buffer between company profit and my cost is big enough that they don't feel the need to squeeze every drop out of me. They make more profit paying me, the AI, and my colleagues than they would paying just the AI.