If anything, it's a tool for junior devs to get better and spend more time on the architecture.
Using AI-generated code without fully understanding it (i.e. operated by a non-programmer) is just a recipe for disaster.
I wouldn't be surprised if people continued to deny the actual intelligence of these models even in a scenario where they solved the Riemann hypothesis.
"Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" - cit
* Learn more of the entire stack, especially the backend and devops.
* Embrace the increased productivity on offer to ship more products, solo projects, etc.
* Be as selective as possible in how you spend your productive time: being uber-effective can mean thinking and planning on longer timescales.
* Set up an awesome personal knowledge management system and agentic assistants.
Yeah, this sort of "AI" is still nothing more than a glorified “Chinese room” (https://www.wikiwand.com/en/articles/Chinese_room).
To illustrate:
For people who aren't in SV for whatever reason and haven't seen the really high pay associated with being there, SWE is just a standard job, often stressful, with lots of ongoing learning required. The pain/anxiety of being disrupted is even higher there, since having high disposable income to invest/save would have been less likely. Software to them would have been a job with pay comparable to other jobs in the area, often requiring you to be degree-qualified as well. Anecdotally, many I know got into it for the love, not the money.
Who would have thought the first job automated by AI would be software itself? Not manual labor, or self-driving cars. Other industries either seem to have hit dead ends, or had other barriers (regulation, closed knowledge, etc.) that make them harder to automate. SWEs have set an example for other industries: don't let AI in, or keep it in-house as long as possible. Be closed source, in other words. Seems ironic in hindsight.
So I'd say the AI race has started to plateau a bit recently.
For example, systems don't always work in the way they're documented to. How is an AI going to differentiate cases where there's a bug in a service vs a bug in its own code? How will an AI even learn that the bug exists in the first place? How will an AI differentiate between someone reporting a bug and a hacker attempting to break into a system?
The world is a complex place and without ACTUAL artificial intelligence we're going to need people to at least guide AI in these tricky situations.
My advice would be to get familiar with the new AI tools and how they fit into our usual workflows.
Others may disagree, but I don't think software engineers (at least the good ones) are going anywhere.
I actually wonder about this. Is it better to gain some relatively mediocre experience at lots of things? AI seems to be pretty good at lots of things.
Or would it be better to develop deep expertise in a few things? Areas where even smart AI with reasoning still can get tripped up.
Trying to broaden your base of expertise seems like it’s always a good idea, but when AI can slurp the whole internet in a single gulp, maybe it isn’t the best allocation of your limited human training cycles.
The real answer is either to pivot to a domain where computer use/coding skills are secondary (i.e. you need the knowledge but it isn't primary to the role) or move to an industry which isn't very exposed to AI, either due to natural protections (e.g. trades) or artificial ones (e.g. regulation/oligopolies colluding to prevent knowledge leaking to AI). May not be a popular comment on this platform - I would love to be wrong.
You assume nothing LLMs do is actually generalization. Once Field X is eaten, the labs will pivot and use the generalization skills developed to blow out Field Y for the next earnings report. At the current 10x/yr capability curve (read: 2 years -> 100x, 4 years -> 10,000x) I think I'll get screwed no matter what I choose - especially anything in proximity to computing, which makes pivoting to a field where coding is secondary fruitless. Regulation is a paper wall, and oligopolies will want to optimize as much as any firm. Trades are already saturating.
This is why I feel completely numb about this, I seriously think there is nothing I can do now. I just chose wrong because I was interested in the wrong thing.
if you rule out ASI, then that means progress is going to have to slow. consider that programming has been getting more and more automated continually since 1954. so put yourself in a position where what LLMs can do is a complement to what you can do. currently you still need to understand how software works in order to operate one of these things successfully.
This is a really accessible setup and is great for my current needs. Taking it to the next stage with agentic assistants is something I'm only just starting out on. I'm looking at WilmerAI [1] for routing AI workflows and Hoarder [2] to automatically ingest and categorize bookmarks, docs and RSS feed content into a local RAG.
https://www.reddit.com/r/LocalLLaMA/comments/1i1kz1c/sharing...
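For the curious, the "ingest into a local RAG" step is less magic than it sounds. Here's a minimal sketch of what that pipeline might look like (not Hoarder's actual internals - the feed URL, store path, and collection name are all placeholders) using feedparser, sentence-transformers, and chromadb:

    import feedparser
    import chromadb
    from sentence_transformers import SentenceTransformer

    FEEDS = ["https://example.com/feed.xml"]  # placeholder: your bookmark/RSS sources

    model = SentenceTransformer("all-MiniLM-L6-v2")         # small local embedding model
    client = chromadb.PersistentClient(path="./rag_store")  # on-disk vector store
    collection = client.get_or_create_collection("bookmarks")

    for url in FEEDS:
        feed = feedparser.parse(url)
        for i, entry in enumerate(feed.entries):
            text = f"{entry.title}\n{entry.get('summary', '')}"
            # Embed each item and store it alongside the raw text.
            collection.add(ids=[f"{url}#{i}"],
                           documents=[text],
                           embeddings=[model.encode(text).tolist()])

    # Later: pull the top matches to stuff into a local LLM's prompt.
    hits = collection.query(query_embeddings=[model.encode("agentic workflows").tolist()],
                            n_results=3)
    print(hits["documents"])

Categorization would be a separate pass (e.g. asking a local model to tag each item), but embed-store-retrieve is the core of it.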
However, rationally I can see where these models are evolving, and it leads me to think the software industry is on its own here, at least in the short/medium term. Code and math - and with math you typically need to know enough about the domain to know what abstract concept to ask about - so that really just leaves coding and software development. Even non-technical people understand the result they want from code.
You can see it in this announcement - it's all about "code, code, code" and how good they are at "code". This is not by accident. The models are becoming more specialised, and the techniques used to improve them beyond standard LLMs are not as generalizable across a wide variety of domains.
We engineers think AI automation is about difficulty and intelligence, but that's only partly true. It's also about whether the engineer knows what they want to automate, whether the training data is accessible and vast, and whether they even know WHAT data is applicable. The combination of deep domain skills and AI expertise is actually quite rare, which is why every AI CEO wants others to go "vertical" - they want others to do that legwork on their platforms. Even if it eventuates, it is rare enough that other fields, if they automate, will automate a LOT slower - not at the delta of a new model every few months.
We don't need AGI/ASI to impact the software industry; in my opinion we just need well-targeted models that get better at a decent rate. At some point they either hit a wall or surpass people - time will tell, BUT they are definitely targeting SWEs at this point.
You can definitely succumb to the fear. It sounds like you have. But courage isn't the absence of fear, it's what you do in the face of it. Are you going to let that fear paralyze you into inaction? Just not do anything other than post about being scared to the Internet? Or, having identified that fear, are you gonna wrestle it to the ground? Either retrain into something else entirely and start from near zero - something outside programming that you believe isn't about to be automated away - or dive in deeper: get a masters in AI, learn all of the math behind LLMs, and become an ML expert who trains the AI. That job's not going away; there are still a ton of techniques to be discovered/invented and plenty of niches to fill. Fine-tuning an existing LLM to be better at some niche is gonna be hot for a while.
You're lucky: you're in a position to go for a masters, even if you don't choose that route. Others with a similar doomer mindset have it worse, being too old or not in a position to even consider doing a masters.
Face the fear and look into the future with eyes wide open. Decide to go into chicken farming or nursing or firefighter or aircraft mechanic or mortician or locksmith or beekeeping or actuary.
As a new career I'd probably not choose SWE now. But if you've done 10 years already I'd ride it out, there is a good chance most of us will remain employed for many years to come.
This is what I mean by generalization skills. You need trillions of lines of code to RL a model into a good SWE right now, but as the models grow more capable you will probably need less and less. Eventually we may hit the point where a large corporation's internal data in any department is enough to RL into competence, and then it frankly doesn't matter for any field once individual conglomerates can start the flywheel.
This isn't an absurdity. A human can "RL" themselves into competence in a single semester of material - a laughably small amount of training data compared to an LLM.
e.g. if software costs 5x less to make, demand will go up more than 5x, as supply is highly constrained now. Lots of companies want better software but it costs too much.
That will create more jobs.
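Back-of-the-envelope version of that argument (the numbers here are made up purely to show the elasticity mechanism, not a forecast):

    # Toy elasticity math: every figure below is an assumption for illustration.
    cost_before = 100_000                # cost of one internal tool today
    cost_after = cost_before / 5         # AI makes it 5x cheaper

    budget = 1_000_000                   # what a company spends on software
    projects_before = budget / cost_before          # 10 projects get funded

    # Cheaper software makes previously shelved projects viable, so total
    # budgets tend to hold or grow rather than shrink proportionally.
    projects_after = (budget * 1.5) / cost_after    # assumed 1.5x budget growth
    print(projects_after / projects_before)         # 7.5x more projects

If demand is elastic like that, the total amount of software work demanded rises even as per-project effort collapses.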
There'll be more product management and human interaction and edge-case testing, and less typing. Although I think there'll also be a bunch of very technical jobs debugging things when the models fail.
So my advice is learn skills that help make software useful to people and businesses - from user research to product management. As well as engineering.
have you ever seen those experiments where they asked people to draw a picture of a bicycle, from memory? people’s pictures made no mechanical sense. often people’s understanding of software is like that — even more so because it’s abstract and many parts are invisible.
learning to clearly describe what software should do is a very artificial skill that, at a certain point, shades into software engineering itself.
once the ai gets smart enough that it only requires an intern to write the prompt and fix the few mistakes, development will cost next to nothing.
there is only so much demand for software development.
Those people with cross-domain knowledge in an industry will continue to have value for some time, able to contribute to domain discussions and execute better with the tech. As a result, I've always thought the "engineering" part of software was more valuable than the CS/Leetcode part of the industry. As a lecturer told me many decades ago in an SE course: "you will know more about their business, in greater detail, by the time you are finished than they even do".