I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.
Even if code is the right medium for specifying a program, transformers act as an automated interface between that medium and natural language. Modern high-end transformers have no problem producing code, while benefiting from a wealth of knowledge that far surpasses any individual.
> Most people selling AI are predicting the obsolescence of human authors.
It's entirely possible that we do become obsolete for a wide variety of programming domains. That's simply a reality, just as weavers saw massive layoffs in the wake of the automated loom, or scribes lost work after the printing press, or human calculators became pointless after high-precision calculators became commonplace.
This replacement might not happen tomorrow, or next year, or even in the next decade, but it's clear that we can build capable models. What remains is R&D around hallucinations, accuracy, affordability, and so on, as well as tooling and infrastructure built around this new paradigm. But the cat's out of the bag: we are not returning to a way of working that doesn't involve intelligent automation. Programming is literally about automating things, and transformers are a massive step forward.
That doesn't really mean much for you personally, though; you can still be as involved in your programming work as you'd like. Whether you can find paid, professional work depends on your domain, skill level, and compensation preferences, but you can always program for fun or for personal projects, and decide how much or how little automation you use. I will recommend, though, that you take these tools seriously and aren't too dismissive, or you could find yourself left behind in a rapidly evolving landscape, much as with the advent of personal computing and the internet.
At that point it's less "programmers will be out of work" and more "most work may cease to exist".
- The cost of failure is low: most domains (physical, compliance, etc.) don't have this luxury; where the cost of failure is high, the validator has more value.
- The cost to retry or run multiple attempts is low: you can perform many experiments at once and pick the one with the best results. If the AI hallucinates or generates something that doesn't work, the agent/tool can feed that error back and make multiple high-probability attempts until one passes. Things like unit tests and compiler errors make this easier.
- There are many right answers to a problem: good-enough software is good enough for many domains (e.g. a CRUD web app). Not all software is like this, but many domains in software are.
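The retry point above can be sketched as a toy generate-validate loop. This is a minimal illustration, not a real agent: `generate_candidate` is a hypothetical stand-in for a model call that sometimes "hallucinates" a buggy function, and plain unit tests play the role of the compiler/CI gate that makes cheap retries viable.

```python
import random

def generate_candidate(rng):
    # Hypothetical stand-in for an LLM call: half the time it emits an
    # off-by-one bug, mimicking a hallucinated implementation.
    if rng.random() < 0.5:
        return lambda a, b: a + b + 1  # buggy candidate
    return lambda a, b: a + b          # correct candidate

def passes_tests(fn):
    # Cheap validator: unit tests stand in for compiler errors / CI checks.
    cases = [((1, 2), 3), ((0, 0), 0), ((-5, 5), 0)]
    return all(fn(*args) == expected for args, expected in cases)

def retry_until_valid(rng, max_attempts=20):
    # Because a retry costs almost nothing, failures are simply discarded
    # and we sample again until a candidate passes or the budget runs out.
    for attempt in range(1, max_attempts + 1):
        candidate = generate_candidate(rng)
        if passes_tests(candidate):
            return candidate, attempt
    raise RuntimeError("no passing candidate within budget")

fn, attempts = retry_until_valid(random.Random(42))
print(fn(2, 3), attempts)
```

The same loop would be prohibitively expensive if each "attempt" were a physical build or a compliance filing, which is the asymmetry the list above is pointing at.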
What makes something hard to disrupt won't be intellectual difficulty (e.g. software being harder than compliance analysis, as a made-up example); it will be other bottlenecks, like the physical world (energy, material costs, etc.) or regulation (where the job isn't entirely about utility/output).
This is not entirely sensible: some code touches the physical/compliance world. Airports, airplanes, hospitals, cranes, water systems, and the military all use code to different degrees. It's true that they can perhaps afford to run experiments on landing pages, but I don't think they can simply disrupt their workers and clients on a regular basis.
Also note that, unlike physical domains where it's expensive to "tear down", until you commit and deploy (i.e. while the code is being worked on) you can try, iterate, and refine via your IDE, shell, or whatever else; it's just text files, after all. In the end you are accountable for the final verification step before it is published. I never said we don't need a verification step, or a gate before it goes to production systems. I'm saying it's easier to throw away "hallucinations" that don't work, and you can work around gaps in the model with iterations, retries, and multiple versions until the user is happy with the result.
Conversely, I couldn't have an AI build a house, decide I don't like it, have it demolished, build a slightly different one, and so on until I say "I'm happy with this product, please proceed". The sheer amount of wasted resources and time would be enormous. I can simulate, generate plans, and so on, maybe with AI, but nothing beats seeing the physical thing for some products, especially when there isn't the budget or resources to retry and change course.
TL;DR: the greater the cost of iteration and failure, the less you can use iteration to cover up gaps in your statistical model (i.e. tail risks are more likely to bite and harder to mitigate).