I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.
Even if code is the right medium for specifying a program, transformers act as an automated interface between that medium and natural language. Modern high-end transformers have no problem producing code, while benefiting from a wealth of knowledge that far surpasses any individual.
> Most people selling AI are predicting the obsolescence of human authors.
It's entirely possible that we do become obsolete for a wide variety of programming domains. That's simply a reality, just as weavers saw massive layoffs in the wake of the automated loom, or scribes lost work after the printing press, or human calculators became pointless after high-precision calculators became commonplace.
This replacement might not happen tomorrow, or next year, or even in the next decade, but it's clear that we are able to build capable models. What remains to be done is R&D around things like hallucinations, accuracy, affordability, etc. as well as tooling and infrastructure built around this new paradigm. But the cat's out of the bag, and we are not returning to a paradigm that doesn't involve intelligent automation in our daily work; programming is literally about automating things and transformers are a massive forward step.
That doesn't really mean anything for you personally, though; you can still be as involved in your programming work as you'd like. Whether you can find paid, professional work depends on your domain, skill level and compensation preferences, but you can always program for fun or personal projects and decide how much or how little automation you use. I will recommend, though, that you take these tools seriously and aren't too dismissive, or you could find yourself left behind in a rapidly evolving landscape, much as with the advent of personal computing and the internet.
Perhaps LLMs can be modified to step outside the circle, but as of today, it would be akin to monkeys typing.
But once you add repo context, domain knowledge etc... programming languages are far too verbose.
It will also still happily turn your whole codebase into garbage rather than undo the first thing it tried to try something else. I've yet to see one that can back itself out of a logical corner.
See, this is the kind of conception of a programmer I find completely befuddling. Programming isn't like those jobs at all. There's a reason people who are overly attached to code and see their job as "writing code" are pejoratively called "code monkeys." Did CAD kill the engineer? No. It didn't. The idea is ridiculous.
I'm sure you understand the analogy was about automation and reduction in workforce, and that each of these professions have both commonalities and differences. You should assume good faith and interpret comments on Hacker News in the best reasonable light.
> There's a reason people who are overly attached to code and see their job as "writing code" are pejoratively called "code monkeys."
Strange. My experience is that "code monkeys" don't give a crap about the quality of their code or its impact with regards to the product, which is why they remain programmers and don't move on to roles which incorporate management or product responsibilities. Actually, the people who are "overly attached to code" tend to be computer scientists who are deeply interested in computation and its expression.
> Did CAD kill the engineer? No. It didn't. The idea is ridiculous.
Of course not. It led to a reduction in draftsmen, as now draftsmen can work more quickly and engineers can take on work that used to be done by draftsmen. The US Bureau of Labor Statistics states[0]:
> Expected employment decreases will be driven by the use of computer-aided design (CAD) and building information modeling (BIM) technologies. These technologies increase drafter productivity and allow engineers and architects to perform many tasks that used to be done by drafters.
Similarly, the other professions I mentioned were absorbed into higher-level professions. It has been stated many times that the future focus of software engineers will be less about programming and more about product design and management. I saw this a decade ago at the start of my professional career, and from the start I have been product- and design-focused, using code as a tool to get things done. That is not to say that I don't care deeply about computer science; I find coding and product development each incredibly creatively rewarding, and a comprehensive understanding of both unlocks an entirely new way to see and act on the world.
[0] https://www.bls.gov/ooh/architecture-and-engineering/drafter...
A well-designed agent can absolutely roll back code if given proper context and access to tooling such as git. Even flushing context/message history becomes viable for agents if the functionality is exposed to them.
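As a rough illustration (my sketch, not any particular framework's API), here is one way that kind of tooling might be exposed to an agent: a rollback function the model can call when it wants to abandon an attempt, plus an illustrative tool description whose exact schema depends on the framework you use; the names here are assumptions.

```python
import subprocess

def git_rollback(repo_path: str, commit: str = "HEAD") -> str:
    """Discard the agent's edits by hard-resetting the working tree to a known-good commit."""
    result = subprocess.run(
        ["git", "-C", repo_path, "reset", "--hard", commit],
        capture_output=True, text=True, check=False,
    )
    return result.stdout + result.stderr

# Illustrative tool description the agent loop would hand to the model;
# the schema shape is a placeholder, not any vendor's format.
GIT_ROLLBACK_TOOL = {
    "name": "git_rollback",
    "description": "Throw away the current attempt and restore the working tree "
                   "to a clean state before trying a different approach.",
    "parameters": {"repo_path": "string", "commit": "string, defaults to HEAD"},
}
```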
He couldn't land a job that paid more than minimum wage after that.
At that point it's less "programmers will be out of work" and more "most work may cease to exist".
But the second you start iterating with them... the codebase goes to shit, because they never delete code. Never. They always bolt new shit on to solve any problem, even when there's an incredibly obvious path to achieve the same thing in a much more maintainable way with what already exists.
Show me a language model that can turn rube goldberg code into good readable code, and I'll suddenly become very interested in them. Until then, I remain a hater, because they only seem capable of the opposite :)
This is a phenomenon that seems to be experienced more and more frequently as the industrial revolution continues... The craft of drafting goes back to 2000 B.C.[0] and while techniques and requirements gradually changed over thousands of years, the digital revolution suddenly changed a ton of things all at once in drafting and many other crafts. This created a literacy gap many never recovered from.
I wonder if we'll see a similar split here with engineers and developers regarding agentic and LLM literacy.
> Show me a language model that can turn rube goldberg code into good readable code, and I'll suddenly become very interested in them.
They can already do this. If you have any specific code examples in mind, I can experiment for you and return my conclusions if it means you'll earnestly try out a modern agentic workflow.
I doubt it. I've experimented with most of them extensively, and worked with people who use them. The atrocious results speak for themselves.
> They can already do this. If you have any specific code examples in mind
Sure. The bluetooth drivers in the Linux kernel contain an enormous amount of shoddy duplicated code that has amalgamated over the past decade with little oversight: https://code.wbinvd.org/cgit/linux/tree/drivers/bluetooth
An LLM which was capable of refactoring all the duplicated logic into the common core and restructuring all the drivers to be simpler would be very very useful for me. It ought to be able to remove a few thousand lines of code there.
It needs to do it iteratively, in a string of small patches that I can review and prove to myself are correct. If it spits out a giant single patch, that's worse than nothing, because I do systems work that actually has to be 100% correct, and I can't trust it.
Show me what you can make it do :)
What academics are you rubbing shoulders with? Every single computer scientist I have ever met has projects where every increment in the major version goes like:
"I was really happy with my experimental kernel, but then I thought it might be nice to have hotpatching, so I abandoned the old codebase and started over from scratch."
The more novel and cutting edge the work you do is, the more harmful legacy code becomes.
Will they fail to do it in practice once they poison their own context hallucinating libraries or functions that don’t exist? Absolutely.
That’s the tricky part of working with agents.
I’m getting maybe a 10-20% productivity boost using AI on mature codebases. Nice but not life changing.
I have a similar view of the future as you do, but I'm just curious what the quoted text here means in practice. Did you go into product management instead of software engineering, for example?
They didn’t just see layoffs. The constant wars with Napoleon and the War of 1812 caused significant economic instability, alongside highly variable capital investment in textile production at the time. They were looking at huge wealth disparity, and losing their jobs for most meant losing everything.
What many Luddite supporters were asking for in many parts of England was: better working conditions, a minimum wage, the abolition of child labour, etc. Sabotage was a means to make such demands of a class that held almost all of the power.
Many of those protestors were shot. Those who survived and were laid off were forced into workhouses.
The capitalists won and got to write the history and the myths. They made it about the technology and not the conditions. They told us that the displaced workers found new, better jobs elsewhere.
Programmers, while part of the labour class, have so far enjoyed a much better bargaining position and have been compensated in kind. Many of us also complain about the quality of output from AI, just as the textile workers complained about the poor quality of the lace. Fortunately the workhouses were shut down, although poor-quality code tends to result in people losing their life’s savings, having their identities stolen, etc. Higher stakes than cheap lace.
History is not repeating but it sure does rhyme.
But I can't quite articulate why I believe LLMs never step outside the circle, given that they are seeded with some random noise via temperature. I could just be wrong.
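For what it's worth, here is a toy sketch of what that temperature-injected randomness looks like mechanically (toy logits and plain numpy, no real model, and no claim that this alone lets a model step outside the circle):

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> int:
    scaled = logits / max(temperature, 1e-8)      # T < 1 sharpens the distribution, T > 1 flattens it
    probs = np.exp(scaled - scaled.max())         # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))  # the random draw happens here

logits = np.array([2.0, 1.5, 0.3, -1.0])          # pretend next-token scores
rng = np.random.default_rng(0)
print([sample_token(logits, temperature=0.8, rng=rng) for _ in range(5)])
```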
That's not true in my experience. Several times now I've given Claude Code a too-challenging task, and after trying repeatedly it eventually gave up, removing all the previous work on that subject and choosing an easier solution instead.
...unfortunately that was not at all what I wanted, lol. I had told it "implement X feature with Y library", i.e. specifically the implementation I wanted to make progress towards, and then after a while it just decided that was too difficult and did it differently.
- The cost of failure is low: most other domains (physical, compliance, etc.) don't have this luxury; there the cost of failure is high, so the validator has more value.
- The cost to retry/do multiple simulations is low: you can perform many experiments at once and pick the one with the best results. If the AI hallucinates or generates something that doesn't work, the agent/tool can take that error and retry with multiple high-probability attempts until it passes. Things like unit tests, compiler errors, etc. make this easier (see the sketch below).
- There are many right answers to a problem. Good enough software is good enough for many domains (e.g. a CRUD web app). Not all software is like this but many domains in software are.
What makes something hard to disrupt won't be intellectual difficulty (e.g. "software is harder than compliance analysis", as a made-up example); it will be other bottlenecks like the physical world (energy, material costs, etc.), regulation (the job isn't entirely about utility/output), and so on.
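To make the retry point concrete, here is a minimal sketch of that generate-validate-retry loop, assuming a pytest-style test suite as the validator and a hypothetical `generate_candidate` call standing in for the model:

```python
import shutil
import subprocess
import tempfile

def tests_pass(workdir: str) -> bool:
    """Run the project's test suite; exit code 0 means this candidate survives."""
    return subprocess.run(["pytest", "-q"], cwd=workdir).returncode == 0

def first_passing_candidate(repo: str, generate_candidate, attempts: int = 5):
    """Try several model-generated patches and keep the first one the tests accept."""
    for i in range(attempts):
        workdir = tempfile.mkdtemp(prefix=f"candidate-{i}-")
        shutil.copytree(repo, workdir, dirs_exist_ok=True)
        generate_candidate(workdir, attempt=i)  # hypothetical: the model writes its patch into the copy
        if tests_pass(workdir):
            return workdir                      # good enough; hand this one to a human for review
        shutil.rmtree(workdir)                  # broken builds, hallucinated APIs, etc. just get discarded
    return None
```

The same loop obviously doesn't transfer to domains where each "attempt" costs real materials or real risk, which is the point of the list above.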
It is not a reality, because it has not happened. In the real world, it has not happened.
There is no reason to believe that the current rate of progress will continue. Intelligence is not like the weaving machines. A software engineer is not a human calculator.
In my case, I am referring to a deep appreciation of code itself, not any particular piece of code.
The issue with natural language isn’t that it’s impossible to be precise; it’s that most people aren’t, or they are precise about what they want it to do for them but not about what the computer needs to do to make it happen. This leads to a lot of guessing by engineers as they try to translate the business requirements into code. Now the LLM is doing that guessing, often with less context about the broader business objectives and less understanding of the people writing those requirements.
No.
Some were concerned that the output of compilers couldn’t match the quality of what could be done by a competent programmer at the time. That was true for a time. Then compilers got better.
Nobody was concerned that compilers were going to be used by capitalists to lay them off and seize the means of producing programs by turning it into property.
Treating LLMs as a scaffolding tool yields better results, at least for me personally. I just brain-dump what I'm thinking of building and ask it to give me models and basic controllers using said models; then I just worry about the views and business logic.
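To illustrate (my own toy example, with names invented for the sketch), the kind of scaffold you might ask for is a bare model plus a thin CRUD controller over it, with the views and business logic left to write by hand:

```python
from dataclasses import dataclass, field

@dataclass
class Note:                                    # a "model" the LLM would flesh out from the brain dump
    id: int
    title: str
    body: str = ""
    tags: list[str] = field(default_factory=list)

class NoteController:                          # a basic controller over the model: CRUD only,
    def __init__(self) -> None:                # no views, no business rules
        self._notes: dict[int, Note] = {}
        self._next_id = 1

    def create(self, title: str, body: str = "") -> Note:
        note = Note(self._next_id, title, body)
        self._notes[note.id] = note
        self._next_id += 1
        return note

    def get(self, note_id: int) -> Note | None:
        return self._notes.get(note_id)

    def delete(self, note_id: int) -> None:
        self._notes.pop(note_id, None)
```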
10-20% productivity boosts have been happening regularly over the course of my career. They are normally either squandered by inefficient processes, or we start building more complex systems.
When Rails was released, for certain types of projects, you could move 3 or 4x faster almost overnight.
This is not entirely sensible: some code touches the physical/compliance world. Airports, airplanes, hospitals, cranes, water systems, and the military all use code to different degrees. It's true that they can perhaps afford to run experiments on landing pages, but I don't think they can simply disrupt their workers and clients on a regular basis.
Also note that, unlike in physical domains where it's expensive to "tear down", until you commit and deploy (i.e. while the code is being worked on) you can try/iterate/refine via your IDE, shell, whatever. It's just text files, after all; in the end you are accountable for the final verification step before it is published. I never said we don't need a verification step, or a gate before it goes to production systems. I'm saying it's easier to throw away "hallucinations" that don't work, and you can work around gaps in the model with iterations/retries/multiple versions until the user is happy with it.
Conversely, I couldn't have an AI build a house, decide I don't like it, have it demolish the house and build a slightly different one, and so on until I say "I'm happy with this product, please proceed". The sheer amount of resources and time wasted in doing so would be enormous. With AI I can maybe simulate, generate plans, etc., but nothing beats seeing the physical thing for some products, especially when there isn't the budget or resources to retry/change.
TL;DR: the greater the cost of iteration/failure, the less you can use iteration to cover up gaps in your statistical model (i.e. tail risks are more likely to bite and are harder to mitigate).