
358 points andrewstetsenko | 3 comments
agentultra ◴[] No.44360677[source]
… because programming languages are the right level of precision for specifying a program you want. Natural language isn’t it. Of course you need to review and edit what it generates. Of course it’s often easier to make the change yourself instead of describing how to make the change.

I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.

replies(6): >>44360934 #>>44361057 #>>44361209 #>>44361269 #>>44364351 #>>44366148 #
soulofmischief ◴[] No.44360934[source]
Transformers can be used to automate testing, create deeper and broader specification, accelerate greenfield projects, rapidly and precisely expand a developer's knowledge as needed, navigate unfamiliar APIs without relying on reference, build out initial features, do code review and so much more.

Even if code is the right medium for specifying a program, transformers act as an automated interface between that medium and natural language. Modern high-end transformers have no problem producing code, while benefiting from a wealth of knowledge that far surpasses any individual.

> Most people selling AI are predicting the obsolescence of human authors.

It's entirely possible that we do become obsolete for a wide variety of programming domains. That's simply a reality, just as weavers saw massive layoffs in the wake of the automated loom, or scribes lost work after the printing press, or human calculators became pointless after high-precision calculators became commonplace.

This replacement might not happen tomorrow, or next year, or even in the next decade, but it's clear that we are able to build capable models. What remains to be done is R&D around things like hallucinations, accuracy, affordability, etc. as well as tooling and infrastructure built around this new paradigm. But the cat's out of the bag, and we are not returning to a paradigm that doesn't involve intelligent automation in our daily work; programming is literally about automating things and transformers are a massive forward step.

That doesn't really mean much in itself, though; you can still be as involved in your programming work as you'd like. Whether you can find paid, professional work depends on your domain, skill level and compensation preferences, but you can always program for fun or on personal projects, and decide how much or how little automation you use. I will recommend, though, that you take these tools seriously and aren't too dismissive, or you could find yourself left behind in a rapidly evolving landscape, much as with the advent of personal computing and the internet.

replies(5): >>44361398 #>>44361531 #>>44361698 #>>44362804 #>>44363434 #
nitwit005 ◴[] No.44361698[source]
I don't disagree exactly, but the AI that fully replaces all the programmers is essentially a superhuman one. It's matching human output, but will obviously be able to do some tasks like calculations much faster, and won't need a lunch break.

At that point it's less "programmers will be out of work" than "most work may cease to exist".

replies(1): >>44363250 #
1. throw234234234 ◴[] No.44363250{3}[source]
Not sure about this. Coding has some unique characteristics that may make it easier to automate, even if from a human perspective it requires real skill:

- The cost of failure is low: most domains (physical, compliance, etc.) don't have this luxury; where the cost of failure is high, the validator has more value.

- The cost to retry/do multiple simulations is low: you can perform many experiments at once and pick the one with the best results. If the AI hallucinates, or generates something that doesn't work, the agent/tool can take that error and make multiple high-probability tries until it passes. Things like unit tests, compiler errors, etc. make this easier.

- There are many right answers to a problem: good-enough software is good enough for many domains (e.g. a CRUD web app). Not all software is like this, but many domains are.
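A minimal sketch of that retry loop, with a fixed list of candidate snippets standing in for model samples (a real agent would call an LLM and feed the failure back into the prompt; the function and variable names here are hypothetical, not from any particular tool):

```python
def passes_checks(src: str) -> bool:
    """Compile the candidate and run a tiny unit test against it."""
    try:
        namespace = {}
        exec(compile(src, "<candidate>", "exec"), namespace)
        return namespace["add"](2, 3) == 5  # the unit test
    except Exception:
        return False  # syntax error or wrong behavior: reject and retry

def first_passing(candidates):
    """Return the first candidate that survives the cheap checks, if any."""
    for src in candidates:
        if passes_checks(src):
            return src
    return None

# Two hallucinated candidates, then a correct one.
candidates = [
    "def add(a, b): return a - b",   # wrong behavior: fails the unit test
    "def add(a, b) return a + b",    # syntax error: fails to compile
    "def add(a, b): return a + b",   # passes both checks
]

best = first_passing(candidates)
```

Because compiling and testing a snippet costs almost nothing, rejecting the first two candidates and keeping the third is cheap, which is exactly the asymmetry the bullet points describe.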

What makes something hard to disrupt won't be intellectual difficulty (e.g. software being harder than compliance analysis, as a made-up example); it will be other bottlenecks like the physical world (energy, material costs, etc.) and regulation (the job isn't entirely about utility/output).

replies(1): >>44374840 #
2. weatherlite ◴[] No.44374840[source]
> The cost of failure is low: Most domains (physical, compliance, etc) don't have this luxury where the cost of failure is high and so the validator has more value.

This is not entirely sensible: some code touches the physical/compliance world. Airports, airplanes, hospitals, cranes, water systems, the military, they all use code to different degrees. It's true that they can perhaps afford to run experiments on landing pages, but I don't think they can simply disrupt their workers and clients on a regular basis.

replies(1): >>44384270 #
3. throw234234234 ◴[] No.44384270[source]
I did say "not all software is like this, but many domains are". So I agree with you.

Also note that, unlike physical domains where tearing down is expensive, until you commit and deploy (i.e. while the code is being worked on) you can try/iterate/refine via your IDE, shell, whatever. It's just text files, after all; in the end you are accountable for the final verification step before it is published. I never said we don't need a verification step, or a gate before changes reach production systems. I'm saying it's easier to throw away "hallucinations" that don't work, and you can work around gaps in the model with iterations/retries/multiple versions until the user is happy with it.

Conversely, I couldn't have an AI build a house, decide I don't like it, have it demolished, have a slightly different one built, and so on until I say "I'm happy with this product, please proceed". The sheer amount of wasted resources and time would be enormous. I can simulate, generate plans, etc. with AI, but nothing beats seeing the "physical thing" for some products, especially when there isn't the budget/resources to "retry/change".

TL;DR: the greater the cost of iteration/failure, the less likely you can use iteration to cover up gaps in your statistical model (i.e. tail risks are more likely to bite and are harder to mitigate).