
358 points andrewstetsenko | 2 comments | source
agentultra ◴[] No.44360677[source]
… because programming languages are the right level of precision for specifying a program you want. Natural language isn’t it. Of course you need to review and edit what it generates. Of course it’s often easier to make the change yourself instead of describing how to make the change.

I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.

replies(6): >>44360934 #>>44361057 #>>44361209 #>>44361269 #>>44364351 #>>44366148 #
soulofmischief ◴[] No.44360934[source]
Transformers can be used to automate testing, create deeper and broader specification, accelerate greenfield projects, rapidly and precisely expand a developer's knowledge as needed, navigate unfamiliar APIs without relying on reference, build out initial features, do code review and so much more.

Even if code is the right medium for specifying a program, transformers act as an automated interface between that medium and natural language. Modern high-end transformers have no problem producing code, while benefiting from a wealth of knowledge that far surpasses any individual.

> Most people selling AI are predicting the obsolescence of human authors.

It's entirely possible that we do become obsolete for a wide variety of programming domains. That's simply a reality, just as weavers saw massive layoffs in the wake of the automated loom, or scribes lost work after the printing press, or human calculators became pointless after high-precision calculators became commonplace.

This replacement might not happen tomorrow, or next year, or even in the next decade, but it's clear that we are able to build capable models. What remains is R&D around things like hallucinations, accuracy, and affordability, as well as tooling and infrastructure built around this new paradigm. But the cat's out of the bag, and we are not returning to a paradigm that doesn't involve intelligent automation in our daily work; programming is literally about automating things, and transformers are a massive step forward.

That doesn't really mean anything, though; you can still be as involved in your programming work as you'd like. Whether you can find paid, professional work depends on your domain, skill level, and compensation preferences, but you can always program for fun or personal projects and decide how much or how little automation you use. I do recommend that you take these tools seriously and not be too dismissive, or you could find yourself left behind in a rapidly evolving landscape, much as with the advent of personal computing and the internet.

replies(5): >>44361398 #>>44361531 #>>44361698 #>>44362804 #>>44363434 #
interstice ◴[] No.44361398[source]
> Modern high-end transformers have no problem producing code, while benefiting from a wealth of knowledge that far surpasses any individual.

It will also still happily turn your whole codebase into garbage rather than undo its first attempt and try something else. I've yet to see one that can back itself out of a logical corner.

replies(4): >>44361560 #>>44361615 #>>44361738 #>>44366056 #
1. soulofmischief ◴[] No.44361615[source]
That's a combination of current context limitations and a lack of quality tooling and prompting.

A well-designed agent can absolutely roll back code if given proper context and access to tooling such as git. Even flushing context/message history becomes viable for agents if the functionality is exposed to them.

replies(1): >>44362513 #
2. jashmatthews ◴[] No.44362513[source]
Can we demonstrate them doing that? Absolutely.

Will they fail to do it in practice once they poison their own context by hallucinating libraries or functions that don't exist? Absolutely.

That’s the tricky part of working with agents.