
358 points andrewstetsenko | 1 comment | source
agentultra ◴[] No.44360677[source]
… because programming languages are the right level of precision for specifying a program you want. Natural language isn’t it. Of course you need to review and edit what it generates. Of course it’s often easier to make the change yourself instead of describing how to make the change.

I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.

replies(6): >>44360934 #>>44361057 #>>44361209 #>>44361269 #>>44364351 #>>44366148 #
JoeOfTexas ◴[] No.44361057[source]
Doesn't AI have diminishing returns on its pseudo-creativity? Put everything an LLM can output into a circle. If all new training input comes from other LLMs' output, the circle never grows. Humans constantly step outside the circle.

Perhaps an LLM could be modified to step outside the circle, but as of today that would be akin to monkeys typing.
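
For illustration, a toy sketch of that intuition (the "model" below is just a token-frequency counter, nothing like a real LLM): if each generation is trained only on samples from the previous one, the set of tokens it can produce never grows, and rare tokens tend to vanish.

    # Toy sketch (hypothetical): "training" is counting token frequencies,
    # "generating" is sampling from those counts. The support (the circle)
    # can only shrink or stay the same across generations.
    import random
    from collections import Counter

    def train(samples):
        return Counter(samples)

    def generate(model, n):
        tokens = list(model.keys())
        weights = list(model.values())
        return random.choices(tokens, weights=weights, k=n)

    human_corpus = list("abcdefg") * 10 + list("xyz")   # humans also supply rare tokens
    model = train(human_corpus)
    for generation in range(5):
        synthetic = generate(model, 200)   # model output becomes the next training set
        model = train(synthetic)
        print(generation, sorted(model))   # the circle never gains a new token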

replies(1): >>44361660 #
svachalek ◴[] No.44361660[source]
I think you're either imagining the circle too small or overestimating how often humans step outside it. The typical programming job involves lots and lots of work, yet none of it creates wholly original computer science. Current LLMs can customize well-known UI/IO/CRUD/REST patterns with little difficulty, and those patterns make up the vast majority of commercial software development.
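
For concreteness, the kind of well-known pattern meant here, sketched as a minimal hypothetical in-memory CRUD resource (Flask and the route names are illustrative assumptions, not anything from the thread):

    # Minimal illustrative CRUD/REST sketch: an in-memory "items" resource.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    items = {}      # id -> item dict
    next_id = 1

    @app.route("/items", methods=["GET", "POST"])
    def items_collection():
        global next_id
        if request.method == "POST":
            item = request.get_json()
            item["id"] = next_id
            items[next_id] = item
            next_id += 1
            return jsonify(item), 201
        return jsonify(list(items.values()))

    @app.route("/items/<int:item_id>", methods=["GET", "DELETE"])
    def items_member(item_id):
        if item_id not in items:
            return jsonify({"error": "not found"}), 404
        if request.method == "DELETE":
            del items[item_id]
            return "", 204
        return jsonify(items[item_id])

    if __name__ == "__main__":
        app.run()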
replies(2): >>44362542 #>>44362835 #
crackalamoo ◴[] No.44362835[source]
I agree humans only rarely step outside the circle, but I do have this intuition that some people sometimes do, whereas LLMs never do. This distinction seems important over long time horizons when thinking about LLM vs human work.

But I can't quite articulate why I believe LLMs never step outside the circle, given that they are seeded with some random noise via temperature sampling. I could just be wrong.
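
For what it's worth, a minimal sketch of what "random noise via temperature" means in practice, assuming standard softmax sampling (the scores are made up): the noise reshuffles probability among tokens the model already considers, rather than producing anything outside its distribution.

    # Illustrative only: temperature rescales token scores before sampling.
    # Low temperature concentrates on the top token; high temperature
    # flattens the distribution. Either way, sampling draws from the
    # model's own vocabulary and learned probabilities.
    import math
    import random

    def sample_with_temperature(logits, temperature=1.0):
        scaled = [x / temperature for x in logits]
        m = max(scaled)                                # numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    logits = [2.0, 1.0, 0.1]   # made-up scores for three tokens
    print(sample_with_temperature(logits, temperature=0.2))  # almost always token 0
    print(sample_with_temperature(logits, temperature=2.0))  # more varied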