I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.
The issue with natural language isn’t that it’s impossible to be precise; it’s that most people aren’t, or they’re precise about what they want it to do for them but not about what the computer needs to do to make it happen. This leads to a lot of guessing by engineers as they try to translate business requirements into code. Now the LLM is doing that guessing, often with less context about the broader business objectives and less understanding of the people writing those requirements.
No.
Some were concerned that compiler output couldn’t match the quality of what a competent programmer could write by hand. That was true for a while. Then compilers got better.
Nobody was concerned that compilers were going to be used by capitalists to lay programmers off and seize the means of producing programs by turning them into property.