
358 points | andrewstetsenko | 1 comment
boshalfoshal No.44361280
Imo this is a misunderstanding of what AI companies want AI tools to be and where the industry is heading in the near future. The endgame for many companies is SWE automation, not augmentation.

To expand -

1. Models "reason" and can increasingly generate code from natural language. It's not just fancy autocomplete; it's like having an intern-to-mid-level engineer at your beck and call to implement a feature. Natural language is generally sufficient when I interact with other engineers, so why is it not sufficient for an AI, which (in the limit) approaches an actual human engineer?

2. Business-wise, companies will not settle for augmentation. Software companies spend enormous sums on headcount; it's probably the top or second line item for most mid-sized companies. The endgame for leadership at these companies is to do more with less, and that necessitates automation (in addition to augmenting the remaining roles).

People need to stop thinking of LLMs as "autocomplete on steroids" and start thinking of them as a "24/7 junior SWE who doesn't need to eat or sleep and can do small tasks at 90% accuracy given a reasonable spec." Yeah, you'll need to edit their code once in a while, but they also keep getting better and cost less than an actual person.

replies(5): >>44361328, >>44361830, >>44362902, >>44362926, >>44375104
1. bcrosby95 No.44362902
This sounds exactly like the late '90s all over again. All the programming jobs were going to be outsourced to other countries and you'd be lucky to make minimum wage.

And then the last 25 years happened.

Now people are predicting the same thing will happen, but with AI.

The problem then, as now, is not that coding is hard; it's that people don't know what the hell they actually want.