Was good while it lasted though.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are; they only need to be good enough.
There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning and reliability I don't need to second-guess every time.
I might be wrong, but those two requirements don't seem compatible with the current approach and hardware limitations.
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
A grandmaster-level chess-playing AI is no better at driving a car than my calculator from the 90s.
I'm arguing that the category of the problem matters a lot.
Chess is, compared to self-driving cars and (in my opinion) programming, very limited: fixed rules, a fixed board size, and no "fog of war".
Chess was once thought to require general intelligence. Then computing power became cheap enough that using raw compute made computers better than humans. Computers didn't play chess in a very human-like way and there were a few years where you could still beat a computer by playing to its weaknesses. Now you'll never beat a computer at chess ever again.
Similarly, many software engineers think that writing software requires general intelligence. Then computing power became cheap enough that training LLMs became possible. Sure, LLMs don't think in a very human-like way: there are tasks that are trivial for humans where LLMs struggle, but LLMs also outcompete the average software engineer at many other tasks. It's still possible to win against an LLM in an intelligence-off by playing to its weaknesses.
It doesn't matter that computers don't have general intelligence when they use raw compute to crush you in chess. And it won't matter that computers don't have general intelligence when they use raw compute to crush you at programming.
The burden of proof that software development requires general intelligence is on you. I think the stuff most software engineers do daily doesn't require it. And I think LLMs will get continuously better at it.
I certainly don't feel comfortable betting my professional future on software development for the coming decades.