Was good while it lasted though.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are, they only need to be good enough.
There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, and reliability I don't need to second-guess every time.
I might be wrong, but those two requirements don't seem compatible with the current approach and hardware limitations.
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
A grandmaster-level chess AI is not better at driving a car than my calculator from the 90s.
I'm arguing that the category of the problem matters a lot.
Chess is, compared to self-driving cars and (in my opinion) programming, very limited in its rules, the fixed board size and the lack of "fog of war".
Your stance was the widely held one, not just on Hacker News but also among the leading proponents of AI, when ChatGPT first launched. A lot of people thought the hallucination problem was something that simply couldn't be overcome, and that LLMs were nothing but glorified stochastic parrots.
Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.
Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true if you were to go six months back.
Well yes, now we know they make kids kill themselves.
I think we've all fooled ourselves like this beetle:
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
For thousands of years, up until 2020, anything that conversed with us could safely be assumed to be another sentient/intelligent being.
Now we have something that does that, but is neither sentient nor intelligent, just a (complex) deterministic mechanism.