We'll reap the productivity benefits from this new tool, create more work for ourselves, output will stabilize at a new level, and salaries will stagnate again - as always happens.
It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitation that would prevent further improvement, and no reason why it would stop at human-level performance either.
How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive of or reason about concepts? Fundamentally, it is only trained to sound like a human.
An LLM base model isn't trained for abstract thinking, but it ends up developing abstract thinking internally anyway - because that's the easiest way to mimic the breadth and depth of the training data. All LLMs operate in abstractions, using the same kind of informal reasoning humans do. Even the mistakes they make are amusingly humanlike.
There's no part of an LLM that's called a "mind", but it has a "forward pass", which serves a similar function. An LLM reasons in small slices - lifting its input text into a highly abstract internal representation, then reducing it back down to next-token logits, one token at a time.
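To make that loop concrete, here's a minimal sketch in Python, assuming the Hugging Face transformers library; the model choice ("gpt2") and greedy argmax decoding are purely illustrative, not how any particular production system works:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative small model; any causal LM would show the same loop.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The forward pass", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        # One forward pass: text -> abstract internal representation -> logits.
        logits = model(ids).logits
    # Greedily pick the next token from the final position's logits.
    next_id = logits[0, -1].argmax()
    # Append it and repeat - the model "reasons" one slice at a time.
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

Each iteration runs the whole stack once and commits to a single token, which is exactly the "small slices" framing above.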
Don't expect them to show mastery of spatial reasoning or agentic behavior or physical dexterity out of the box.
They still capture enough humanlike behavior to yield the most general AI systems ever built.