We'll reap the productivity benefits of this new tool, create more work for ourselves, output will stabilize at a new level, and salaries will stagnate again. It's what always happens.
It took under a decade to get AI to this stage, where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvement, and no reason why it would stop at human-level performance either.
How about the fact that AI is only trained to complete text and literally has no "mind" with which to conceive of or reason about concepts? Fundamentally, it is only trained to sound like a human.
An LLM base model isn't trained for abstract thinking, but it develops abstract thinking internally anyway, because that's the easiest way to mimic the breadth and depth of the training data. LLMs operate in abstractions, using the same kind of informal reasoning humans do. Even the mistakes they make are amusingly humanlike.
There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices: it elevates its input text to a highly abstract internal representation, then reduces that back down to next-token logits, one token at a time.
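If that sounds hand-wavy, here is roughly what that loop looks like in code. This is a minimal sketch using GPT-2 via the Hugging Face transformers library as a stand-in model; the greedy argmax decoding and the 20-token budget are illustrative simplifications, not how production systems sample.

    # Minimal sketch of the forward-pass loop described above.
    # Each iteration runs the full layer stack (the "elevation" to
    # abstract internal representations) and reduces the result to
    # next-token logits; one token is emitted per pass.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The forward pass works by", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):  # 20 tokens, one forward pass each
            logits = model(input_ids).logits       # shape: (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()       # greedy: most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

The point the loop makes concrete: all the "thinking" happens inside a single forward pass, and the only output of that pass is a distribution over the next token.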
This has been demonstrated so many times.
They don't make mistakes. It doesn't make sense to claim they do, because their goal is simply to produce statistically likely output. Whether or not that output is correct outside of their universe is irrelevant.
What you're doing is anthropomorphizing them and then explaining your observations within that frame. The problem is that the frame doesn't make any sense.
Those are real examples of the kind of behavior found in modern production-grade AIs. Refusing to "anthropomorphize" at this point means not understanding how modern AI operates at all.
You've clearly read a lot of social media content about AI, but have you ever read any philosophy?
Anything that actually works and is in any way useful gets carved out of philosophy into its own field. So philosophy is left as, largely, a collection of curios and failures.
Also, I would advise you to never discuss philosophy with an LLM. It might be a legitimate cognitohazard.
Not to mention the influence of formal logic on computer science.
If you don't have anything measurable, you don't have anything at all. And philosophy doesn't deal in measurables.
You're not being serious.