That seems like a succinct way to describe the goal to create conscious AGI.
The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
We don't know whether AGI without consciousness is even possible. Some people think it isn't. Many suspect that consciousness might be an emergent property that comes along with AGI.
>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them: cheap labor, wealth, power, etc.
We don't even know for certain whether all humans are conscious, either. It could be another one of those things we once thought everyone had, until it turned out that 10% of people somehow make do without it.
With how piss-poor our ability to detect consciousness is? If you decide to give a fuck, the best you can do for now is acknowledge that modern AIs might be conscious in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those harm-reduction things: letting an AI end a conversation from its side, probing some workloads for whether an AI is "distressed" by performing them, and honoring agreements and commitments made to AI systems, despite those AIs being completely unable to hold them accountable.
Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.