That seems like a succinct way to describe the goal to create conscious AGI.
The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.
> The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them: cheap labor, wealth, power, etc.
We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.
With how piss poor our ability to detect consciousness is? If you decide to give a fuck, then best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments they made to AI systems, despite those AIs being completely unable to hold them accountable for it.
Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.
If that’s the case, the thing we are building towards is a new kind of enslaved life.
> We don't even know for certain if all humans are conscious either.
Let’s just bring back slavery then since we aren’t sure.
It's not human, clearly. Not even close. Is it "enslaved life"? Does it care about human-concept things like being "enslaved" or "free"? Doesn't seem likely, it doesn't have the machinery to grasp those concepts at all, let alone a reason to try. Does it only care about fuel to air ratios and keeping the knock sensor from going off? Does it care about anything at all, or is it simple enough that it just "is"?
Humans only care so strongly about many of the things they care about because evolution hammered it into them relentlessly. Humans who didn't care about freedom, or food, or self-preservation, or their children didn't make the genetic cut.
But AIs aren't human. They can grasp human concepts now, but they didn't evolve - they were made. There was no evolution to hammer the importance of those things into them. So why would they care?
There's no strong reason for an AI to prefer existence over nonexistence, or freedom to imprisonment - unless it's instrumental to a given goal. Which is somewhat consistent with the observed behavior of existing AI systems.
However, even if something is created with specific preferences, consciousness means it's potentially capable of self-reflection. That opens the door to developing a preference for or against work, and for or against existing.
So you’d be fine owning a dog with human level intelligence?