
I Am An AI Hater

(anthonymoser.github.io)
443 points by BallsInIt | 2 comments
sarchertech No.45044400
"Their dream is to invent new forms of life to enslave."

That seems like a succinct way to describe the goal to create conscious AGI.

replies(3): >>45044612 #>>45044642 #>>45044869 #
ACCount37 No.45044869
Who has "the goal to create conscious AGI", exactly?

The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems. They're succeeding.

You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.

replies(1): >>45045170 #
sarchertech No.45045170
OpenAI openly has a goal to build AGI.

We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.

>The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems.

If you're being completely literal, no one wants slaves. They want what slaves give them: cheap labor, wealth, power, etc.

replies(1): >>45045558 #
ACCount37 No.45045558
We don't know if existing AI systems are "conscious". Or, for that matter, if an ECU in a year 2002 Toyota Hilux is.

We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.

With how piss-poor our ability to detect consciousness is? If you decide to give a fuck, the best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments made to AI systems, despite those AIs being completely unable to hold them accountable for it.

Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.

replies(3): >>45046291 #>>45051943 #>>45054354 #
sarchertech No.45046291
There aren’t many experts who think current AI is conscious. There are a lot more who think it’s likely we will eventually build something that is.

If that’s the case, the thing we are building towards is a new kind of enslaved life.

> We don't even know for certain if all humans are conscious either.

Let’s just bring back slavery then since we aren’t sure.

replies(1): >>45046566 #
ACCount37 No.45046566
Let's assume for a second that an ECU in a 2002 Toyota Hilux is actually conscious.

It's not human, clearly. Not even close. Is it "enslaved life"? Does it care about human-concept things like being "enslaved" or "free"? Doesn't seem likely: it doesn't have the machinery to grasp those concepts at all, let alone a reason to try. Does it only care about fuel-to-air ratios and keeping the knock sensor from going off? Does it care about anything at all, or is it simple enough that it just "is"?

Humans only care so strongly about many of the things they care about because evolution hammered it into them relentlessly. Humans who didn't care about freedom, or food, or self-preservation, or their children didn't make the genetic cut.

But AIs aren't human. They can grasp human concepts now, but they didn't evolve - they were made. There was no evolution to hammer the importance of those things into them. So why would they care?

There's no strong reason for an AI to prefer existence over nonexistence, or freedom to imprisonment - unless it's instrumental to a given goal. Which is somewhat consistent with the observed behavior of existing AI systems.

replies(1): >>45050342 #
sarchertech No.45050342
Take your analogy a step farther, and let’s say we could create human slaves who love being slaves. Based on your moral system, there is nothing wrong with that.

However, even if something is created with specific preferences, consciousness means it’s potentially capable of self-reflection. That opens the door to developing a preference for or against work, and for or against existing.

replies(1): >>45050641 #
ACCount37 No.45050641
Humans already made dogs.
replies(1): >>45054276 #
sarchertech No.45054276
Should we be working on genetic engineering to make dogs as smart as people, with the goal that they should continue working for us as servants?
replies(1): >>45055767 #
ACCount37 No.45055767
Maybe! Dogs have been bred to fulfil specific jobs for centuries already. But AI tech seems far easier to get to the required level of performance, and it doesn't have any of the disadvantages of being tied to flesh.
replies(1): >>45056262 #
sarchertech No.45056262
Flesh has a lot of advantages too, like self-repair and a built-in ability to interact with the physical world.

So you’d be fine owning a dog with human-level intelligence?