
I Am An AI Hater

(anthonymoser.github.io)
443 points by BallsInIt | 2 comments
sarchertech No.45044400
"Their dream is to invent new forms of life to enslave."

That seems like a succinct way to describe the goal to create conscious AGI.

replies(3): >>45044612 #>>45044642 #>>45044869 #
ACCount37 No.45044869
Who has "the goal to create conscious AGI", exactly?

The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems. They're succeeding.

You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.

replies(1): >>45045170 #
sarchertech No.45045170
OpenAI openly has a goal to build AGI.

We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.

>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.

If you're being completely literal, no one wants slaves. They want what the slaves give them: cheap labor, wealth, power, etc.

replies(1): >>45045558 #
ACCount37 No.45045558
We don't know if existing AI systems are "conscious". Or, for that matter, if an ECU in a year 2002 Toyota Hilux is.

We don't even know for certain that all humans are conscious. It could be another one of those things we once assumed everyone had, until it turned out that 10% of people somehow make do without.

With how piss-poor our ability to detect consciousness is? If you decide to give a fuck, the best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, probing some workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments made to AI systems, despite those AIs being completely unable to hold them accountable.

Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.

replies(3): >>45046291 #>>45051943 #>>45054354 #
rsynnott No.45051943
Unless you're defining "consciousness" so broadly that you consider it an open question whether a parsnip is conscious, then yeah, no, we do kinda know. They're not.
replies(1): >>45053896 #
ACCount37 No.45053896
If a parsnip was conscious, how would we know?

Conversely: how do we know that it isn't?