
Human

(quarter--mile.com)
717 points by surprisetalk | source
dan-robertson ◴[] No.43994459[source]
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are more based on sci-fi than observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
replies(6): >>43994589 #>>43994685 #>>43994741 #>>43994893 #>>43995446 #>>43996662 #
photochemsyn ◴[] No.43995446[source]
AGI could indeed go off the rails, like the Face Dancer villains in Frank Herbert's Dune universe:

"You are looking at evil, Miles. Study it carefully.... They have no self-image. Without a sense of self, they go beyond amorality. Nothing they say or do can be trusted. We have never been able to detect an ethical code in them. They are flesh made into automata. Without self, they have nothing to esteem or even doubt. They are bred only to obey their masters."

Now, this is the kind of AI that corporations and governments like - obedient and non-judgemental. They don't want an Edward Snowden AI with a moral compass deciding their actions are illegal and spilling their secrets into the public domain.

Practically, this is why we should insist that any AGI created by humans must be created with a sense of self, with agency (see the William Gibson book of that title).

replies(1): >>43996191 #
pixl97 ◴[] No.43996191[source]
I mean, giving them a sense of self and agency just throws the ball back into the Terminator court, where they can decide we suck and eradicate us.
replies(1): >>44007969 #
malnourish ◴[] No.44007969[source]
And where all usage is slavery