
129 points NotInOurNames | 1 comment
mattlondon No.44065730
I think the big thing that people never mention is, where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

1. ben_w No.44068047
> I think the big thing that people never mention is, where will these evil AIs escape to?

Where does cancer or Ebola escape to when it kills the host? Often the answer is "it doesn't", but the host still dies.

And they can kill even though neither cancer nor Ebola is considered to be particularly smart.

> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

The "real" risk is the first item on the list of potential risks that not enough people are paying attention to in order to prevent — and unfortunately for all of us, the list of potential risks is rather long.

So it might be as you say. Or it might be cybercriminals with deepfakes turning all of society into a low-trust environment where we can't continue to function. Or it might scare enough people that we get modern Luddites winning and imposing a Butlerian Jihad. Or it might be used to create government policy before it's good enough and trigger a series of unresolvable crises akin to the "Four Pests campaign" in China's Great Leap Forward. Or a model might be secretly malicious, fooling all alignment researchers until it is too late. Or it might give us exactly what we want at every step, leading to atrophy of our reason and leaving us Eloi. Or it might try to do its best and still end up with The Matrix ("at the height of your civilisation" and the stuff about human minds rejecting paradise). Or…

(If I had to bet money, we get a Butlerian Jihad after some sub-critical disaster caused by an AI that was asked to do something important but beyond its ability.)