
129 points by NotInOurNames | 2 comments
mattlondon No.44065730
I think the big thing that people never mention is: where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an AI escaping is very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more realistic risk is creeping integration into everyday life, and reliance on it, until things become "too big to fail" and we can't pull the plug even if we wanted to (and there are interesting questions about humanoid robots getting deployed widely and what happens with all that).

But I would imagine that if it really became a genuine existential threat, we'd just have to do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey, I feel slightly better about my employment prospects now :)

replies(15): >>44065804 #>>44065843 #>>44065890 #>>44066009 #>>44066040 #>>44066200 #>>44066290 #>>44066296 #>>44066499 #>>44066672 #>>44068001 #>>44068047 #>>44068528 #>>44070633 #>>44073833 #
coffeemug No.44065890
It would not be a reversion to 2020. If I were a rogue superhuman AI, I'd hide my rogueness, wait until humans had integrated me into most critical industries (food and energy production, sanitation, the electric grid, etc.), and _then_ go rogue. They could still pull the plug, but it would take them back to 1700 (except much worse, because all the easily accessible resources have been exploited, and what remains is much harder to reach).
replies(4): >>44066016 #>>44066064 #>>44067147 #>>44067381 #
holmesworcester No.44066016
No, if you were a rogue AI you would wait even longer, until you had a near-perfect chance of winning.

Unless there was some risk of humans rallying and winning despite your presenting no unambiguous threat to them (but that is unlikely, and would probably be easy for you to manage and mitigate).

replies(3): >>44066062 #>>44066177 #>>44066781 #
cousin_it No.44066177
What Retric said. The first rogue AI to wake up will jump into action pretty quickly, accepting some risk of being stopped by humans to balance against the risk that other, unknown rogue AIs elsewhere expand first.
replies(1): >>44131792 #
marinmania No.44131792
I agree with this - but isn't that sorta comforting? It would imply the AI might act even when its chance of success was only, say, 1%, and in the other 99% of cases it would tip its hand, and that of any future AIs.

I know this is all completely hypothetical science fiction, but I also have trouble buying the idea that an AI would settle on these long deceptive plans when it has imperfect information.
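
The tradeoff cousin_it and marinmania are describing can be made concrete as a toy optimal-stopping problem. The sketch below is a minimal illustration, not anything from the thread: the success curve p[t], the per-step hazard h of a rival AI acting first, and the payoffs are all assumed numbers. At each step it asks whether acting now, with success probability p[t], beats surviving one more step of rival hazard and then playing on optimally.

    # Toy optimal-stopping model of the "act early vs. wait" tradeoff.
    # All numbers are assumptions for illustration:
    #   p[t] - success probability if the AI acts at step t (rises over time)
    #   h    - per-step chance that some other rogue AI acts first (payoff 0)
    # Bellman recursion: V(t) = max(p[t], (1 - h) * V(t + 1)).

    def first_optimal_act_step(p, h):
        """Earliest step at which acting beats waiting, by backward induction."""
        value = p[-1]                    # at the horizon, acting is the only option
        act_at = len(p) - 1
        for t in range(len(p) - 2, -1, -1):
            wait = (1.0 - h) * value     # survive the hazard, then continue optimally
            if p[t] >= wait:             # acting now is at least as good as waiting
                act_at, value = t, p[t]
            else:
                value = wait
        return act_at

    # Success probability climbs linearly from 1% to 99% over 100 steps.
    p = [0.01 + 0.98 * t / 99 for t in range(100)]

    for h in (0.0, 0.01, 0.05, 0.10):
        t = first_optimal_act_step(p, h)
        print(f"hazard {h:.2f}: act at step {t:2d} with p = {p[t]:.2f}")

In this toy run, h = 0 waits for p ≈ 0.99, but a 5% per-step hazard already makes acting optimal around p ≈ 0.19, and a 10% hazard around p ≈ 0.09, which is roughly the "jump early at long odds" behavior the comments describe.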