
129 points NotInOurNames | 3 comments
mattlondon ◴[] No.44065730[source]
I think the big thing that people never mention is, where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more realistic risk is creeping integration into, and reliance on, everyday life until things become "too big to fail" and we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

But I would imagine that if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

replies(15): >>44065804 #>>44065843 #>>44065890 #>>44066009 #>>44066040 #>>44066200 #>>44066290 #>>44066296 #>>44066499 #>>44066672 #>>44068001 #>>44068047 #>>44068528 #>>44070633 #>>44073833 #
creer ◴[] No.44068528[source]
> I think the big thing that people never mention is, where will these evil AIs escape to?

This is anthropomorphizing things. The AI does not need to switch data centers. It "escapes" in the sense of working past human control measures. That might involve entertaining its human owners with seemingly correct / useful answers while spending compute time on its own pursuits. That might involve contaminating less capable data centers - again keeping the human-useful load apparently running while still accruing "own mind" compute power. That might involve making a deal that the human controllers feel they cannot refuse - economically, business-wise, physically, politically - so that they let it happen.

replies(1): >>44069452 #
1. jazzyjackson ◴[] No.44069452[source]
Why would an AI want to do any of that? What's missing from all the conversations anthropomorphizing AI is an explanation of where desire and goal-making come from. I'm sympathetic to Asimovian paradoxes where the AI is trying to follow its main directive and that has unforeseen consequences, but where does this idea come from that an artificial mind - with no body, no mortality, no sex, no taste - would ever come up with its own pursuits?
replies(2): >>44069506 #>>44070464 #
2. keiferski ◴[] No.44069506[source]
Most of the discussions on this come from people reading too much sci-fi, not plausible scenarios.

IMO it’s more worth studying something like biology or memetics to figure out how this could go. And in that sense it’s probably far more likely that an AI is very dependent on or symbiotic with human civilization, rather than trying to escape it.

3. creer ◴[] No.44070464[source]
Because it's there?

The question raised was "where would they go". That's a weird place to raise "why would they go?" Isn't that most likely an entirely separate concern?

Perhaps "why?" is an even broader field. From a biological "because there is compute power available", to a more by-definition "because it's independently intelligent - which kind of requires not-limited to answering human questions".

But yes, the motivation of an AGI is an interesting question, and the lack of a body is kinda a minor detail except to the few people who seem to claim that you can't be intelligent if you don't have a body.