
129 points NotInOurNames | 1 comment
mattlondon No.44065730
I think the big thing that people never mention is: where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is basically zero, and if we do end up with a "rogue" AI we can literally pull the plug.

To me the more real risk is creeping integration into, and reliance on it in, everyday life until things become "too big to fail" and we can't pull the plug even if we wanted to (and there are interesting questions about humanoid robots getting deployed widely and what happens with all that).

But I would imagine that if it really became a genuine existential threat we'd just have to do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

1. Recursing No.44066499
> They need huge compute

My understanding is that huge compute is necessary to train a model, but not to run it (that's why using LLMs is so cheap).
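
A rough back-of-envelope sketch (my own illustrative numbers, using the commonly cited ~6*N*D FLOPs for training and ~2*N FLOPs per token at inference) shows the gap:

    # Illustrative comparison of training vs. single-query inference compute
    # for a hypothetical dense LLM. All numbers are assumptions, not measurements.
    N = 70e9              # model parameters (assumed)
    D = 2e12              # training tokens (assumed)
    query_tokens = 2_000  # tokens handled in one chat request (assumed)

    train_flops = 6 * N * D             # rough rule of thumb: ~6 FLOPs per parameter per training token
    query_flops = 2 * N * query_tokens  # ~2 FLOPs per parameter per token at inference

    print(f"training:  {train_flops:.1e} FLOPs")
    print(f"one query: {query_flops:.1e} FLOPs")
    print(f"ratio:     {train_flops / query_flops:.0e}x")  # ~3e9x under these assumptions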

> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to

I agree with that; see e.g. what happened with attempts to restrict TikTok: https://en.wikipedia.org/wiki/Restrictions_on_TikTok_in_the_...

> But I would imagine if it really became a genuine existential threat we'd have to just do it

It's unclear to me that we would be able to. People would just say that it's science fiction, and that China will do it anyway, so we might as well enjoy the AI.