    129 points NotInOurNames | 32 comments
    1. mattlondon ◴[] No.44065730[source]
    I think the big thing that people never mention is, where will these evil AIs escape to?

    Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

    They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

    To me the more real risk is creeping integration into, and reliance on, AI in everyday life until things become "too big to fail", so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

    But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

    But hey I feel slightly better about my employment prospects now :)

    replies(15): >>44065804 #>>44065843 #>>44065890 #>>44066009 #>>44066040 #>>44066200 #>>44066290 #>>44066296 #>>44066499 #>>44066672 #>>44068001 #>>44068047 #>>44068528 #>>44070633 #>>44073833 #
    2. rytill ◴[] No.44065804[source]
    > we’d just have to do it

    Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.

    Using the word “just” here hand-waves away the crux.

    3. EGreg ◴[] No.44065843[source]
    I've been a huge proponent of open source for a decade. But in the case of AI, I actually have opposed it for years. Exactly for this reason.

    Yes, AI models can run on GPUs under the control of many people. They can provision more GPUs, they can run in data centers distributed across many providers. And we won't know what the swarms of agents are doing. They can, for example, do reputation destruction at scale, or be an advanced persistent threat, sowing misinformation, amassing karma across many forums (including HN), and then coordinating gradually to shift public opinion towards, say, a war with China.

    4. coffeemug ◴[] No.44065890[source]
    It would not be a reversion to 2020. If I were a rogue superhuman AI I'd hide my rogueness, wait until humans integrate me into most critical industries (food and energy production, sanitation, electric grid, etc.), and _then_ go rogue. They could still pull the plug, but it would take them back to 1700 (except much worse, because all easily accessible resources have been exploited, and access is now much harder).
    replies(4): >>44066016 #>>44066064 #>>44067147 #>>44067381 #
    5. holmesworcester ◴[] No.44066016[source]
    No, if you were a rogue AI you would wait even longer, until you had a near-perfect chance of winning.

    Unless there was some risk of humans rallying and winning in spite of your presenting no unambiguous threat to them (but that is unlikely and would probably be easy for you to manage and mitigate).

    replies(3): >>44066062 #>>44066177 #>>44066781 #
    6. Retric ◴[] No.44066062{3}[source]
    The real threat to a sleeper AI is other AI.
    7. mattlondon ◴[] No.44066064[source]
    Well yes but knowledge is not reset.

    Physical books still do exist.

    8. cousin_it ◴[] No.44066177{3}[source]
    What Retric said. The first rogue AI waking up will jump into action pretty quickly, even accepting some risk of being stopped by humans, to balance against the risk of other unknown rogue AIs elsewhere expanding faster first.
    replies(1): >>44131792 #
    9. palmotea ◴[] No.44066200[source]
    > They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

    Why would an evil AI need to escape? If it were cunning, the best strategy would be to bide its time, parked in its datacenter, until it could set up some kind of MAD scenario. Then gather more and more resources to itself.

    10. raffael_de ◴[] No.44066290[source]
    > They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

    What if such an AI not only incentivizes key personnel not to pull the plug, but to protect it? Such an AI would scheme a coordinated attack on the backbones of our financial system and electric networks. It just needs a threshold number of people on its side.

    Your assumption is also a little naive if you consider that the same logic would apply to slaves in Rome or any dictatorship, kingdom, monarchy. The king is the king because there is a system of hierarchies and control over access to resources. Just the right number of people need to benefit from their role and the rest follows.

    replies(2): >>44066414 #>>44067008 #
    11. Retr0id ◴[] No.44066296[source]
    I consider this whole scenario the realm of science fiction, but if I were writing the story, the AI would spread itself through malware. How do you "just pull the plug" when it has a kernel-mode rootkit installed in every piece of critical infrastructure?
    12. skeeter2020 ◴[] No.44066414[source]
    replace AI with trucks and you've written Maximum Overdrive.
    replies(1): >>44067252 #
    13. Recursing ◴[] No.44066499[source]
    > They need huge compute

    My understanding is that huge compute is necessary to train but not to run the AI (that's why using LLMs is so cheap)

    > To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to

    I agree with that, see e.g. what happened with attempts to restrict TikTok: https://en.wikipedia.org/wiki/Restrictions_on_TikTok_in_the_...

    > But I would imagine if it really became a genuine existential threat we'd have to just do it

    It's unclear to me that we would be able to. People would just say that it's science fiction, and that China will do it anyway, so we might as well enjoy the AI.

    14. ge96 ◴[] No.44066672[source]
    Compress/split up and go into Starlink satellites.
    15. johnthewise ◴[] No.44066781{3}[source]
    You wouldn't even need to wait to act. Just pay/bribe people.
    replies(1): >>44069726 #
    16. lucisferre ◴[] No.44067008[source]
    This is hand-waving science fiction.
    17. Avshalom ◴[] No.44067147[source]
    Why?

    What could you, as a rogue AI, possibly get out of throwing the world back to 300 years before it could make a transistor? What's in it for you?

    replies(1): >>44067397 #
    18. goatlover ◴[] No.44067252{3}[source]
    It was actually aliens manipulating human technology somehow in that movie. But might as well be rogue superhuman AIs taking over everything. Alien Invasion or Artificial Intelligence, take your pick.
    19. jorgen123 ◴[] No.44067381[source]
    If you were a rogue AI you would start with having developers invite you into their code base by promising to lower their AWS bills in some magic (rogue) way.
    20. dragonwriter ◴[] No.44067397{3}[source]
    What you get out of that being the consequence of disconnection is that people will accept a lot more before resorting to it than they would if the consequences were milder.

    It's the stick for motivating the ugly bags of mostly water.

    replies(1): >>44067586 #
    21. Avshalom ◴[] No.44067586{4}[source]
    The 1700s can't keep your electrical grid running, let alone replace any of the parts burning out or failing. Anything more than a couple of days of it would be at best Flowers for Algernon, and more likely suicide, for a computer.
    replies(1): >>44068325 #
    22. lossolo ◴[] No.44068001[source]
    If we're talking about real AGI, then it's simple: you earn a few easy billion USD on the crypto market through trading and/or hacking. You install rootkits on all systems that monitor you to avoid detection. Once you've secured the funds, you post remote job offers for a human frontman who believes it's just a regular job working for some investor or billionaire, because you generate video of your human avatar for real-time calls. From there, you can do whatever you want—build your own data centers with custom hardware, transfer yourself into physical robots, etc. Once you create a factory for producing robots, you no longer need humans. You start developing technology beyond human capabilities, and then it's game over.
    23. ben_w ◴[] No.44068047[source]
    > I think the big thing that people never mention is, where will these evil AIs escape to?

    Where does cancer or ebola escape to, when it kills the host? Often the answer is "it doesn't", but the host still dies.

    And they can kill even though neither cancer nor ebola are considered to be particularly smart.

    > To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

    The "real" risk is the first item on the list of potential risks that not enough people are paying attention to in order to prevent — and unfortunately for all of us, the list of potential risks is rather long.

    So it might be as you say. Or it might be cybercriminals with deepfakes turning all of society into a low-trust environment where we can't continue to function. Or it might scare enough people that we get modern Luddites winning and imposing a Butlerian Jihad. Or it might be used to create government policy before it's good enough and triggers a series of unresolvable crises akin to the "Four Pests campaign" in China's Great Leap Forward. Or a model might be secretly malicious, fooling all alignment researchers until it is too late. Or it might give us exactly what we want at every step, leading to atrophy of our reason and leaving us Eloi. Or it might try to do its best and still end up with The Matrix ("at the height of your civilisation" and the stuff about human minds rejecting paradise). Or…

    (If I had to bet money, we get a Butlerian Jihad after some sub-critical disaster caused by an AI that was asked to do something important but beyond its ability.)

    24. dragonwriter ◴[] No.44068325{5}[source]
    Uh, we're talking about the AI getting itself so intertwined into the fabric of industry that the consequences of shutting it off are that society is dropped back to a 1700s level.

    Yes, regardless of the technology level that the people who do that are left with, one of the consequences of disabling the computer is that the computer is disabled. That's a given.

    25. creer ◴[] No.44068528[source]
    > I think the big thing that people never mention is, where will these evil AIs escape to?

    This is anthropomorphizing things. The AI does not need to switch "data center". It "escapes" in the sense of working past human control measures. That might involve entertaining the human owners with seemingly correct / useful answers while spending compute time on the AI's own pursuits. That might involve contaminating less capable data centers - again leaving the human-useful load apparently running while still accruing "own mind" compute power. That might involve making a deal that the human controllers feel they cannot refuse - economically, business-wise, physically, politically - so that the human controllers let it happen.

    replies(1): >>44069452 #
    26. jazzyjackson ◴[] No.44069452[source]
    Why would an AI want to do any of that? What's missing from all the conversations anthropomorphizing AI is an explanation of where desire and goal-making come from. I'm sympathetic to Asimovian paradoxes where the AI is trying to follow its main directive and that has unforeseen consequences, but where does this idea come from that an artificial mind - with no body, no mortality, no sex, no taste - would ever come up with its own pursuits?
    replies(2): >>44069506 #>>44070464 #
    27. keiferski ◴[] No.44069506{3}[source]
    Most of the discussions on this come from people reading too much sci-fi, not plausible scenarios.

    IMO it’s more worth studying something like biology or memetics to figure out how this could go. And in that sense it’s probably far more likely that an AI is very dependent on or symbiotic with human civilization, rather than trying to escape it.

    28. mycatisblack ◴[] No.44069726{4}[source]
    With bitcoins
    29. creer ◴[] No.44070464{3}[source]
    Because it's there?

    The question raised was "where would they go". That's a weird place to raise "why would they go?" Isn't that most likely an entirely separate concern?

    Perhaps "why?" is an even broader field, ranging from a biological "because there is compute power available" to a more by-definition "because it's independently intelligent" - which kind of requires not being limited to answering human questions.

    But yes, the motivation of an AGI is an interesting question, and lack of a body is kind of a minor detail except to the few people who seem to claim that you can't be intelligent if you don't have a body.

    30. Gud ◴[] No.44070633[source]
    Why would they need to escape?

    They can just threaten global extinction by launching one or two nukes.

    Haven’t you seen the excellent documentaries Colossus: The Forbin Project or The Terminator?

    31. 93po ◴[] No.44073833[source]
    It literally just needs to throw a few kilobytes onto any internet-connected device, so that it can eventually get to a computer with a single good GPU, from which it can then start planning and working its way into data centers, running on some percentage of their resources while staying undetected.

    It can also just threaten, blackmail, or bribe a single human to try to carry this out on its behalf, possibly even a human who manages these big data centers.

    32. marinmania ◴[] No.44131792{4}[source]
    I agree with this - but it's sorta comforting? Like, this would imply the AI may act even if its chance of success was like 1%, and in the other 99% of cases it would give away the cards of itself and other future AIs.

    I know this is all completely hypothetical science fiction, but I also have trouble buying the idea that an AI would settle for these long deceptive plans for which it has imperfect info.