    170 points bookofjoe | 17 comments
    1. aszantu ◴[] No.43646247[source]
    A funny thing about Asimov was how he came up with the laws of robotics and then wrote cases showing how they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.
    replies(7): >>43646403 #>>43646644 #>>43646735 #>>43647271 #>>43647356 #>>43649101 #>>43649246 #
    2. bell-cot ◴[] No.43646403[source]
    Guess: https://en.wikipedia.org/wiki/Liar!_(short_story)
    3. soulofmischief ◴[] No.43646644[source]
    That is still one of my favorite stories of all time. It really sticks with you. It's part of the I, Robot anthology.
    4. nitwit005 ◴[] No.43646735[source]
    I was always a bit surprised other sci-fi authors liked the "three laws" idea, as it seems like a technological variation on other stories about instructions or wishes going wrong.
    replies(3): >>43647049 #>>43647189 #>>43647434 #
    5. nthingtohide ◴[] No.43647049[source]
    Narratives build on top of each other so that complex narratives can be constructed. This is also the reason Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.

    Family Guy Nasty Wolf Pack

    https://youtu.be/5oW9mNbMbmY

    The perfect wish to outsmart a genie | Chris & Jack

    https://youtu.be/lM0teS7PFMo

    6. buzzy_hacker ◴[] No.43647189[source]
    Same here. A main point of I, Robot was to show why the three laws don't work.
    replies(2): >>43647435 #>>43675372 #
    7. hinkley ◴[] No.43647271[source]
    And one that was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today that are causing trouble.
    replies(1): >>43647445 #
    8. creer ◴[] No.43647356[source]
    Good conceit or theme by an author - on which to base a series of books that will sell? Not everything is an engineering or math project.
    9. pfisch ◴[] No.43647434[source]
    I mean, now we call the three laws "alignment", but it honestly seems inevitable that it will go wrong eventually.

    That of course isn't stopping us from marching forwards though in the name of progress.

    10. cogman10 ◴[] No.43647435{3}[source]
    I may be misrecalling, but I thought the main point of the I, Robot series was that regardless of the laws, incomplete information can still end up getting someone killed.

    In all the cases of killing, the robots were innocent. It was either a human who tricked the robot, or one who didn't tell the robot what they were doing.

    For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it had given her its arm). That caused the robot to effectively self-destruct.

    Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).

    replies(1): >>43648164 #
    11. pfisch ◴[] No.43647445[source]
    Isn't that the plot of Westworld season 3?
    replies(1): >>43647846 #
    12. hinkley ◴[] No.43647846{3}[source]
    I think better than half the writers on Westworld were not born yet when the OG Foundation books were written.
    13. tedunangst ◴[] No.43648164{4}[source]
    That's certainly not the plot of Little Lost Robot.
    replies(1): >>43648362 #
    14. cogman10 ◴[] No.43648362{5}[source]
    Little Lost Robot was about a robot with the First Law modified. That's not about the law failing, but rather about failing to install the full law.
    15. kagakuninja ◴[] No.43649101[source]
    In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.

    >A robot may not harm humanity, or, by inaction, allow humanity to come to harm

    Therefore a robot could allow some humans to die, if the 0th law took precedence.

    16. nix-zarathustra ◴[] No.43649246[source]
    >he came up with the laws of robotics and then cases on how they don't work. There are a few that I remember, one where a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.

    IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws, but investigation later revealed the robots had been following them because of some quirk.

    17. aszantu ◴[] No.43675372{3}[source]
    The story from I, Robot is one of Asimov's stories, and it works exactly as intended. The AI figured that to keep humans safe you have to put them in cages. Humans will always fight over something.