Human

(quarter--mile.com)
717 points | surprisetalk | 1 comment | source
dan-robertson ◴[] No.43994459[source]
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are based more on sci-fi than on observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do, and it executes that exactly, with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
replies(6): >>43994589 #>>43994685 #>>43994741 #>>43994893 #>>43995446 #>>43996662 #
mr_toad ◴[] No.43994741[source]
Sci-fi is quite ridiculous when it describes a cold, logical machine and then on the next page describes its malign intentions. Pick a lane.
replies(3): >>43994808 #>>43995563 #>>44007061 #
goatlover ◴[] No.43994808[source]
Asimov's three laws of robotics worked well for telling stories about how those laws were logically inadequate, and about the need for a zeroth law. Humans came up with the three inadequate laws, which seemed logical on the surface, but it was a machine that developed the zeroth law in response to those inadequacies.