
Human

(quarter--mile.com)
717 points by surprisetalk | 3 comments
dan-robertson ◴[] No.43994459[source]
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are more based on sci-fi than observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
replies(6): >>43994589 #>>43994685 #>>43994741 #>>43994893 #>>43995446 #>>43996662 #
1. alnwlsn ◴[] No.43994685[source]
I've listened to some old sci-fi radio shows and it's interesting how often "the computer never makes a mistake" comes up. Which is usually followed by the computer making a mistake.
replies(2): >>43995166 #>>43997308 #
2. amoshebb ◴[] No.43995166[source]
AI is usually just a 20th/21st-century “Icarus wax wings” or sometimes a “monkey's paw”: remasters of a “watch out for unintended consequences” fable that almost certainly predates written text.
3. BizarroLand ◴[] No.43997308[source]
That's why the term "Garbage In, Garbage Out" exists.

In any non-edge case (that is, where the system is operating under ideal conditions and no flaw or bug, known or unknown, exists in the system), a verifiably functioning computer will produce exactly the same results for the same inputs every time.

If the computer does not do what you expected and spits out garbage, then you gave it garbage data.
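
The point above can be sketched in a few lines of Python (a minimal illustration, not from the thread; the `average` function and the `-9999.0` sensor error code are hypothetical examples):

```python
def average(values):
    """Deterministic: the same input always yields the same output."""
    return sum(values) / len(values)

# Same input, same result, every time -- the machine is not at fault.
assert average([2, 4, 6]) == average([2, 4, 6]) == 4.0

# Garbage in, garbage out: the computer faithfully averages
# whatever it is given, including a nonsense reading.
readings = [20.1, 19.8, -9999.0]  # -9999.0: hypothetical sensor error code
assert average(readings) < 0      # a "garbage" average from garbage data
```

The computer executed the same correct procedure in both cases; only the quality of the input changed the quality of the output.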