
Human (quarter--mile.com)

717 points by surprisetalk | 1 comment
dan-robertson No.43994459
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are more based on sci-fi than observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
1. echelon No.43994893
Some of us posted several comments here [1] and here [2] about where this could all be going if we lean into sci-fi imagining.

[1] https://news.ycombinator.com/item?id=43992151

[2] https://news.ycombinator.com/item?id=43991997