
111 points ewf | 3 comments
rishi_rt ◴[] No.45305201[source]
Arvind Narayanan seems to be the only one qualified enough to be called an expert.
replies(3): >>45305738 #>>45305888 #>>45306515 #
1. eco ◴[] No.45305888[source]
That's one of the things that drives me nuts about all the public discourse about AI and our future. The vast majority of words written/spoken on the subject are by generic "thought leaders" who really have no greater understanding of AI than anyone else who uses it regularly.
replies(2): >>45305964 #>>45306922 #
2. libraryofbabel ◴[] No.45305964[source]
And the article agrees with you, and is pretty scathing about all the books except Narayanan’s (which is also the only book with a balanced anti-hype perspective):

> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself

> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists

3. mmaia ◴[] No.45306922[source]
This has been a characteristic of the field since the beginning. Reading What Computers Can't Do in college (early 2000s) was an important counterpoint for me.

> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.

> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.

https://en.wikipedia.org/wiki/Hubert_Dreyfus's_views_on_arti...