> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself
> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists
In 2018 or 2019, I saw a comment here saying that most people don't appreciate the distinction between domains with low irreducible error, where fancy models with complex decision boundaries pay off (like computer vision), and domains with high irreducible error, where such models add little value over something simple like logistic regression.
It's an obvious-in-retrospect observation, but it made me realize that this distinction is the source of a lot of the confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on the point, which went viral and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite seeming obvious, it led to a productive research agenda.
While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.
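The distinction that comment draws can be illustrated with a toy simulation. Everything here is hypothetical and chosen only for the sketch: 1-D data, 40% label noise standing in for "high irreducible error", a fixed threshold standing in for a simple model like logistic regression, and 1-nearest-neighbour standing in for a model with a complex decision boundary:

```python
# Toy illustration (all numbers invented): when 40% of labels are noise,
# a flexible model that memorises the training set (1-NN) fits that noise
# and does *worse* than a trivial threshold rule, which sits near the
# best achievable (Bayes) accuracy of ~60%.
import random

random.seed(0)

NOISE = 0.4  # probability a label is flipped: the irreducible error


def make_data(n):
    """1-D points in [-1, 1]; true class is sign(x), observed label is noisy."""
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [(1 if x > 0 else 0) ^ (1 if random.random() < NOISE else 0) for x in xs]
    return xs, ys


train_x, train_y = make_data(2000)
test_x, test_y = make_data(2000)

# Simple model: a fixed threshold at zero (the true decision boundary).
simple_pred = [1 if x > 0 else 0 for x in test_x]


# Complex model: 1-nearest-neighbour, which reproduces noisy training labels.
def knn_pred(x):
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]


complex_pred = [knn_pred(x) for x in test_x]


def accuracy(pred):
    return sum(p == y for p, y in zip(pred, test_y)) / len(test_y)


print(f"simple threshold: {accuracy(simple_pred):.3f}")  # near the 0.60 Bayes limit
print(f"1-NN (complex):   {accuracy(complex_pred):.3f}")  # dragged down by fitting noise
```

In a low-noise domain like image classification the complex model would win easily; here the extra capacity only lets it chase noise, which is the comment's point.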
"Few aspects of daily life require computers... They're irrelevant to cooking, driving, visiting, negotiating, eating, hiking, dancing, speaking, and gossiping. You don't need a computer to... recite a poem or say a prayer." Computers can't, Stoll claims, provide a richer or better life.
(Excerpted from the Amazon summary at https://www.amazon.com/Silicon-Snake-Oil-Thoughts-Informatio... .) So, was this something you were conscious of when you chose your own book's title? How well have you future-proofed your central thesis?
> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.
https://en.wikipedia.org/wiki/Hubert_Dreyfus's_views_on_arti...
Our more recent essay (and ongoing book project), "AI as Normal Technology", is about our vision of AI impacts over a longer timescale than "AI Snake Oil" looks at: https://www.normaltech.ai/p/ai-as-normal-technology
I would categorize our views as techno-optimist, but people understand that term in many different ways, so you be the judge.
Sounds like a job for the community! Maybe someone will track it down...
Edit: I tried something like https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&... (note the custom date range) but didn't find anything that quite matches your description.
This was from 2017, and it made such an impression on me that I could find it on my first search attempt!