In 2018 or 2019 I saw a comment here that said that most people don't appreciate the distinction between domains with low irreducible error, which benefit from fancy models with complex decision boundaries (like computer vision), and domains with high irreducible error, where such models add little value over something simple like logistic regression.
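To make that distinction concrete, here's a minimal sketch using scikit-learn. This is purely my own illustration, not from the original comment: the dataset, models, and noise levels are stand-ins, with feature noise in make_moons serving as a rough proxy for irreducible error.

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare(noise):
    # Higher noise -> more class overlap, i.e. more irreducible error.
    X, y = make_moons(n_samples=5000, noise=noise, random_state=0)
    for model in (LogisticRegression(),
                  RandomForestClassifier(random_state=0)):
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"noise={noise:.1f}  {type(model).__name__:>22}: {acc:.3f}")

compare(0.1)  # low irreducible error: the complex boundary should win clearly
compare(1.0)  # high irreducible error: the gap should largely vanish
```

The point is that both models run up against the same noise ceiling in the high-noise setting, so the complex model's flexibility buys little, which is roughly the situation in domains like predicting social outcomes.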
It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of the confusion and hype around AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite seeming obvious, the observation also led to a productive research agenda.
While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.
"Few aspects of daily life require computers... They're irrelevant to cooking, driving, visiting, negotiating, eating, hiking, dancing, speaking, and gossiping. You don't need a computer to... recite a poem or say a prayer." Computers can't, Stoll claims, provide a richer or better life.

(excerpted from the Amazon summary at https://www.amazon.com/Silicon-Snake-Oil-Thoughts-Informatio... )

So, was this something that you guys were conscious of when you chose your own book's title? How well have you future-proofed your central thesis?
Our more recent essay (and ongoing book project), "AI as Normal Technology," lays out our vision of AI impacts over a longer timescale than "AI Snake Oil" covers: https://www.normaltech.ai/p/ai-as-normal-technology
I would categorize our views as techno-optimist, but people understand that term in many different ways, so you be the judge.