
111 points ewf | 2 comments
rishi_rt No.45305201
Arvind Narayanan seems to be the only guy qualified enough to be called an expert.
replies(3): >>45305738 >>45305888 >>45306515
dang No.45305738
HN's own https://news.ycombinator.com/user?id=randomwalker!
replies(1): >>45306256
randomwalker No.45306256
Thanks! HN was part of the origin story of the book in question.

In 2018 or 2019 I saw a comment here pointing out that most people don't appreciate the distinction between domains with low irreducible error, which benefit from fancy models with complex decision boundaries (like computer vision), and domains with high irreducible error, where such models don't add much value over something simple like logistic regression.
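
To make that concrete, here is a minimal scikit-learn sketch (a toy illustration of the point, with synthetic data I made up, not an example from the book): boosted trees easily beat logistic regression on a low-noise task with a complex decision boundary, but once heavy label noise dominates, the two land in roughly the same place.

    # Illustrative sketch only: compare a simple linear model with boosted trees
    # on two synthetic binary-classification tasks.
    from sklearn.datasets import make_moons, make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def compare(X, y, seed=0):
        # Train both models on the same split and report held-out accuracy.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=seed)
        simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        fancy = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
        return simple.score(X_te, y_te), fancy.score(X_te, y_te)

    # Low irreducible error, complex boundary: the flexible model should win big.
    X1, y1 = make_moons(n_samples=4000, noise=0.15, random_state=0)

    # High irreducible error: ~30% of labels flipped at random on top of a
    # roughly linear signal, so nothing can do much better than logistic regression.
    X2, y2 = make_classification(n_samples=4000, n_features=10, n_informative=3,
                                 n_redundant=0, n_clusters_per_class=1,
                                 flip_y=0.3, random_state=0)

    for name, (X, y) in [("low noise, complex boundary", (X1, y1)),
                         ("high noise, simple signal", (X2, y2))]:
        lr, gb = compare(X, y)
        print(f"{name}: logistic regression {lr:.2f}, boosted trees {gb:.2f}")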

It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite seeming obvious, it led to a productive research agenda.

While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.

replies(3): >>45306488 >>45307338 >>45307580
1. NooneAtAll3 No.45307580
what does irreducible error mean?
replies(2): >>45307988 >>45309779
2. mikepalmer No.45309779
And how do you know it's irreducible? In the sense of knowing there's no short program to describe it (Kolmogorov style).