
Bayesian Statistics: The three cultures

(statmodeling.stat.columbia.edu)
309 points luu | 7 comments
thegginthesky ◴[] No.41080693[source]
I miss the college days where professors would argue endlessly on Bayesian vs Frequentist.

The article is very succinct and even explains why my Bayesian professors had different approaches to research and analysis. I never knew about the third camp, Pragmatic Bayes, but it definitely is in line with one professor's research, which was very thorough on probability fit and the many iterations needed to get the prior and joint PDF just right.

Andrew Gelman has a very cool talk, "Andrew Gelman - Bayes, statistics, and reproducibility (Rutgers, Foundations of Probability)", which I highly recommend to Data Scientists.

replies(4): >>41080841 #>>41080979 #>>41080990 #>>41087094 #
RandomThoughts3 ◴[] No.41080979[source]
I’m always puzzled by this because, while I come from a country where the frequentist approach generally dominates, the fight with Bayesians basically doesn’t exist there. These are just mathematical theories and tools. Just use what’s useful.

I’m still convinced that Americans tend to dislike the frequentist view because it requires a stronger background in mathematics.

replies(7): >>41081068 #>>41081297 #>>41081328 #>>41081349 #>>41081566 #>>41081982 #>>41083467 #
1. runarberg ◴[] No.41081328[source]
I think the distaste Americans have for frequentism has much more to do with the history of science. The eugenics movement had a massive influence on science in America, and it used frequentist methods to justify (or rather validate) its scientific racism. Authors like Gould brought this up in the 1980s, particularly in relation to factor analysis and intelligence testing, and he was kind of proven right when Herrnstein and Murray published The Bell Curve in 1994.

The p-hacking exposures of the 1990s only cemented the notion that it is very easy to get away with junk science by using frequentist methods to unjustly validate your claims.

That said, frequentist statistics are still the default in the social sciences, which ironically is where the damage was worst.

replies(2): >>41081714 #>>41082808 #
2. lupire ◴[] No.41081714[source]
What protection is there against someone using a Bayesian analysis but abusing it with a hidden bias?
replies(2): >>41081839 #>>41081905 #
3. analog31 ◴[] No.41081839[source]
My knee-jerk reaction is replication, and studying a problem from multiple angles, such as experimentation and theory.
4. runarberg ◴[] No.41081905[source]
I’m sure there are creative ways to misuse Bayesian statistics, although I think it is harder to hide your intentions while doing so. With frequentist approaches your intentions become obscured in the whole mess of computations, and at the end you get to claim a simple “objective” truth because the p value shows < 0.05. In Bayesian statistics the data you put in is front and center: the chance of my theory being true given this data is greater than 95% (or was it the chance of getting this data given my theory?). In reality most hoaxes and junk science came from bad data which didn’t get scrutinized until much too late (this is what Gould did).

But I think the crux of the matter is that bad science has been demonstrated with frequentist methods and is now a part of our history. So people must either find a way to fix the frequentist approaches or throw them out for something different. Bayesian statistics is that something different.

replies(1): >>41084891 #
5. TeaBrain ◴[] No.41082808[source]
I don't think the guy's basic assertion, that frequentist statistics is less favored in American academia, is true.
replies(1): >>41083165 #
6. runarberg ◴[] No.41083165[source]
I’m not actually in any statistician circles (although I did work at a statistics startup that used Kalman filters in Reykjavík 10 years ago, and I did drop out of studying statistics at the University of Iceland).

But what I gathered after moving to Seattle is that Bayesian statistics are a lot more trendy (accepted, even) here west of the ocean. Frequentist statistics are very much the default, especially in hypothesis testing, so you are not wrong. However, I’m seeing a lot more Bayesian advocacy over here than I did back in Iceland. So I’m not sure my parent is wrong either: Americans tend to dislike frequentist methods, at least more than Europeans do.

7. lottin ◴[] No.41084891{3}[source]
> "The chances of my theory being true given this data is greater than 95% (or was it chances of getting this data given my theory?)"

The first statement assumes that parameters (i.e. a state of nature) are random variables. That's the Bayesan approach. The second statement assumes that parameters are fixed values, not random, but unknown. That's the frequentist approach.
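The two quantities really are different things, and the distinction is easy to see in a toy calculation. A minimal sketch (the coin-flip setup, the 0.75 bias, and the 50/50 prior are my own illustrative assumptions, not from the thread): the frequentist p-value is P(data at least this extreme | theory), while the Bayesian posterior is P(theory | data), which requires a prior.

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n flips of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, k = 10, 9  # observed: 9 heads in 10 flips

# Frequentist: P(data at least this extreme | fair coin) -- a one-sided p-value.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian: P(biased coin | data), comparing "fair" (p=0.5) vs "biased" (p=0.75)
# with a 50/50 prior over the two hypotheses.
prior_biased = 0.5
like_biased = binom_pmf(k, n, 0.75)
like_fair = binom_pmf(k, n, 0.5)
posterior_biased = (like_biased * prior_biased) / (
    like_biased * prior_biased + like_fair * (1 - prior_biased)
)

print(f"p-value, P(data | theory):   {p_value:.4f}")
print(f"posterior, P(theory | data): {posterior_biased:.4f}")
```

Note that the posterior depends on the prior and on which alternative hypothesis you compare against, which is exactly why Bayesian analyses put those modeling choices front and center.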