
Bayesian Statistics: The three cultures

(statmodeling.stat.columbia.edu)
309 points | luu | 2 comments
thegginthesky No.41080693
I miss the college days where professors would argue endlessly on Bayesian vs Frequentist.

The article is wonderfully succinct and even explains why my Bayesian professors had different approaches to research and analysis. I never knew about the third camp, Pragmatic Bayes, but it is definitely in line with one professor's research, which was very thorough about probability fit and the many iterations needed to get the prior and joint PDF just right.

Andrew Gelman has a very cool talk, "Andrew Gelman - Bayes, statistics, and reproducibility" (Rutgers, Foundations of Probability), which I highly recommend for many Data Scientists.

spootze No.41080841
Regarding the frequentist vs bayesian debates, my slightly provocative take on these three cultures is

- subjective Bayes is the strawman that frequentist academics like to attack

- objective Bayes is a naive self-image that many Bayesian academics tend to possess

- pragmatic Bayes is the approach taken by practitioners that actually apply statistics to something (or in Gelman’s terms, do science)

refulgentis No.41081070
I see, so academics are frequentists (attackers) or objective Bayes (naive), and the people Doing Science are pragmatic (correct).

The article gave me the same vibe: a nice, short set of labels for me to apply as a heuristic.

I never really understood this particular war, I'm a simpleton, A in Stats 101, that's it. I guess I need to bone up on Wikipedia to understand what's going on here more.

sgt101 No.41081242
Bayes lets you use your priors, which can be very helpful.

I got all riled up when I saw you wrote "correct"; I can't really explain why... but I just feel that we need to keep an open mind. These approaches to data are choices at the end of the day... Was Einstein a Bayesian? (spoiler: no)

0cf8612b2e1e No.41081474
Using your priors is another way of saying you know something about the problem. It is exceedingly difficult to objectively analyze a dataset without interjecting any bias. There are too many decision points where something needs to be done to massage the data into shape. Priors are just an explicit encoding of some of that knowledge.
ants_everywhere No.41083562
> Priors is just an explicit encoding of some of that knowledge.

A classic example is analyzing data on mind reading or ghost detection. Your experiment shows you that your ghost detector has detected a haunting with p < .001. What is the probability the house is haunted?
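The update behind that question can be sketched in a few lines of Python. The numbers here are hypothetical: a detector whose false-positive rate matches the p < .001 threshold, a (generous) perfect true-positive rate, and a one-in-a-million prior on hauntings:

```python
def posterior(prior, tpr, fpr):
    """Posterior P(haunted | detection) by Bayes' rule."""
    return prior * tpr / (prior * tpr + (1 - prior) * fpr)

# Even a "significant" detection barely moves a tiny prior:
p = posterior(prior=1e-6, tpr=1.0, fpr=0.001)
print(p)  # ~0.001, i.e. still only about a 0.1% chance the house is haunted
```

The point of the example: with a sufficiently skeptical prior, nearly all positive detections are false positives, no matter how "significant" each one looks in isolation.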

lupusreal No.41085205
With a prior like that, why would you even bother pretending to do the research?
ants_everywhere No.41096058
Well, something could count as evidence that ghosts or ESP exist, but the evidence had better be really strong.

A person getting 50.1% accuracy on an ESP experiment with a p-value less than some threshold doesn't cut it. But that doesn't mean the prior is insurmountable.
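To make that concrete with made-up numbers: compare the likelihood of the data under "slight ESP" (success probability 0.501) versus pure chance (0.5). Even a million trials at 50.1% accuracy, which is enough for a two-sided p-value around 0.05, yields only a modest Bayes factor, so a strongly skeptical prior still dominates:

```python
import math

def log_bayes_factor(k, n, p1, p0=0.5):
    """Log likelihood ratio for k successes in n Bernoulli trials: p1 vs p0."""
    return (k * math.log(p1 / p0)
            + (n - k) * math.log((1 - p1) / (1 - p0)))

n, k = 1_000_000, 501_000          # 50.1% accuracy over a million trials
bf = math.exp(log_bayes_factor(k, n, 0.501))
prior_odds = 1e-6                  # skeptical prior odds on ESP being real
print(bf, prior_odds * bf)         # Bayes factor ~7.4; posterior odds still ~7e-6
```

The evidence shifts the odds by less than a factor of ten, which is why a result like this "doesn't cut it" against a million-to-one prior, while a long accumulation of such factors could eventually overcome it.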

The closing down of loopholes in Bell inequality tests is a good example of a pretty aggressive prior being overridden by increasingly compelling evidence.