
    Bayesian Statistics: The three cultures

    (statmodeling.stat.columbia.edu)
    309 points by luu | 12 comments
    thegginthesky No.41080693
    I miss the college days where professors would argue endlessly on Bayesian vs Frequentist.

    The article is very succinct and even explains why my Bayesian professors had different approaches to research and analysis. I never knew about the third camp, Pragmatic Bayes, but it is definitely in line with one professor's research, which was very thorough about probability fit and the many iterations needed to get the prior and joint PDF just right.

    Andrew Gelman has a very cool talk, "Andrew Gelman - Bayes, statistics, and reproducibility (Rutgers, Foundations of Probability)", which I highly recommend to Data Scientists.

    replies(4): >>41080841 #>>41080979 #>>41080990 #>>41087094 #
    spootze No.41080841
    Regarding the frequentist vs bayesian debates, my slightly provocative take on these three cultures is

    - subjective Bayes is the strawman that frequentist academics like to attack

    - objective Bayes is a naive self-image that many Bayesian academics tend to possess

    - pragmatic Bayes is the approach taken by practitioners that actually apply statistics to something (or in Gelman’s terms, do science)

    replies(3): >>41081070 #>>41081400 #>>41083494 #
    1. refulgentis No.41081070
    I see, so academics are frequentists (attackers) or objective Bayes (naive), and the people Doing Science are pragmatic (correct).

    The article gave me the same vibe: a nice, short set of labels for me to apply as a heuristic.

    I never really understood this particular war, I'm a simpleton, A in Stats 101, that's it. I guess I need to bone up on Wikipedia to understand what's going on here more.

    replies(4): >>41081106 #>>41081242 #>>41081312 #>>41081388 #
    2. Yossarrian22 No.41081106
    Academics can be pragmatic; I've known some who've used both Bayesian statistics and MLE.
    3. sgt101 No.41081242
    Bayes lets you use your priors, which can be very helpful.

    I got all riled up when I saw you wrote "correct"; I can't really explain why... but I just feel that we need to keep an open mind. These approaches to data are choices at the end of the day... Was Einstein a Bayesian? (spoiler: no)

    replies(2): >>41081356 #>>41081474 #
    4. thegginthesky No.41081312
    Frequentist and Bayesian are correct if both have scientific rigor in their research and methodology. Both can be wrong if the research is whack or sloppy.
    replies(1): >>41081940 #
    5. refulgentis No.41081356
    You're absolutely right. I'm trying to walk a delicate tightrope that doesn't end up with me giving my unfiltered "you're wrong, so let's end the conversation" response.

    Me 6 months ago would have written: "this comment is unhelpful and boring, but honestly, that's slightly unfair to you, as it just made me realize how little help the article is, and it set the tone. is this even a real argument with sides?"

    For people who want to improve on this aspect of themselves, like I did for years:

    - show, don't tell (ex. here, I made the oddities more explicit, enough that people could reply to me spelling out what I shouldn't.)

    - Don't assert anything that wasn't said directly, ex. don't remark on the commenter, or subjective qualities you assess in the comment.

    6. runarberg No.41081388
    I understand the war between bayesians and frequentists. Frequentist methods have been misused for over a century now to justify all sorts of pseudoscience and hoaxes (as well as created a fair share of honest mistakes), so it is understandable that people would come forward and claim there must be a better way.

    What I don’t understand is the war between naive Bayes and pragmatic Bayes. If it is real, it seems like an extension of philosophers vs. engineers. Scientists should see value in both: naive Bayes is important to the philosophy of science, without which a lot of junk science would go unscrutinized for far too long, and engineers should be able to see the value of philosophers saving them work by debunking wrong science before they start implementing theories that simply will not work in practice.

    7. 0cf8612b2e1e No.41081474
    Using your priors is another way of saying you know something about the problem. It is exceedingly difficult to objectively analyze a dataset without interjecting any bias. There are too many decision points where something needs to be done to massage the data into shape. Priors is just an explicit encoding of some of that knowledge.
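A prior is literally just a number you write down before looking at the data. A toy sketch in Python (all the rates here are invented for illustration):

```python
# A prior is an explicit number encoding what you already believe.
# Toy example: a test with 99% sensitivity and a 5% false-positive
# rate, applied to a condition with a 1% base rate (the prior).
prior = 0.01          # P(condition)
sensitivity = 0.99    # P(positive | condition)
false_pos = 0.05      # P(positive | no condition)

# Bayes' rule: P(condition | positive test)
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # 0.167: a positive test is still ~5:1 against
```

Same data, different prior, different answer, which is exactly the "bias" being made explicit.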
    replies(1): >>41083562 #
    8. slashdave No.41081940
    I've used both in some papers and report two results (why not?). The golden rule in my mind is to fully describe your process and assumptions, then let the reader decide.
    9. ants_everywhere No.41083562
    > Priors is just an explicit encoding of some of that knowledge.

    A classic example is analyzing data on mind reading or ghost detection. Your experiment shows you that your ghost detector has detected a haunting with p < .001. What is the probability the house is haunted?
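Running that through Bayes' rule with made-up numbers (a one-in-a-billion prior on hauntings, and treating the p-value cutoff as the detector's false-alarm rate):

```python
# Hypothetical numbers: the prior belief in hauntings is tiny; the
# detector fires with near certainty in a truly haunted house and
# with probability 0.001 otherwise (the p < .001 threshold read as
# a false-alarm rate).
prior_haunted = 1e-9   # P(haunted), assumed for illustration
power = 0.999          # P(detection | haunted)
false_alarm = 0.001    # P(detection | not haunted)

posterior = (power * prior_haunted) / (
    power * prior_haunted + false_alarm * (1 - prior_haunted)
)
print(posterior)  # on the order of 1e-6: still essentially zero
```

The "significant" detection moves the prior by about three orders of magnitude, and the house is still almost certainly not haunted.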

    replies(2): >>41085205 #>>41087754 #
    10. lupusreal No.41085205
    With a prior like that, why would you even bother pretending to do the research?
    replies(1): >>41096058 #
    11. laserlight No.41087754
    The fact that you would design an experiment and then refuse to trust it is bonkers. The experiment concludes that the house is haunted, yet you had already decided the answer before running it.
    12. ants_everywhere No.41096058
    Well, something could count as evidence that ghosts or ESP exist, but the evidence better be really strong.

    A person getting 50.1% accuracy on an ESP experiment with a p-value less than some threshold doesn't cut it. But that doesn't mean the prior is insurmountable.

    The closing down of loopholes in Bell inequality tests is a good example of a pretty aggressive prior being overridden by increasingly compelling evidence.
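As a sketch of why 50.1% "doesn't cut it": the Bayes factor for a 50.1% hit rate versus pure chance, even over a million hypothetical trials, is modest (trial counts invented for illustration):

```python
import math

# Hypothetical: 501,000 hits in 1,000,000 forced-choice trials
# (exactly 50.1%). Compare the likelihood of that data under
# "ESP" (p = 0.501) versus chance (p = 0.5).
n, k = 1_000_000, 501_000

log_bf = k * math.log(0.501 / 0.5) + (n - k) * math.log(0.499 / 0.5)
bayes_factor = math.exp(log_bf)
print(round(bayes_factor, 1))  # ~7.4
```

A Bayes factor around 7 counts as "substantial" on conventional scales, but it is crushed by any reasonable prior against ESP; evidence strong enough to override such a prior, as in the loophole-free Bell tests, has to be orders of magnitude more compelling.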