
Bayesian Statistics: The three cultures

(statmodeling.stat.columbia.edu)
309 points | luu | 3 comments
prmph ◴[] No.41083029[source]
So my theory is that probability is an ill-defined, unfalsifiable concept. And yet, it _seems_ to model aspects of the world pretty well, empirically. However, might it be leading us astray?

Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean? Is it a proposition? If so, is it falsifiable? And how?

If it is not a proposition, what does it actually mean? If someone with more knowledge can chime in here, I'd be grateful. I've got much more to say on this, but only after I hear from those with a rigorous grounding in the theory.

replies(8): >>41083141 #>>41083535 #>>41083541 #>>41083730 #>>41084029 #>>41084108 #>>41085582 #>>41086995 #
1. skissane ◴[] No.41083541[source]
> So my theory is that probability is an ill-defined, unfalsifiable concept

Probability isn’t a single concept, it is a family of related concepts - epistemic probability (as in subjective Bayesianism) is a different concept from frequentist probability - albeit obviously related in some ways. It is unsurprising that a term looks like an “ill-defined, unfalsifiable concept” if you are mushing together mutually incompatible definitions of it.
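To illustrate the split with a toy coin-flip sketch of my own (nothing from the thread): the frequentist number is a long-run relative frequency of an outcome, while the epistemic/Bayesian number is a degree of belief about the next outcome, updated from the same data.

    import random

    random.seed(0)
    flips = [random.random() < 0.5 for _ in range(1000)]  # simulate 1000 tosses of a fair coin
    heads = sum(flips)
    n = len(flips)

    # Frequentist reading: probability as the long-run relative frequency of heads.
    freq_estimate = heads / n

    # Epistemic/Bayesian reading: degree of belief that the next toss is heads,
    # here the posterior mean of the coin's bias under a uniform Beta(1, 1) prior.
    posterior_mean = (1 + heads) / (n + 2)

    print(freq_estimate, posterior_mean)  # numerically close, but answering different questions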

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean?

From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood. And 0.5 means I have zero confidence either way: I have zero evidence either way (or else the evidence on each side perfectly cancels out), a complete lack of knowledge as to whether the proposition is true.
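One common way to make a credence like 0.5 concrete (my illustration, using the standard betting reading rather than anything stated above) is as the price you'd treat as fair for a ticket that pays 1 if X turns out true:

    def fair_price(credence: float, payout: float = 1.0) -> float:
        """Price at which a bet paying `payout` if X is true feels fair,
        given a subjective credence p(X) = credence."""
        return credence * payout

    print(fair_price(0.5))   # 0.5  -- indifferent between betting on X and on not-X
    print(fair_price(0.49))  # 0.49 -- leaning very slightly toward not-X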

> Is it a proposition?

It is a proposition in just the same sense that “the Pope believes that God exists” is a proposition. Whether or not God actually exists, it seems very likely true that the Pope believes he does.

> If so, is it falsifiable? And how?

And obviously that’s falsifiable, in the same sense that claims about my own beliefs are trivially falsifiable by me, using my introspection. And claims about other people’s beliefs are also falsifiable: we can ask them, assuming they are happy to answer and we have no good reason to think they are being untruthful.

replies(1): >>41083743 #
2. prmph ◴[] No.41083743[source]
So your response actually strengthens my point, rather than rebutting it.

> From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood.

See how inexact and vague all these measures are. How do you know your confidence is (or should be) 0.5 (and not 0.49), for example? Or how do you know you have judged the weight of evidence correctly? Or how do you know the transition from "knowledge about this event" to "what it indicates about its probability" you make in your mind is valid? You cannot disprove these things, can you?

Unless you want to say that the actual values do not matter, but the way the probabilities are updated in the face of new information does. But in any case, the significance of new evidence still has to be interpreted; there is no objective interpretation, is there?
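To pin down what "the way the probabilities are updated" means mechanically, here is a minimal sketch (my own, with a made-up likelihood ratio). Bayes' rule in odds form does the updating; the subjective "interpretation of the evidence" enters entirely through the likelihood ratio you assign to it:

    def update(prior: float, likelihood_ratio: float) -> float:
        """Posterior p(X | E) via Bayes' rule in odds form:
        posterior odds = prior odds * P(E | X) / P(E | not X)."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # Judging the evidence 3x more likely if X is true than if it is false
    # (that "3x" is the subjective interpretation step).
    print(update(0.5, 3.0))   # 0.75
    print(update(0.49, 3.0))  # ~0.74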

replies(1): >>41083841 #
3. skissane ◴[] No.41083841[source]
> See how inexact and vague all these measures are. How do you know your confidence is (or should be) 0.5 (and not 0.49), for example?

Well, you don't, but does it matter? The idea is that it's an estimate.

Let me put it this way: we all informally engage in reasoning about how likely it is (given the evidence available to us) that a given proposition is true. The idea is that assigning a numerical estimate to our sense of likelihood can (sometimes) be a helpful tool in carrying out reasoning. I might think "X is slightly more likely than ~X", but do I know whether (for me) p(X) = 0.51 or 0.501 or 0.52? Probably not. But I don't need a precise estimate for an estimate to be helpful. And that's true in many other fields, including things that have nothing to do with probability – "he's about six feet tall" can be useful information even though it isn't accurate to the millimetre.
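A toy numerical version of that point (my own numbers, not from the comment): start from 0.501, 0.51, or 0.52, apply the same run of evidence, and the posteriors end up practically indistinguishable, so the imprecision in the starting estimate stops mattering.

    def update_many(prior: float, likelihood_ratios: list) -> float:
        """Fold a sequence of likelihood ratios into a prior (odds-form Bayes' rule)."""
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    evidence = [2.0] * 10  # ten observations, each judged 2:1 in favour of X
    for prior in (0.501, 0.51, 0.52):
        print(prior, round(update_many(prior, evidence), 4))
    # All three posteriors land near 0.999 -- the exact starting guess barely matters.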

> Or how do you know you have judged the weight of evidence correctly?

That (largely) doesn't matter from a subjective Bayesian perspective. Epistemic probabilities are just an attempt to numerically estimate the outcome of my own process of weighing the evidence – how "correctly" I've performed that process (per any given standard of correctness) doesn't change the actual result.

From an objective Bayesian perspective, it does – since objective Bayesianism is about not any individual's actual sense of likelihood, but rather the sense of likelihood they ought to have in that evidential situation, the sense an idealised, perfectly rational agent would have. But that's arguably a different definition of probability from the subjective Bayesian one, so even if you can poke holes in that definition, those holes don't apply to the subjective Bayesian definition.

> Or how do you know the transition from "knowledge about this event" to "what it indicates about its probability" you make in your mind is valid?

I feel like you are mixing up subjective Bayesianism and objective Bayesianism and failing to carefully distinguish them in your argument.

> But in any case, the significance of new evidence still has to be interpreted; there is no objective interpretation, is there?

Well, objective Bayesianism requires that there be some objective standard of rationality; subjective Bayesianism doesn't (or, to the extent that it does, the kind of objective rationality it requires is a lot weaker: mere avoidance of blatant inconsistency, plus the minimal degree of rationality needed to coherently engage in discourse and mathematics).
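To make "mere avoidance of blatant inconsistency" concrete, a minimal sketch (my own, assuming that consistency here just means respecting the basic probability axioms): credences are incoherent if they fall outside [0, 1] or if p(X) and p(not X) fail to sum to 1.

    def coherent(p_x: float, p_not_x: float, tol: float = 1e-9) -> bool:
        """The weak consistency a subjective Bayesian is held to:
        credences lie in [0, 1] and p(X) + p(not X) = 1."""
        in_range = 0.0 <= p_x <= 1.0 and 0.0 <= p_not_x <= 1.0
        return in_range and abs((p_x + p_not_x) - 1.0) < tol

    print(coherent(0.5, 0.5))  # True  -- indifference is perfectly coherent
    print(coherent(0.7, 0.6))  # False -- credences sum to 1.3, blatantly inconsistent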