303 points | FigurativeVoid | 1 comment
dsr_ No.41847694
[Among the problems with] Justified True Beliefs being "knowledge" is that humans are very bad at accurately stating their beliefs, and when they state those beliefs, they often adopt the inaccurate statement as the actual belief.

Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true), and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true). But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow-shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.

Unsurprisingly, gaining additional evidence can change our beliefs.

The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong base estimate that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of mental processes here. Don't conclude that it accurately describes everything.)
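A toy sketch of that updating (Python, with invented numbers and made-up function names; nothing here is a claim about how cognition actually implements it):

    # Toy numbers only: one Bayes update per piece of evidence about the cow.
    def bayes_update(prior, p_e_given_cow, p_e_given_no_cow):
        num = p_e_given_cow * prior
        return num / (num + p_e_given_no_cow * (1 - prior))

    p_cow = 0.5  # once the jokester is known, a cow shape alone is not decisive

    # A static cow-shaped object: almost as likely if it's a papier-mache fake.
    p_cow = bayes_update(p_cow, p_e_given_cow=0.95, p_e_given_no_cow=0.90)

    # The shape moves and moos: very unlikely for a statue.
    p_cow = bayes_update(p_cow, p_e_given_cow=0.80, p_e_given_no_cow=0.05)

    print(round(p_cow, 2))  # ~0.94: belief recovers only after the stronger evidence

The point is only the shape of the calculation: weak evidence barely moves the prior, strong evidence moves it a lot.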

The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.

Now, software engineering:

We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:

- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.

Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.

replies(1): >>41847737 #
mjburgess No.41847737
Bayes' theorem isn't a reasonable approximation, because it isn't answering the question -- it describes what you do when you have the answer.

With Bayes, you're computing P(Model|Evidence) -- but this doesn't explain where the Model comes from or why the Evidence is relevant to the Model.

If you compute P(AllPossibleModels|AllSensoryInput), you end up never learning anything.

What's happening with animals is that we have a certain, deterministic, non-Bayesian primitive model of our bodies from which we can build more complex models.

So we engage in causal reasoning, not Bayesian updating: P(EvidenceCausedByMyBody | do(ActionOfMyBody)) * P(Model|Evidence)
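A rough illustration of the difference, as a made-up toy world in Python (the variables and numbers are mine, purely for illustration): conditioning on an action and intervening on it give different answers once a hidden common cause is in play.

    import random

    # Toy structural model: a hidden cause U drives both the action A and the
    # sensation S; A does not cause S at all.
    def world(do_a=None):
        u = random.random() < 0.5
        a = (random.random() < 0.9) if u else (random.random() < 0.1)
        if do_a is not None:
            a = do_a  # do(A): set A directly, severing its dependence on U
        s = (random.random() < 0.8) if u else (random.random() < 0.2)
        return a, s

    N = 100_000
    obs = [world() for _ in range(N)]
    p_s_given_a = sum(s for a, s in obs if a) / sum(a for a, _ in obs)

    ints = [world(do_a=True) for _ in range(N)]
    p_s_do_a = sum(s for _, s in ints) / N

    print(f"P(S=1 | A=1)     ~ {p_s_given_a:.2f}")  # ~0.74: correlation via U
    print(f"P(S=1 | do(A=1)) ~ {p_s_do_a:.2f}")     # ~0.50: A has no causal effect

Conditioning alone reports the 0.74 and 'learns' a dependence that isn't causal; only the interventional distribution separates the two.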

replies(2): >>41848045 #>>41848100 #
bumby No.41848045
I'm not sure I'm understanding your stance fully, so please forgive any poor interpretation.

>certain, deterministic, non-bayesian primitive model of our bodies

What makes you certain the model of our body is non-Bayesian? Does this imply we have an innate model of our body and how it operates in space? I could be convinced that babies don't inherently have a model of their bodies (or of how they control their bodies) and that it is a learned skill, possibly learned through some pseudo-Bayesian process. Heck, the unathletic among us adults may still be updating our Bayesian priors with our body model, given how often it betrays our intentions :-)

replies(1): >>41848266 #
mjburgess No.41848266
Because Bayesian conditioning doesn't resolve the direction of causation, and it gives no way of obtaining the 'certain' data that the method assumes (it also assumes relevance).

In Bayesian approaches it's assumed we have some implicit metatheory which tells us how the data relates to the model, so really all Bayesian formulae should have an implicit 'Theory' condition which provides, e.g., the actual probability value:

P(Model|Evidence, Theory(Model, Evidence))

The problem is that there's no way of building such a theory using Bayesianism; it ends in a kind of obvious regress: P(P(P(M|E, T1)|T2)|T3, ...)

What theory provides the meaning of 'the most basic data', i.e., how it relates to the model (and, e.g., how we compute such a probability)?

The answer to all these problems is: the body. The body resolves the direction of causation; it also bootstraps reasoning.

In order to compute P(ShapeOfCup | GraspOnCup, Theory(Grasp, Shape)), I first (in early childhood) build such a theory by computing P(ShapeSensation | do(GraspMovement), BasicTheory(BasicMotorActions, BasicSensations)).

Where 'do' is non-Bayesian conditioning, i.e., it denotes the probability distribution which arises specifically from causal intervention. And "BasicTheory" has to be in-built.

In philosophical terms, the "BasicTheory" is something like Kant's synthetic a priori -- though there are many views on it. Most philosophers have realised, long before contemporary stats, that you cannot resolve the under-determination of theory by evidence without a prior theory.
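To make the bootstrapping step concrete, here is a deliberately crude sketch (Python; all the names and the toy 'physics' are invented for illustration, not any established model): the agent intervenes on its own grasp, records which sensations follow, and the resulting table plays the role of the theory that later licenses inferences like P(ShapeOfCup | GraspOnCup, Theory).

    import random
    from collections import defaultdict

    SHAPES = ["cylinder", "cone", "sphere"]

    def feel(shape, width):
        # Toy physics: which sensation a grasp of a given width produces.
        if shape == "cylinder":
            return "parallel" if width >= 0.5 else "slip"
        return "taper" if shape == "cone" else "curve"

    # Interventional phase, do(GraspMovement): the agent chooses its own grasp
    # and tallies the sensations that follow for each object it handles
    # (assume the object is identified by other means, e.g. by sight).
    theory = defaultdict(lambda: defaultdict(int))
    for _ in range(5_000):
        shape = random.choice(SHAPES)       # the state of the world
        width = random.choice([0.2, 0.8])   # the intervention
        theory[(width, feel(shape, width))][shape] += 1

    # The learned table stands in for Theory(Grasp, Shape): given a grasp and the
    # sensation it produced, infer a distribution over shapes.
    def p_shape(width, sensation):
        counts = theory[(width, sensation)]
        total = sum(counts.values())
        return {s: round(c / total, 2) for s, c in counts.items()}

    print(p_shape(0.8, "curve"))  # ~{'sphere': 1.0}
    print(p_shape(0.2, "slip"))   # ~{'cylinder': 1.0}

The shape of the argument is what matters: interventional data, plus something in-built that makes grasp and sensation commensurable in the first place, is what gets the regress off the ground.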

replies(2): >>41849043 #>>41849205 #