
303 points by FigurativeVoid | 6 comments
dsr_ No.41847694
[Among the problems with] Justified True Beliefs being "knowledge" is that humans are very bad at accurately stating their beliefs, and when they state those beliefs, they often adopt the inaccurate statement as the actual belief.

Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true), and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true). But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow-shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.

Unsurprisingly, gaining additional evidence can change our beliefs.

The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong base estimate that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of the mental processes here. Don't conclude that it accurately describes everything.)
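For concreteness, here is a toy sketch of that updating loop in Python (my illustration, not part of the original comment; the likelihood numbers are invented): each time a hidden object turns up where it was left, the posterior that "things persist when unobserved" gets nudged up.

    # Toy sketch: Bayes' rule as repeated belief updating (invented numbers).
    def bayes_update(prior, p_obs_if_persistent, p_obs_if_not):
        """P(objects persist | hidden object found again) via Bayes' rule."""
        num = p_obs_if_persistent * prior
        return num / (num + p_obs_if_not * (1.0 - prior))

    belief = 0.5  # agnostic prior: do hidden things keep existing?
    for _ in range(5):
        # each time the hidden toy is found again where it was left:
        belief = bayes_update(belief, p_obs_if_persistent=0.95, p_obs_if_not=0.2)
        print(round(belief, 3))  # climbs toward 1 with each confirmation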

The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.

Now, software engineering:

We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:

- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.

Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.

replies(1): >>41847737 #
mjburgess No.41847737
Bayes' theorem isn't a reasonable approximation, because it isn't answering the question -- it describes what you do when you already have the answer.

With Bayes, you're computing P(Model|Evidence) -- but this doesn't explain where Model comes from or why Evidence is relevant to the Model.

If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.

What's happening with animals is that we have a certain, deterministic, non-Bayesian primitive model of our bodies from which we can build more complex models.

So we engage in causal reasoning, not Bayesian updating: P(EvidenceCausedByMyBody | do(ActionOfMyBody)) * P(Model|Evidence)
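Roughly what the do() is buying, as a sketch (my own made-up toy model, not anything from the comment): in a system with a hidden common cause, the observational P(Evidence|Action) and the interventional P(Evidence|do(Action)) come apart, because intervening severs the confounder's influence on the action.

    # Toy structural causal model: hidden confounder C drives both A and E.
    import random
    random.seed(0)

    def sample(do_a=None):
        c = random.random() < 0.5                        # hidden confounder
        a = do_a if do_a is not None else (c or random.random() < 0.1)
        # E is driven mostly by C and only weakly by A:
        e = (c and random.random() < 0.9) or (random.random() < (0.5 if a else 0.1))
        return a, e

    obs = [sample() for _ in range(200_000)]
    p_e_given_a = sum(e for a, e in obs if a) / sum(a for a, _ in obs)   # ~0.91

    intv = [sample(do_a=True) for _ in range(200_000)]
    p_e_do_a = sum(e for _, e in intv) / len(intv)                       # ~0.73

    print(p_e_given_a, p_e_do_a)  # "seeing" and "doing" give different answers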

replies(2): >>41848045 #>>41848100 #
bumby No.41848045
I'm not sure I'm understanding your stance fully, so please forgive any poor interpretation.

>certain, deterministic, non-bayesian primitive model of our bodies

What makes you certain the model of our body is non-Bayesian? Does this imply we have an innate model of our body and how it operates in space? I could be convinced that babies don't inherently have a model of their bodies (or that they control their bodies) and that it is a learned skill, possibly learned through some pseudo-Bayesian process. Heck, the unathletic among us adults may still be updating our Bayesian priors with our body model, given how often it betrays our intentions :-)

replies(1): >>41848266 #
1. mjburgess No.41848266
Because Bayesian conditioning doesn't resolve the direction of causation, and gives no way of getting 'certain' data, which is an assumption of the method (as is relevance).

In Bayesian approaches it's assumed we have some implicit metatheory which tells us how the data relates to the model, so really all Bayesian formulae should have an implicit 'Theory' condition which provides, e.g., the actual probability value:

P(Model|Evidence, Theory(Model, Evidence))

The problem is there's no way of building such a theory using Bayesianism; it ends in a kind of obvious regress: P(P(P(M|E, T1)|T2)|T3, ...)

What theory provides the meaning of 'the most basic data', i.e., how it relates to the model (and, e.g., how we compute such a probability)?

The answer to all these problems is: the body. The body resolves the direction of causation, it also bootstraps reasoning.

In order to compute P(ShapeOfCup|GraspOnCup, Theory(Grasp, Shape)), I first (in early childhood) build such a theory by computing P(ShapeSensation|do(GraspMovement), BasicTheory(BasicMotorActions, BasicSensations)).

Where 'do' is non-Bayesian conditioning, i.e., it denotes the probability distribution which arises specifically from causal intervention. And "BasicTheory" has to be in-built.

In philosophical terms, the "BasicTheory" is something like Kant's synthetic a priori -- though there's many views on it. Most philosophers have realised, long before contemporary stats, that you cannot resolve the under-determination of theory by evidence without a prior theory.
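As a very loose illustration of that bootstrapping step (entirely my own toy construction; the grasp/shape names are just stand-ins): an agent can build something like the Theory(Grasp, Shape) table by intervening -- issuing grasp movements -- and tallying which shape sensations follow; that table is then available for later inference.

    # Phase 1: "motor babbling" -- estimate P(Shape | do(Grasp)) by intervention.
    import random
    from collections import Counter, defaultdict

    random.seed(1)
    GRASPS = ["pinch", "wide_grip"]

    def shape_sensation(grasp):
        """Stand-in for the world: what the hand feels given a forced grasp."""
        if grasp == "pinch":
            return random.choices(["thin_handle", "rounded"], [0.8, 0.2])[0]
        return random.choices(["rounded", "thin_handle"], [0.7, 0.3])[0]

    theory = defaultdict(Counter)
    for _ in range(5000):
        g = random.choice(GRASPS)              # do(GraspMovement)
        theory[g][shape_sensation(g)] += 1

    # Phase 2: the learned table now plays the role of Theory(Grasp, Shape).
    for g in GRASPS:
        total = sum(theory[g].values())
        print(g, {s: round(n / total, 2) for s, n in theory[g].items()})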

replies(2): >>41849043 #>>41849205 #
2. No.41849043
3. bumby No.41849205
Is the extension of your position that we are born with a theory of the body, irrespective of experience? How does that relate to the psychological literature where babies seem to lack a coherent sense of self? I.e., they can't differentiate what is "body" and what is "not body"?

If it's an ability that later develops independent of experience with the exterior world, it seems untestable. I.e., how can you test the theory without a baby being in the world in the first place?

replies(1): >>41849422 #
4. mjburgess No.41849422
It might be that it's vastly more minimal than it appears I'm stating. I already agree with the high adaptability of the motor system -- indeed, that's a core part of my point, since it's this system which does the heavy lifting of thinking.

E.g., it might be that the kind of "theory" which exists is un/pre-conscious, so that it takes a long time, comparatively, for the baby to become aware of it. Until the baby has a self-conception it cannot consciously form the thought "I am grasping" -- however, consciousness, in my view, is a derivative, abstracting process over-and-above the sensory-motor system.

So the P(Shape|do(Grasp), BasicTheory(Grasp, Shape)) actually describes something like a sensory-motor 'structure' (e.g., a distribution of shapes associated with sensory-motor actions). The proposition "I am grasping", which allows expressing a propositional confidence, requires (self-)consciousness: P(Shape|"I have grasped", Theory(Grasp, Shape)) -- Bayesianism only makes sense when the arguments of probability are propositions (since it's about beliefs).

What's the relationship between the bayesian P(Shape|"I have...") and the causal P(Shape|do(Grasp)) ? The baby requires a conscious bridge from the 'latent structural space' of the sensory-motor system to the intentional belief-space of consciousness.

So P(Shape|do(Grasp)) "consciously entails" P(Shape|"I have...") iff the baby has developed a theory, Theory(MyGrasping|Me).

But, perhaps counter-intuitively, it is not this theory which allows the baby to reliably compute the shape based on knowing "it's their action". It's only the sensory-motor system which needs to "know" (metaphorically) that the grasping is of the shape.

Maybe a better way of putting it, then, is that the baby requires a procedural mechanism which (nearly) guarantees that its actions are causally associated with its sensations, such that its sensations and actions are in a reliable coupling. This 'reliable coupling' has to provide a theory, in a minimal sense, of how likely/relevant/salient/etc. the experiences are given the actions.

It is this sort of coupling which allows the baby, eventually, to develop an explicit conscious account of its own existence.
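One way to make the 'reliable coupling' idea concrete, as a rough sketch (again my own toy setup, with invented probabilities): a sensation channel that fires almost only after the agent's own motor commands can be separated, statistically, from a channel that fires regardless -- and that contrast is a minimal seed for a self/non-self distinction.

    # Toy self-detection from action-sensation coupling (invented probabilities).
    import random
    random.seed(2)

    trials, n_acted = 20_000, 0
    counts = {("self", True): 0, ("self", False): 0,
              ("world", True): 0, ("world", False): 0}

    for _ in range(trials):
        acted = random.random() < 0.5                   # motor command issued?
        n_acted += acted
        self_touch = acted and random.random() < 0.95   # coupled to own action
        world_touch = random.random() < 0.3             # independent of action
        counts[("self", acted)] += self_touch
        counts[("world", acted)] += world_touch

    for channel in ("self", "world"):
        p_if_acted = counts[(channel, True)] / n_acted
        p_if_not = counts[(channel, False)] / (trials - n_acted)
        print(channel, round(p_if_acted, 2), round(p_if_not, 2))
    # "self" tracks the agent's own commands; "world" doesn't -- that contrast
    # is the reliable coupling described above.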

replies(1): >>41849926 #
5. bumby No.41849926
I think that makes sense as a philosophical thought, but do you think it's testable to actually tell us anything about the human condition?

E.g., If motor movement and causal inference are coupled, would you expect a baby born with locked in syndrome to have a limited notion of self?

replies(1): >>41850632 #
6. mjburgess No.41850632
Probably one of the most important muscles is in the eye. If all the muscles of the body are paralysed from birth, yes, no concepts would develop.

This is not only testable, but central to neuroscience and, I'd claim, to any actual science of intelligence -- rather than the self-aggrandising CS mumbo-jumbo.

On the testing side, you can lesion various parts of the sensory-motor system of mice, run them in various maze-solving experiments under various conditions (etc.) and observe their lack of ability to adapt to novel environments.