
303 points | FigurativeVoid | 1 comment
dsr_ ◴[] No.41847694[source]
[Among the problems with] Justified True Beliefs being "knowledge" is that humans are very bad at accurately stating their beliefs, and when they state those beliefs, they often adopt the inaccurate statement as the actual belief.

Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true) and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true). But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow-shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.
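
The shift can be put in Bayesian terms with a toy calculation (all names and probabilities below are illustrative assumptions, not part of the original example): the same cow-shaped sighting yields a high posterior for "real cow" only while fakes are assumed to be rare.

    # Minimal sketch; every number here is a made-up illustration.
    def posterior_real_cow(p_fake, p_sight_given_real=0.95, p_sight_given_fake=0.95):
        """P(real cow | cow-shaped sighting) via Bayes' theorem."""
        p_real = 1.0 - p_fake
        numerator = p_sight_given_real * p_real
        evidence = numerator + p_sight_given_fake * p_fake
        return numerator / evidence

    print(posterior_real_cow(p_fake=0.001))  # replicas essentially unheard of -> ~0.999
    print(posterior_real_cow(p_fake=0.5))    # a known jokester builds replicas -> 0.5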

Unsurprisingly, gaining additional evidence can change our beliefs.

The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong baseline estimate that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of mental processes here. Don't conclude that it accurately describes everything.)
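
A rough sketch of that repeated updating, with made-up likelihoods rather than any model of infant cognition: each time a hidden object reappears where expected, the posterior for "things persist when unobserved" gets nudged upward.

    # Toy sequential Bayesian updating; the likelihoods are illustrative assumptions.
    def update(prior, p_reappear_if_persists=0.99, p_reappear_if_not=0.2):
        """One update of P(persistence) after the object reappears as expected."""
        num = p_reappear_if_persists * prior
        return num / (num + p_reappear_if_not * (1.0 - prior))

    belief = 0.5  # agnostic starting point
    for trial in range(10):
        belief = update(belief)
        print(f"after observation {trial + 1}: P(persistence) = {belief:.4f}")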

The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.

Now, software engineering:

We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:

- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.

Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.

replies(1): >>41847737 #
mjburgess ◴[] No.41847737[source]
Bayes' theorem isn't a reasonable approximation, because it isn't answering the question -- it describes what you do when you have the answer.

With Bayes, you're computing P(Model|Evidence) -- but this doesn't explain where Model comes from or why Evidence is relevant to the model.

If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.

What's happening with animals is that we have a certain, deterministic, non-Bayesian primitive model of our bodies from which we can build more complex models.

So we engage in causal reasoning, not Bayesian updating: P(EvidenceCausedByMyBody | do(ActionOfMyBody)) * P(Model|Evidence)
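
For readers who haven't met the do() notation: it marks an intervention rather than an observation (Pearl's causal calculus), and the two come apart whenever a hidden common cause links the action and the evidence. A small simulation sketch, with entirely hypothetical variables and probabilities:

    import random

    # Hypothetical structural model: a hidden common cause U drives both the
    # action A and the evidence E, and A also affects E directly.
    random.seed(0)

    def sample(do_a=None):
        u = random.random() < 0.5                      # hidden common cause
        if do_a is None:
            a = random.random() < (0.9 if u else 0.1)  # A follows U when merely observed
        else:
            a = do_a                                   # intervention severs the U -> A link
        p_e = 0.8 if (a and u) else 0.3 if (a or u) else 0.05
        e = random.random() < p_e
        return a, e

    def estimate(condition_on_a=None, do_a=None, n=200_000):
        hits = total = 0
        for _ in range(n):
            a, e = sample(do_a=do_a)
            if condition_on_a is not None and a != condition_on_a:
                continue
            total += 1
            hits += e
        return hits / total

    print("P(E | A=1)     ~", round(estimate(condition_on_a=True), 3))  # ~0.75, confounded by U
    print("P(E | do(A=1)) ~", round(estimate(do_a=True), 3))            # ~0.55, causal effect alone

Conditioning rewards the cases where U already happened to be favourable; intervening does not, which is the gap being pointed at between P(Model|Evidence) and P(Evidence|do(Action)).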

replies(2): >>41848045 #>>41848100 #
cfiggers ◴[] No.41848100[source]
> If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.

...whoa. That makes complete sense.

So you're saying that there must be some form of meta-rationality that gives cues to our attempts at Bayesian reasoning, directing how they select from each set (the set of all possible models and the set of all sensory inputs) in order to produce results that constitute actual learning.

And you're suggesting that in animals and humans at least, the feedback loop of our embodied experience is at least some part of that meta-rationality.

That's an incredible one-liner.

replies(1): >>41848270 #
mjburgess ◴[] No.41848270[source]
In order to think, we move.