Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true), and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true). But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow-shaped object before assigning a high likelihood to there being an actual cow in the field they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.
Unsurprisingly, gaining additional evidence can change our beliefs.
The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong baseline belief that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of the mental process here. Don't conclude that it accurately describes everything.)
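The cow version of this updating can be sketched with Bayes' Theorem. All the numbers below are invented for illustration: a prior that the field contains a real cow, and a likelihood of fake cows that grows each time the joke is revealed, which drags the posterior down.

```python
# A minimal sketch of Bayesian updating with made-up numbers:
# repeated exposure to fake cows lowers our confidence that a
# cow-shaped object is a real cow.

def bayes_update(prior, likelihood, evidence_prob):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothesis H: there is a real cow in the field.
# Evidence E: we see a cow-shaped object.
p_cow = 0.9                   # prior: fields around here usually hold real cows
p_shape_given_cow = 0.95      # a real cow almost always looks cow-shaped
p_shape_given_no_cow = 0.05   # papier-mache cows are rare... at first

for encounter in range(5):
    # Total probability of seeing a cow-shaped object (law of total probability).
    p_shape = (p_shape_given_cow * p_cow
               + p_shape_given_no_cow * (1 - p_cow))
    posterior = bayes_update(p_cow, p_shape_given_cow, p_shape)
    print(f"encounter {encounter}: P(real cow | shape) = {posterior:.3f}")
    # Each time the joke is revealed, fake cows become more plausible,
    # so the cow-shaped evidence becomes weaker.
    p_shape_given_no_cow = min(0.9, p_shape_given_no_cow + 0.2)
```

The posterior falls on every pass: the observation ("cow-shaped object") stays the same, but as the fake-cow hypothesis stops being an extraordinary entity, the same evidence licenses less confidence.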
The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.
Now, software engineering:
We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:
- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behavior we are trying to change.
Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.