
303 points FigurativeVoid | 1 comments
jstrieb ◴[] No.41842593[source]
Relevant (deleted, as far as I can tell) tweet:

> When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.

https://old.reddit.com/r/PhilosophyMemes/comments/gggqkv/get...

replies(3): >>41843302 #>>41848022 #>>41848257 #
CamperBob2 ◴[] No.41843302[source]
Hmm. That seems like a better example of the problem than either of the examples at https://en.wikipedia.org/wiki/Gettier_problem .

The cases cited in the article don't seem to raise any interesting issues at all, in fact. The observer who sees the dark cloud and 'knows' there is a fire is simply wrong, because the cloud can serve as evidence of either insects or a fire and he lacks the additional evidence needed to resolve the ambiguity. Likewise, the shimmer in the distance observed by the desert traveler could signify an oasis or a mirage, so more evidence is needed there as well before the knowledge can be called justified.

I wonder if it would make sense to add predictive power as a prerequisite for "justified true belief." That would address those two examples as well as Russell's stopped-clock example: if you think you know something but your knowledge isn't sufficient to make valid predictions, you don't really know it. The Zoom background example would also satisfy this criterion, as long as intentional deception wasn't in play.

replies(5): >>41844783 #>>41845544 #>>41845689 #>>41845828 #>>41848089 #
bonoboTP ◴[] No.41848089[source]
One should distinguish between a single instance and the mechanism/process that produces it. Randomness and entropy offer an analogy: Shannon entropy quantifies the randomness of a sequence generator, not the randomness/complexity of an individual sequence (which is more akin to Kolmogorov complexity).
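The generator-vs-instance distinction can be made concrete with a minimal Python sketch (the distributions and outcome string are my own illustration, not from the thread):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of the distribution a source draws each symbol from."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Entropy is a property of the source, not of any output it emits:
fair = shannon_entropy([0.5, 0.5])    # fair coin: 1.0 bit per flip
biased = shannon_entropy([0.9, 0.1])  # biased coin: ~0.469 bits per flip

# Yet the specific outcome "0000000000" from the fair source is exactly
# as probable as any other 10-flip sequence (2**-10), even though as an
# individual string it looks highly regular and is very compressible --
# that instance-level notion is Kolmogorov complexity, which Shannon
# entropy says nothing about.
p_all_zeros = 0.5 ** 10
```

The same shape of argument carries over to knowledge: "how reliable is the process" and "how good is this one output" are different questions.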

Similarly, the really interesting questions concern the reliability and predictive power of knowledge-producing mechanisms, not the individual pieces of knowledge they produce.

Another analogy is confidence intervals, which are defined through a collective property: a confidence interval is an interval produced by a confidence procedure, and the meat of the definition concerns the procedure, not any single interval it outputs.
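A short simulation makes the point that the 95% guarantee attaches to the procedure, not to any one interval (a sketch using a normal-approximation interval; the sample sizes and parameters are arbitrary choices of mine):

```python
import random
import statistics

def ci95(sample):
    """Normal-approximation 95% CI for the mean -- a 'confidence procedure'."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return (m - 1.96 * se, m + 1.96 * se)

random.seed(0)
mu = 10.0          # true mean, unknown to the procedure in practice
trials = 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(mu, 2.0) for _ in range(50)]
    lo, hi = ci95(sample)
    hits += lo <= mu <= hi  # any single interval simply contains mu or it doesn't

coverage = hits / trials    # ~0.95 across repeated use of the procedure
```

Each individual interval either covers the true mean or it doesn't; "95%" is only meaningful as a long-run property of the process that generated it.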

I always found the Gettier problems unimpressive: mainly a distraction and a language game. Watching out for smoke-like things to infer whether there is a fire is a good survival tool in the woods and advisable behavior. Neither it nor anything else is a 100% surefire way to obtain bulletproof capital-T Truth. We are never 100% justified ("what if you're in a simulation?", "you might be a Boltzmann brain!"). Even math is uncertain: we may make a mistake when mentally adding 7454+8635, and we may even have a brainfart when adding 2+2. The latter is just much less likely, though I'm quite certain that at least one human manages to mess up 2+2 in real life every day.

It's a dull and uninteresting question whether it counts as knowledge. What do you want to use the fact of its being knowledge for? Will you trust things you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish something or not, without reference to terms like "having knowledge".

replies(1): >>41850267 #
efitz ◴[] No.41850267{3}[source]
Like arguing about which level of the OSI model a particular function of a network stack operates at. I'd love to have back the hours my 20s self spent on that.