
303 points FigurativeVoid | 8 comments
jstrieb ◴[] No.41842593[source]
Relevant (deleted, as far as I can tell) tweet:

> When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.

https://old.reddit.com/r/PhilosophyMemes/comments/gggqkv/get...

replies(3): >>41843302 #>>41848022 #>>41848257 #
1. CamperBob2 ◴[] No.41843302[source]
Hmm. That seems like a better example of the problem than either of the examples at https://en.wikipedia.org/wiki/Gettier_problem .

The cases cited in the article don't seem to raise any interesting issues at all, in fact. The observer who sees the dark cloud and 'knows' there is a fire is simply wrong, because the cloud can serve as evidence of either insects or a fire and he lacks the additional evidence needed to resolve the ambiguity. Likewise, the shimmer in the distance observed by the desert traveler could signify an oasis or a mirage, so more evidence is needed there as well before the knowledge can be called justified.

I wonder if it would make sense to add predictive power as a prerequisite for "justified true belief." That would address those two examples as well as Russell's stopped-clock example. If you think you know something but your knowledge isn't sufficient to make valid predictions, you don't really know it. The Zoom background example would be satisfied by this criterion, as long as intentional deception wasn't in play.

replies(5): >>41844783 #>>41845544 #>>41845689 #>>41845828 #>>41848089 #
2. Cushman ◴[] No.41844783[source]
It’s not super clear there, but those are examples of a pre-Gettier type of argument that originally motivated strengthening, and externalizing, the J in JTB knowledge— just like you’re doing!

Gettier’s contribution — the examples with Smith — sharpens it to a point by making the “knowledge” a logical proposition — in one example a conjunction, in one a disjunction — such that we can assert that Smith’s belief in the premise is justified, while allowing the premise to be false in the world.

It’s a fun dilemma: the horns are that you can give up justification as sufficient for knowledge, or you can give up the transmission of justification over logical entailment.

But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma.

One can even keep buying original recipe JTB, as long as one is willing to bite the bullet that we can flip the “knowledge” bit by changing superficially irrelevant states of the world. And hey, why not?

replies(1): >>41877271 #
3. acchow ◴[] No.41845544[source]
The example is also a joke. Many things shown on screens are CGI or AI-generated, so the belief here is not justified.
4. roenxi ◴[] No.41845689[source]
Well ... obviously any Gettier-style example will not have enough evidence, because someone came to the wrong conclusion. But there is a subtle flaw in your objections to Wikipedia's examples - to have a proper argument you would need to provide a counterexample where there is enough evidence to be certain of a conclusion. And the problem is that that isn't possible - no amount of evidence is enough to come to a certain conclusion.

The issue that Gettier & friends are pointing to is that there are no examples where there is enough evidence. So under the formal definition it isn't possible to have a JTB. If you've seen enough evidence to believe something ... maybe you'd misinterpreted the evidence but still came to the correct conclusion. That scenario can play out at any evidence threshold. All else failing, maybe you're having an episode of insanity and all the information your senses are reporting is a wild hallucination, but some of the things you imagine happening are, nonetheless, happening.

5. ImHereToVote ◴[] No.41845828[source]
I believe traders call this Lambda.
6. bonoboTP ◴[] No.41848089[source]
One should distinguish between a single instance and the mechanism/process that produces instances. We could take randomness and entropy as an analogy: Shannon entropy quantifies the randomness of a sequence generator, not the randomness/complexity of individual instances (which would be more akin to Kolmogorov complexity).
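
A minimal sketch of that distinction in Python (the function and the numbers are just for illustration): the entropy attaches to the source distribution, while any particular string it emits carries no Shannon entropy of its own.

    import math

    def shannon_entropy(probs):
        # Entropy in bits of a discrete source distribution (the generator).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))    # fair coin source: 1.0 bit per symbol
    print(shannon_entropy([0.99, 0.01]))  # biased coin source: ~0.08 bits per symbol
    # Both sources can emit the identical string "HHHH"; the string itself has
    # no Shannon entropy, only the generating process does. The complexity of a
    # single string is closer to Kolmogorov complexity.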

Similarly, the really interesting stuff concerns the reliability and predictive power of knowledge-producing mechanisms, not the individual pieces they produce.

Another analogy is confidence intervals, which are defined through a collective property: a confidence interval is an interval produced by a confidence process, and the meat of the definition concerns the confidence process, not any single interval it outputs.
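
A minimal simulation sketch of that point, assuming normal data with known sigma (all numbers arbitrary): the "95%" shows up as the coverage rate of the interval-producing process over many repetitions, while any single interval either contains the true mean or it doesn't.

    import math, random

    TRUE_MU, SIGMA, N, Z95 = 10.0, 2.0, 25, 1.96

    def one_interval():
        # One run of the confidence process: draw a sample, build a 95% z-interval.
        sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
        mean = sum(sample) / N
        half = Z95 * SIGMA / math.sqrt(N)
        return mean - half, mean + half

    trials = 10_000
    covered = sum(lo <= TRUE_MU <= hi for lo, hi in (one_interval() for _ in range(trials)))
    print(covered / trials)  # ~0.95: a property of the process, not of any one interval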

I always found the Gettier problems unimpressive and mainly a distraction and a language game. Watching out for smoke-like things to infer whether there is a fire is a good survival tool in the woods and advisable behavior. Neither it nor anything else is a 100% surefire way to obtain bulletproof capital-letter Truth. We are never 100% justified ("what if you're in a simulation?", "you might be a Boltzmann brain!"). Even stuff like math is uncertain: we may make a mistake when mentally adding 7454+8635, and we may even have a brainfart when adding 2+2. That is just much less likely, but I'm quite certain that at least one human manages to mess up 2+2 in real life every day.

It's a dull and uninteresting question whether it's knowledge. What do you want to use the fact of it being knowledge or not for? Will you trust stuff that you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish something or not, without reference to terms like "having knowledge".

replies(1): >>41850267 #
7. efitz ◴[] No.41850267[source]
Like arguing about which layer of the OSI model a particular function of a network stack operates at. I’d love to have those hours back from twenty-something me.
8. ryanjamurphy ◴[] No.41877271[source]
> But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma. One can even keep buying original recipe JTB [...]

Sorry, naive questions: what is a terrifying dilemma to a 21st century epistemologist? And what is the "modern" recipe?