
303 points FigurativeVoid | 11 comments
1. merryocha ◴[] No.41847997[source]
I was a philosophy major in college, and semantic quibbling over Gettier problems was popular while I was there. I have always believed that Gettier's popularity was due to the fact that the paper was only three pages, and therefore it was the only paper the academics actually read to the end. I never thought there was anything particularly deep or noteworthy about the problem at all. It is fundamentally a dispute over the definition of knowledge, which you could argue about forever, and that's exactly what they were doing: arguing about the definition of knowledge, one 30-page paper at a time.
replies(3): >>41848090 #>>41849189 #>>41850323 #
2. artursapek ◴[] No.41848090[source]
I was going to say, I don't even understand how the second example in this post is a Gettier case. He thought one event caused the issue to start, but a different event did instead, and they happened around the same time. OK? This doesn't seem very philosophical to me.
replies(1): >>41850995 #
3. jerf ◴[] No.41849189[source]
This is one of the places where I think some training in "real math" can help a lot. At the highest levels I think philosophers generally understand this, but a lot of armchair philosophers, and even some nominally trained and credentialed ones, routinely make the mistake of thinking there is a definition of "knowledge", and that arguing and fighting over what it is is some sort of meaningful activity. As if, were we all to agree on what "knowledge" is, that would somehow impact the universe in some presumably beneficial way. As if the word itself is important and has its own real ontological existence, and if we can just figure out exactly what "knowledge" really is, we'll have achieved something.

But none of that is actually true. Especially the part where it will have some sort of meaningful impact if we can just nail it down, let alone whether it would be beneficial or not.

There are many definitions of knowledge. From a perspective where you only know something if you are 100% sure of it and also abstractly "correct" (I say "abstractly" because the whole problem in the first place is that we all lack access to an oracle that will tell us whether we are correct about a fact like "is there a cow in the field?", so making this concrete is not possible), we end up in a very Cartesian place where just about all you "know" is that you exist. There are some interesting things to be said about this definition, and it's an important one philosophically and historically, but... it also taps out pretty quickly. You can only build on "I exist" so far before running out of consequences; you need more to feed your logic.

From another perspective, if we take a probabilistic view of "knowledge", it becomes possible to say "I see a cow in the field, I 'know' there's a cow there, by which I mean I have good inductive reasons to believe that what I see is in fact a cow and not a papier-mâché construct of a cow, because inductively the probability that someone has set up a papier-mâché version of a cow in the field is quite low." Such knowledge can be wrong. It isn't just a theoretical philosophy question either; I've seen things set up in fields as a joke, scarecrows good enough to fool me at first glance, lawn ornamentation meant to look like people that fooled me at a distance, etc. It's a real question. But you can still operate under a definition of knowledge where I had "knowledge" that a person was there, even though the oracle of truth would have told me I was wrong. We can in fact build on a concept of "knowledge" that "limits" to the truth but doesn't necessarily ever reach it. It's more complicated, but also a lot more useful.
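The "limits to the truth but never reaches it" idea can be sketched as Bayesian updating (my framing, not the commenter's; all probabilities below are invented for illustration). Each glance at the field raises confidence in "that's a cow" without ever producing certainty:

```python
# Hypothetical sketch: probabilistic "knowledge" as Bayesian updating.
# Likelihood numbers are invented; real observations would be messier.

def update(prior: float, likelihood_if_cow: float, likelihood_if_not: float) -> float:
    """One Bayes update of P(cow) given an observation."""
    numerator = prior * likelihood_if_cow
    return numerator / (numerator + (1 - prior) * likelihood_if_not)

belief = 0.5  # start agnostic about "there is a cow in the field"
for _ in range(3):
    # each look strongly suggests a cow (it could still be papier-mache)
    belief = update(belief, likelihood_if_cow=0.9, likelihood_if_not=0.1)

print(round(belief, 4))  # -> 0.9986: approaches, but never reaches, 1.0
```

Under this class of definitions, "knowing" there's a cow just means the posterior is high; the paper-mâché prankster is the residual probability mass that never quite disappears.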

And I'm hardly exhausting all the possible interesting and useful definitions of knowledge in those two examples. And the latter is a class of definitions, not one I nailed down entirely in a single paragraph.

Again, I wouldn't accuse the most-trained philosophers of this in general, but the masses of philosophers also tend to spend a lot of time spinning on "I lack access to an oracle of absolute truth". Yup. It's something you need to deal with, like "I think, therefore I am, but what else can I absolutely, 100% rigidly conclude?", but it's not very productive to spin on it over and over, manifestation after manifestation. You don't have one. Incorporate that fact and move on. You can't define one into existence. You can't wish one into existence. You can't splat ink on a page until you've twisted logic into a pretzel and declared that one is somehow necessary. Whether or not God exists (I personally go with "Yes"), He is clearly not just some database to be queried whenever we wonder "Hey, is that a cow out there?" If you can't move on from that, no matter how much verbiage you throw at the problem, you're going to end up stuck in a very small playground. Maybe that's all they want or are willing to do, but it's still going to be a small playground.

replies(1): >>41853230 #
4. feoren ◴[] No.41850323[source]
The vast majority of philosophical arguments are actually arguments about definitions of words. You can't actually be "wrong" in philosophy -- they never prove ideas wrong and reject them (if they did, we'd just call it "science"), so it's just an ever-accumulating body of "he said, she said". If you ask a philosophical question, that's the answer you get: "well, Aristotle said this, and Kant said that, and Descartes said this, and Searle said that." "... so, what's the answer?" "I just told you." So if you want to actually argue about something, you argue about definitions.
replies(1): >>41850514 #
5. goatlover ◴[] No.41850514[source]
Science doesn't prove things; it provides empirical support for or against theories. Philosophical ideas can be shown to be wrong if their reasoning is shown to be invalid. Words have meaning, and philosophical arguments are over the meaning of those words. The problem is that there is a "loose fit between mind and world", as one contemporary philosopher put it. We naively think words describe the world as it is, but they really don't. There are all sorts of problems with the meaning of our words when examined closely.

For example, it feels to many people like we have free will, but the meaning is hard to pin down, and there are all sorts of arguments for and against that experience of being able to freely choose, and about what it implies for things like punishment and responsibility. It's not simply an argument over words; it's an argument over something important to the human experience.

replies(2): >>41850703 #>>41853260 #
6. Maxatar ◴[] No.41850995[source]
That's what a Gettier case is: you have a justified true belief about a proposition, but the justification is merely coincidental. You were still justified in believing the proposition, and the proposition is still true, so under the "justified true belief" model of knowledge you would be considered to have known it. And yet, as this example and others demonstrate, that's not really what we'd consider knowledge, indicating that knowledge is more than justified true belief.

Whether you agree or disagree is a separate matter and something you can discuss or ponder for 5 minutes. The article is about taking a somewhat interesting concept from philosophy and applying it to a routine software development scenario.
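As a toy illustration of how coincidental justification can show up in a routine debugging scenario (every event name and timestamp below is invented): a heuristic that blames whichever change landed last before the outage is justified by timing, and its conclusion ("a recent change broke it") even happens to be true, yet the evidence points at the wrong change.

```python
# Invented Gettier-style debugging scenario: the timing-based suspect
# is not the actual culprit, even though "a recent change caused the
# outage" is both justified (by timing) and true.
from datetime import datetime

events = [
    ("quiet config sync", datetime(2024, 5, 1, 14, 1)),  # actual culprit
    ("visible deploy",    datetime(2024, 5, 1, 14, 2)),  # what you noticed
]
outage = datetime(2024, 5, 1, 14, 3)

# Justification: blame the last change observed before the outage.
suspected = max((e for e in events if e[1] < outage), key=lambda e: e[1])
actual_culprit = events[0]

print(suspected[0])        # -> visible deploy
print(actual_culprit[0])   # -> quiet config sync
```

The belief "a change just before the outage caused it" is justified and true, but the justification latched onto the wrong change, which is the Gettier shape the article maps onto debugging.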

replies(1): >>41858763 #
7. mistermann ◴[] No.41853230[source]
Are you one of the rare individuals who was cool as a cucumber during the various mass psychological meltdowns we experienced as a consequence of wars, pandemics and various other causes of human death in the last few years?

Also: how did you come to know all the things you claim to in your comment (and I suspect in a few others in your history)?

8. mistermann ◴[] No.41853260{3}[source]
> Science doesn't prove things, it provides empirical support for or against theories.

There's been some progress science must have missed out on then:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8207024/

That is one organization; many others claim they've also achieved the impossible.

replies(1): >>41853717 #
9. goatlover ◴[] No.41853717{4}[source]
Since this is a discussion on philosophy in the context of knowledge and metaphysics, scientific organizations don't claim they provide proof (in the sense of logic and truth), rather they provide rigorous scientific evidence to support their claims, such as vaccines not causing autism. But science is always subject to future revision if more evidence warrants it. There is no truth in the 100% certainty sense or having reached some final stage of knowledge. The world can always turn out to be different than we think. This is certainly true in the messy and complex fields of biology and medicine.
replies(1): >>41858734 #
10. mistermann ◴[] No.41858734{5}[source]
Your claims are demonstrably false: there are many instances of authoritative organizations that explicitly and literally assert that vaccines do not cause autism.

Out of curiosity, can you see that I am arguing from a much more advantageous position? I only have to find one exception to your popular "scientific organizations don't claim" meme (which I, and also you, can directly query on Google, finding numerous instances from numerous organizations), whereas you would have had to undertake a massive review of all communications (and many forms of phrasing) from these groups, something we both know you have not done.

A portion of the (I doubt intentional or malicious) behavior is described here:

https://en.m.wikipedia.org/wiki/Motte-and-bailey_fallacy

I believe the flaw in scientists' (and their fan base's) behavior is mainly (but not entirely) a manifestation of a defect in our culture, which is encoded within our minds and drives our behavior. Is this controversial from an abstract perspective?

It is possible to dig even deeper in our analysis to make what is going on here even more inescapable (though not undeniable), with a simple series of binary questions ("Is it possible that...") that expand the context. I'd be surprised if you don't regularly use this form of thinking when debugging computer systems.

Heck, I'm not even saying this is necessarily bad policy; sometimes deceit is literally beneficial, and this seems like a prime scenario for it. If I were in power, I wouldn't be surprised if I too took the easy way out, at least in the short term.
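The "series of binary questions" pattern invoked above maps, in debugging terms, onto bisection: each yes/no answer halves the remaining search space. A minimal sketch (the commit history here is invented):

```python
# Minimal bisection sketch, the same shape as a "series of binary
# questions". The commit history below is invented for illustration.

def first_bad(history) -> int:
    """Return the index of the first failing entry, assuming history
    starts good and ends bad (the git-bisect precondition)."""
    lo, hi = 0, len(history) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if history[mid]:      # mid still good: the break is later
            lo = mid
        else:                 # mid already bad: the break is here or earlier
            hi = mid
    return hi

commits = [True] * 6 + [False] * 4   # commits 0-5 pass, bug lands at 6
print(first_bad(commits))            # -> 6
```

Ten commits take four questions instead of ten; the narrowing is logarithmic, which is why the technique feels "inescapable" once the endpoints are agreed on.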

11. artursapek ◴[] No.41858763{3}[source]
Right, but the mere timing of two events doesn't seem like enough to "justify" a belief. It can ground a suspicion, but that's about it.