The changes were merely adjacent to the causes, and that’s super common on any system that has a few core pieces of functionality.
I think the core lesson here is that if you can’t fully explain the root cause, you haven’t found the real reason, even if it seems related.
I try a lot of obvious things when debugging to ascertain the truth. Like, does undoing my entire change fix the bug?
Is the crux of the argument that justification is an arbitrary line and ultimately insufficient?
― Ludwig Wittgenstein
When this kind of thing tries to surface, it’s a warning that you need to 10x your understanding of the problem space you are adjacent to.
But you're staff and above when you can understand when your programming model is broken, and how to experiment to find out what it really is. That almost always goes beyond the specified and tested behaviors (which might be incidentally correct) to how the system should behave in untested and unanticipated situations.
Not surprisingly, problems here typically stem from gaps in the programming model between developers or between departments, each with their own perspective on the elephant, and their incidence in production is an inverse function of how well people work together.
Of course that devolves rapidly into trying to find the "base case" of knowledge that is inherent.
The classic and oft heard “How did this ever work?”
These are correct but contrived and unrealistic, so later examples are more plausible (e.g. being misled by a mislabelled television program from a station with a strong track record of accuracy).
The point is not disproving justified true belief so much as showing the inadequacy of any one formal definition: at some point we have to elevate evidence to assumption and there's not a one-size-fits-all way to do that correctly. And, similarly to the software engineering problems, a common theme is the ways you can get bitten by looking at simple and seemingly true "slices" of a problem which don't see a complex whole.
It is worth noting that Gettier himself was cynical and dismissive of this paper, claiming he only wrote it to get tenure, and he never wrote anything else on the topic. I suspect he didn't find this stuff very interesting, though it was fashionable.
By all means you can gain a lot by making things easier to understand, but only in service of shortcuts while developing or debugging. But this kind of understanding is not the foundation your application can safely stand on. You need detailed visibility into what the system is genuinely doing, and our mushy brains do a poor job of emulating any codebase, no matter how elegant.
I think a case can't so much "disprove" JTB, so much as illustrate that adopting a definition of knowledge is more complex than you might naively believe.
That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.
But most people tend not to include that in the "your assumptions" list, and frequently it is the source of the bug.
I think the philosophical claim is that, when we think we know something, and the thing we think we know turns out to be false, what has happened isn't that we knew something false, but rather that we didn't actually know the thing in the first place. That is, not our knowledge, but our belief that we had knowledge, was mistaken.
(Of course, one can say that we did after all know it in any conventional sense of the word, and that such a distinction is at the very best hair splitting. But philosophy is willing to split hairs however finely reason can split them ….)
> When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.
https://old.reddit.com/r/PhilosophyMemes/comments/gggqkv/get...
To say that this is not "knowing" is (as another commenter noted) hair-splitting of the worst kind. In every sense it is a justified belief that happens to be false (we just do not know that yet).
On Jan 1 2024 I "know" X. Time passes. On Jan 1 2028, I "know" !X. In both cases, there is
(a) something it is like to "know" either X or !X
(b) discernible brain states that correspond to "knowing" either X or !X and that are distinct from "knowing" neither
Thus, even if you don't want to call "knowing X" actually "knowing", it is in just about every sense indistinguishable from "knowing !X".
Also, a belief that we had the knowledge that relates to X is indistinguishable from a belief that we had the knowledge that relates to !X. In both cases, we possess knowledge which may be true or false. The knowledge we have at different times alters; at all times we have a belief that we have the knowledge that justifies X or !X, and we are correct in that belief - it is only the knowledge itself that is false.
Love it
(we’ll need a few thousand of these, and the off the shelf solution is around 1k vs $1.50 for RYO )
By the way, the RISC-V Espressif ESP32-C3 is a really amazing device for < $1. It’s actually cheaper to go Modbus-TCP over WiFi than to actually put RS485 on the board with a MAX485 and the associated components. Also does Zigbee and BT, and the Espressif libraries for the radio stack are pretty good.
Color me favorably impressed with this platform.
You evidently want to use the word "know" exclusively to describe a brain state, but many people use it to mean a different thing. Those people are the ones who are having this debate. It's true that you can render this debate, like any debate, into nonsense by redefining the terms they are using, but that in itself doesn't mean that it's inherently nonsense.
Maybe you're making the ontological claim that your beliefs about X don't actually become definitely true or false until you have a way to tell the difference? A sort of solipsistic or idealistic worldview? But you seem to reject that claim in your last sentence, saying, "it is only the knowledge itself that is false."
One way is to reason from a false premise, or as I would put it, something we think is true is not true.
The other way is to mix logical levels (“this sentence is false”).
I don’t think I ever encountered a bug from mixing logical levels, but the false premise was a common culprit.
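A contrived sketch of a false-premise bug (everything here is invented for illustration): the reasoning below is perfectly justified *if* the list is sorted, and quietly wrong because that premise is false.

```python
import bisect

# The author "knows" the input is sorted, so binary search is justified.
def contains_sorted(xs, target):
    i = bisect.bisect_left(xs, target)  # correct only for sorted input
    return i < len(xs) and xs[i] == target

data = [3, 1, 2]                 # the premise "data is sorted" is false
print(contains_sorted(data, 1))  # False: the element is present, but the
                                 # justified-looking logic misses it
```

Every step of the deduction is locally sound; only the unexamined premise is broken, which is exactly why such bugs survive code review.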
If someone is just going to say "It is not possible to know false things", then sure, by that definition of "know" any brain state that involves a justified belief in a thing that is false is not "knowing".
But I consider that a more or less useless definition of "knowing" in context of both Gettier and TFA.
The strength of the justification is, I would suggest, largely subjective.
The cases cited in the article don't seem to raise any interesting issues at all, in fact. The observer who sees the dark cloud and 'knows' there is a fire is simply wrong, because the cloud can serve as evidence of either insects or a fire and he lacks the additional evidence needed to resolve the ambiguity. Likewise, the shimmer in the distance observed by the desert traveler could signify an oasis or a mirage, so more evidence is needed there as well before the knowledge can be called justified.
I wonder if it would make sense to add predictive power as a prerequisite for "justified true belief." That would address those two examples as well as Russell's stopped-clock example. If you think you know something but your knowledge isn't sufficient to make valid predictions, you don't really know it. The Zoom background example would be satisfied by this criterion, as long as intentional deception wasn't in play.
Does the code have 0 defects, 1 defect, or 2 defects?
How careful do you have to be to never be fooled? For most people, a non-zero error rate is acceptable. Their level of caution will be adjusted based on their previous error rate. (Seen in this sense, perfect knowledge in a philosophical sense is a quest for a zero error rate.)
In discussions of how to detect causality, one example is flipping a light switch to see if it makes the light go on and off. How many flips do you need in order to be sure it’s not coincidence?
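The arithmetic can be sketched directly (the null hypothesis and the thresholds are my illustrative assumptions): if the light's state were independent of the switch, n consecutive matching flips would occur by coincidence with probability 2^-n.

```python
import math

# Null hypothesis (an assumption for illustration): the light's state is a
# fair coin flip, independent of the switch. Then n consecutive flips that
# all match your switch action happen by coincidence with probability 2**-n.
def coincidence_probability(n_flips: int) -> float:
    return 0.5 ** n_flips

def flips_needed(alpha: float) -> int:
    """Smallest number of all-matching flips that pushes the coincidence
    probability below alpha."""
    return math.ceil(math.log2(1 / alpha))

print(flips_needed(0.05))   # 5 flips for the conventional 5% threshold
print(flips_needed(0.001))  # 10 flips to get below one in a thousand
```

The threshold is the whole question, of course: the math only tells you how much evidence a chosen error rate demands, not which error rate to choose.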
This is where Contextualism comes into play. Briefly, your epistemic demands are determined by your circumstances.
https://plato.stanford.edu/entries/contextualism-epistemolog...
security with cryptography is mostly about logical level problems, where each key or operation forms a layer or box. treating these as discrete states or things is also an abstraction over a seqential folding and mixing process.
debugging a service over a network has the whole stack as logical layers.
most product management is solving technical problems at a higher level of abstraction.
a sequence diagram can be a multi-layered abstraction rotated 90 degrees, etc.
Desperation to ‘know’ something for certain can be misleading when coincidence is a lot more common than proof.
Worse yet is extending the feeling of ‘justified’ to somehow ‘lessen’ any wrongness, perhaps instead of a more informative takeaway.
E.g. a neurologist would likely be happy to speak of a brain knowing false information, but a psychologist would insist that that’s not the right word. And that’s not even approaching how this maps to close-but-not-quite-exact translations of the word in other languages…
This is one of the best questions ever, not just for philosophers, but for all us regular plebes to ponder often. The number of things I know is very very small, and the number of things I believe dramatically outnumbers the things I know. I believe, but don’t know, that this is true for everyone. ;) It seems pretty apparent, however, that we can’t know everything we believe, or nothing would ever get done. We can’t all separately experience all things known first-hand, so we rely on stories and the beliefs they invoke in order to survive and progress as a species.
The more likely a bug is to make me look dumb, the more likely it is to appear only once I ask for help.
We purposefully try not to do rebases in my team for this reason.
In other words, it looks like a form of solipsism.
Furthermore, OP’s choice of putting “know” in quotes seems to suggest that author is not using the word as conventionally understood (though, of course, orthography is not an infallible guide to intent.)
IMHO, Gettier cases are useful only in that they raise the issue of what constitutes an acceptable justification for a belief to become knowledge.
Gettier cases are specifically constructed to be about true beliefs, and so do not challenge the idea that facts are true. Instead, one option to resolve the paradox is to drop the justification requirement altogether, but that opens the question of what, if anything, we can know we know. At this point, I feel that I am just following Hume’s footsteps…
EDIT: Deleted paragraph on DRY that wasn't quite right.
Gettier’s contribution — the examples with Smith — sharpens it to a point by making the “knowledge” a logical proposition — in one example a conjunction, in one a disjunction — such that we can assert that Smith’s belief in the premise is justified, while allowing the premise to be false in the world.
It’s a fun dilemma: the horns are, you can give up justification as sufficient, or you can give up logical entailment of justification.
But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma.
One can even keep buying original recipe JTB, as long as one is willing to bite the bullet that we can flip the “knowledge” bit by changing superficially irrelevant states of the world. And hey, why not?
To be able to claim there is a cow there requires additional evidence.
Only in abstract discussions like this one. And in some concrete discussions on certain topics, not "knowing" seems to be essentially impossible for most non-silent participants.
Or for the belief part, well, "it's not a lie if you believe it".
And as for the true bit, let's assume that there really is a cow, but before you can call someone over to verify your JTB, an alien abducts the cow and leaves a crop circle. Now all anyone sees is a paper-mache cow so you appear the fool but did have a true JTB - Schroedinger's JTB. Does it really matter unless you can convince others of that? On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?
JTB cases only exist to highlight bad assumptions, like being on the wrong side of a branch predictor. If you have a 0.9 JTB but get the right answer 0.1 of the time and don't update your assumptions, then you have a problem. One statue in a field? Not a big deal! *
* Unless it's a murder investigation and you're Sherlock Holmes (a truly powerful branch predictor).
Or, try renaming the variables and see if it still bothers you identically.
edit: And also the whole "is knowledge finite or infinite?" question. Is there ever a point at which we can explain everything, science ends and we can rest on our laurels? What then? Will we spend our time explaining hypotheticals that don't exist? Pure theoretical math? Or can that end too?
This is something that a lot of Greeks would have had issues with, most probably Heraclitus, and Protagoras for sure. Restricting ourselves to Aristotelian logic back in the day has been extremely limiting, so much so that a lot of modern philosophers cannot even comprehend what it is like to look outside that logic.
But what world it would be if you could flip a coin on any choice and still survive! If the world didn't follow any self-consistent logic, like a Roger Zelazny novel, that would be fantastic. Not sure that qualifies as solipsism, but still. Would society even be possible? Or even life?
Here, as long as you follow cultural norms, every choice has pretty good outcomes.
From my point of view, "to know" is a subjective feeling, an assessment on the degree of faith we put on a statement. "Knowledge" instead is an abstract concept, a corpus of statements, similar to "science". People "know" false stuff all the time (for some definition of "true" and "false", which may also vary).
Not to mention what does it even mean for something to be false. For the hypothetical savage the knowledge that the moon is a piece of cheese just beyond reach is as true as it is for me the knowledge that it's a celestial body 300k km away. Both statements are false for the engineer that needs to land a probe there (the distance varies and 300k km is definitely wrong).
Ramachandran Capgras Delusion Case
https://www.youtube.com/watch?v=3xczrDAGfT4
> On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?
This is a case of consensus reality (an intuition pump I borrowed from somewhere). Consensus reality is also respected in the quantum realm.
https://youtu.be/vSnq5Hs3_wI?t=753
While individual particles remain in quantum superposition, their relative positions create a collective consensus in the entanglement network. This consensus defines the structure of macroscopic objects, making them appear well-defined to observers, including Schrödinger's cat.
Presumably, there is a farmer who raised the cow, then purchased the papier-mâché, then scrounged for a palette of paints, and meticulously assembled everything in a field -- all for the purpose of entertaining distant onlookers.
That is software engineering. In Gettier's story, we're not the passive observers. We're the tricksters who thought papier-mâché was a good idea.
https://www.wikiwand.com/en/articles/Karl_Popper
read The problem of induction and demarcation: https://www.wikiwand.com/en/articles/Falsifiability
Basically, to sum it all up: because we aren't "omniscient," nothing can in actuality ever be known.
The issue that Gettier & friends are pointing to is that there are no examples where there is enough evidence. So under the formal definition it isn't possible to have a JTB. If you've seen enough evidence to believe something ... maybe you'd misinterpreted the evidence but still came to the correct conclusion. That scenario can play out at any evidence threshold. All else failing, maybe you're having an episode of insanity and all the information your senses are reporting is a wild hallucination, but some of the things you imagine happening are, nonetheless, happening.
Are there any good examples of gettiers in software engineering that don't rely on understanding causality, where we're just talking about "what's there" not explaining "how it got there"?
The culprit was an embedded TrueType font that had what (I think) was a strange but valid glyph name with a double forward slash instead of the typical single (IIRC whatever generated the PDF just named the glyphs after characters so /a, /b and then naturally // for slash). Either way it worked fine in most viewers and printers.
The larger scale production printer, on the other hand, like many, converted to PostScript in the processor as one of its steps. A // introduces an immediately evaluated name in PostScript, so when it came through unchanged, parsing it crashed the printer.
So we have a font, in a PDF, which got turned into Postscript, by software, on a certain machine which presumably advertised printing PDF but does it by converting to PS behind the scenes.
A lot of layers there and different people working on their own piece of the puzzle should have been 'encapsulated' from the others but it leaked.
But isn't the paper-mache cow case solved by simply adding that the evidence for the justification also needs to be true?
The definition already requires the belief to be true, that's a whole other rabbit hole, but assuming that's valid, it's rather obvious that if your justification is based on false evidence then it is not justified, if it's true by dumb luck of course it doesn't count as knowing it.
EDIT: Okay I see how it gets complicated... The evidence in this case is "I see something that looks like a cow", which I guess is not false evidence? Should your interpretation of the evidence be correct? Should we include into the definition that the justification cannot be based on false assumptions (existing false beliefs)? I can see how this would lead to more papers.
EDIT: I have read the paper and it didn't really change my view of the problem. I think Gettier is just using a sense of "justified" that is somewhat colloquial and ill defined. To me a proposition is not justified if it is derived from false propositions. This kind of solves the whole issue, doesn't it?
To Gettier it is more fuzzy, something like having reasonably sufficient evidence, even if it is false in the end. More like "we wouldn't blame him for being wrong about that, from his point of view it was reasonable to believe that".
I understand that making claims of the absolute truthfulness of things makes the definition rather useless, we always operate on incomplete evidence, then we can never know that we know anything (ah deja vu). But Gettier is not disputing the part of the definition that claims that the belief needs to be true to be known.
EDIT: Maybe the only useful definition is that know = believe, but in speech you tend to use "he knows P" to hint that you also believe P. No matter the justification or truthfulness.
EDIT: I guess that's the whole point that Gettier was trying to make: that all accepted definitions at the time were ill-defined, incomplete and rather meaningless, and that we should look at it closer. It's all quite a basic discussion on semantics. The paper is more flamebait (I did bite) than a breakthrough, but it is a valid point.
For the autofocus example: if the statement in question was "my patch broke the autofocus," it would not be Gettier, because it is not true (the unrelated pushed changes broke it); if the statement was "my PR broke the autofocus," it would not be Gettier, because it is JTB and the justification (it was working before the PR, but not after) is correct, i.e., the cause of the belief, the perception, and the deduction are all correct. The same holds if the statement was "the autofocus is broken."
It would be Gettier if the person reporting the bug was using an old (intact) version of the app but was using Firefox with a website open in another window on another screen, which was sending alerts stealing the focus.
The most common example of true Gettier cases in software dev is probably the following: A user reports a bug but is using an old version, and while the new version should have the bug fixed, it's still there.
The statement is "the current version has the bug." The reporter has Justified Belief because they see the bug and recently updated, but the reporter cannot know, as they are not on the newest version.
In classical logic statements can be true in and of themselves even if there as no proof of it, but in intuitionistic logic statements are true only if there is a proof of it: the proof is the cause for the statement to be true.
In intuitionistic logic, things are not as simple as "either there is a cow in the field, or there is none" because as you said, for the knowledge of "a cow is in the field" to be true, you need a proof of it. It brings lots of nuance, for example "there isn't no cow in the field" is a weaker knowledge than "there is a cow in the field".
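That asymmetry can be made concrete. As a sketch in Lean 4 (assuming nothing beyond core), having the cow lets you constructively refute "there is no cow," while the converse direction needs a classical axiom:

```lean
-- P → ¬¬P is constructively provable: a cow in the field refutes "no cow".
example (P : Prop) (h : P) : ¬¬P :=
  fun hn => hn h

-- ¬¬P → P is *not* provable intuitionistically; recovering it requires a
-- classical principle such as Classical.byContradiction.
```

So "there isn't no cow" really is the strictly weaker claim: it follows from the stronger one, but not vice versa.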
https://fitelson.org/proseminar/gettier.pdf
It's really worth a read, it's remarkably short and is written in very plain language. It takes less than 10 mins to go through it.
The problem is that when you're working at such a low level as trying to define what it means to know something, even simple inferences become hellishly complicated. It's like trying to bootstrap a web app in assembly.
A cow-horse hybrid is not a cow, it's a cow-horse hybrid.
A cow with a genetic mutation is a cow with a genetic mutation.
A cow created in a lab, perhaps even grown 100% by artificial means in-vitro is of course still a cow since it has the genetic makeup of a cow.
The word cow is the word cow, its meaning can differ based on context.
Things like this is why philosophers enjoy zero respect from me and why I'm an advocate for abolishing philosophy as a subject of study and also as a profession. Anyone can sit around thinking about things all day. If you spend money on studying it at a university you're getting scammed.
Also, knowledge is finite based purely on the assumption that the universe is finite. An observer outside the universe would be able to see all the information in the universe, and they would conclude that you can't pack infinite amounts of knowledge into a finite volume.
A flat-earther may feel they "know" the earth is flat. I feel that I "know" that their feeling isn't "true" knowledge.
This is the simple case where we all (in this forum, or at least I hope so) agree. If we consider controversial beliefs, such as the existence of God, where Covid-19 originated or whether we have free will, people will often still feel they "know" the answer.
In other words, the experience of "knowing" is not only personal, but also interpersonal, and often a source of conflicts. Which may be why people fight over the definition.
In reality, there are very few things (if any) that can be "known" with absolute certainty. Anyone who has studied modern Physics would "know" that our intuition is a very poor guide to fundamental knowledge.
The scientific method may be better in some ways, but even that can be compromised. Also, it's not really useful for people outside the specific scientific field. For most people, scientific findings are only "known" second hand, from seeing the scientists as authorities.
A bigger problem, though, is that a lot of people are misusing the label "scientific" to justify beliefs or propaganda that has only weak (if any) support from the use of hard science.
In the end, I don't think the word "knowledge" has any fundamental correspondence to something essential.
Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved because it provided some evolutionary advantage.
The types of "knowledge" that we feel we "know", to the extent that we learn them from others, seem to evolve in parallel to this as memes/memeplexes (using Dawkins's original sense of "meme").
Such memes spread in part virally by pure replication. But if they convey advantages to the hosts, they may spread more effectively.
For example, after Galilei/Newton, Physics provided several types of competitive advantage to those who saw it as "knowledge". Some economic, some military (like calculating artillery trajectories). This was especially the case in a politically and religiously fragmented Europe.
The memeplex of "Science" seems to have grown out of that. Not so much because it produces absolute truths, but more because those who adopted a belief in science could reap benefits from it that allowed them to dominate their neighbours.
In other areas, religious/cultural beliefs (also seen as "knowledge" by the believers) seem to have granted similar power to the believers.
And it seems to me that this is starting to become the case again, especially in areas of the world where the government provides a welfare state to all, which prevents scientific knowledge from granting a differential survival/reproductive advantage to those who still base their knowledge on Science.
If so, Western culture may be heading for another Dark Age....
I see no practical usefulness in all of these examples, except as instances of the rule that you can get correct results from incorrect reasoning.
From “it has the genetic makeup of a cow”, you’re saying that what makes a cow a cow is the genetic makeup. But then which part of that DNA defines the cow? What can vary, and by how much, before a cow stops being a cow?
The point is that you can give any definition of “cow”, and we can imagine a thing that fits this definition yet you’d probably not consider a cow. It’s a reflection on how language relates to reality. Whether it’s an interesting point or not is left to the reader (I personally don’t think it is)
A tool for filling the fields with papier-mache cows.
- JTB is not enough: for something to be “true” it needs _testability_. In other words, make a prediction from your knowledge-under-test which would be novel information (for example, “we’ll find fresh cow dung in the field”).
- nothing is really ever considered “true”; there are only theories that describe reality increasingly correctly
In fact, physics did away with the J: it doesn’t matter whether your belief is justified if it’s tested. You could make up a theory with zero justification (which doesn’t contradict existing knowledge, ofc), make predictions, and if they’re borne out, that’s still knowledge. The J is just the way that beliefs are formed (inference)
For example, if I toss a coin and it comes up heads, put the coin in my pocket and then go about my day, and later on say to somebody "I tossed a coin earlier, and it came up heads", that is a JTB, but it's not testable. You might assume I'm lying, but we're not talking about whether you have a JTB in whether I tossed a heads or not, we're talking about if I have one.
There are many areas of human experience where JTB is about as good as we are going to get, and testability is off-limits. If somebody tells me they saw an alien climb out of a UFO last night, I have lots of reasons to not believe them, but if this a very trustworthy individual who has never lied to me about anything in my decades of experience of knowing them, I might have a JTB that they think this is true, even if it isn't. But none of it is testable.
Physics - the scientific method as a whole - is a superb way to think about and understand huge swathes of the World, but it has mathematically proven limits, and that's fine, but let's not assume that just because something isn't testable it can't be true.
How could something become true in the first place such that it could be tested to discover that it is true, if the test precedes and is a condition for truth?
Do you have tests I can run on each of your many assertions here that prove their truth?
Also, no surprise the rabbit hole came from Haskell, where those types (huh) are attracted to this more foundational theory of computation.
I thought this was interesting:
> Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved because it provided some evolutionary advantage.
It is substantially hardware (the brain) and software (the culturally conditioned mind).
Rewind 100 years and consider what most people "knew" that black people were. Now, consider what most people nowadays "know" black people are not. So, definitely an improvement in my opinion, but if we can ever get our heads straight about racial matters I think we'll be well on our way to the second enlightenment.
First-person "know" is belief. To some extent, this is just faith! Yes, we have faith that the laws of physics won't change tomorrow, or that we remember yesterday happened, etc. Science tries to push that faith close to fact by verifying the fuck out of everything. But we will never know why anything...
The other "know" is some kind of concept of absolute truth, and a coincidence that what someone believes matches it. Whether that coincidence is chance or astute observation, or, in the paper's case, both.
But you may have conflated 'testability' and 'tested'. Can I know there is a cow in the field if I don't check? Seeing it was already evidence, testing just collects more evidence, so how can that matter? Should we set a certainty threshold on knowledge? Could be reasonable.
Maybe prediction-making is too strong to be necessary for 'knowing', if we allow knowing some fact in a domain of knowledge of which you're otherwise clueless. Although it's very reasonable to not call this knowledge. Suppose I learn of a mathematical theorem in a field that's so unfamiliar that I can't collect evidence to independently gain confidence in it.
Is this assertion not self-refuting though?
Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true) and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true.) But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.
Unsurprisingly, gaining additional evidence can change our beliefs.
The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong base estimate that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of mental processes here. Don't conclude that it accurately describes everything.)
The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.
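One round of that updating can be sketched with made-up numbers (the likelihoods and priors below are illustrative assumptions, not measurements): once replicas become part of your world, the same cow-shaped view proves much less.

```python
# One round of Bayesian updating for the cow case. All numbers are made up:
# a real cow "looks like a cow" 95% of the time; so does a good papier-mache
# replica 90% of the time.
def posterior_real_cow(prior_real: float,
                       p_looks_given_real: float = 0.95,
                       p_looks_given_fake: float = 0.90) -> float:
    prior_fake = 1.0 - prior_real
    evidence = (p_looks_given_real * prior_real
                + p_looks_given_fake * prior_fake)
    return p_looks_given_real * prior_real / evidence

# Before meeting the jokester, replicas are rare: cow shape ~ real cow.
print(round(posterior_real_cow(0.99), 3))  # 0.991
# After repeated trickery the prior collapses; the same view settles little.
print(round(posterior_real_cow(0.50), 3))  # 0.514
```

The evidence term is why: when fakes look almost as cow-like as cows, the likelihood ratio is close to 1, and only the prior carries the conclusion.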
Now, software engineering:
We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:
- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.
Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.
But it's also about fuzzy stuff which doesn't follow the A or not A logic.
With Bayes, you're computing P(Model|Evidence), but this doesn't explain where the Model comes from or why the Evidence is relevant to the model.
If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.
What's happening with animals is that we have a certain, deterministic, non-bayesian primitive model of our bodies from which we can build more complex models.
So we engage in causal reasoning, not bayesian updating: P(EvidenceCausedByMyBody| do(ActionOfMyBody)) * P(Model|Evidence)
It has been "thematically appropriated" by a certain sort of pop-philosophy, but it says nothing relevant.
Philosophy isn't the activity of trying to construct logical embeddings in deductive proofs. If anyone ever thought so, then there's some thin sort of relevance, but no one ever has.
My favourite debugging technique is "introduce a known error".
This validates that your set of "facts" about the file you think you're editing are actually facts about the actual file you are editing.
For example: is the damn thing even compiling?
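A minimal sketch of the "introduce a known error" check (the canary module here is invented for illustration): if a deliberately planted error never fires, then your edits aren't reaching the runtime at all, and everything you "know" about the file is suspect.

```python
# Prove the file you're editing is the file actually being executed by
# introducing a known error and confirming it surfaces.
import importlib.util
import os
import tempfile

# Stand-in for the source file we *think* the system is running.
source = "VALUE = 42\nraise RuntimeError('canary: edits to this file are live')\n"

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(source)
    path = f.name

spec = importlib.util.spec_from_file_location("canary_mod", path)
mod = importlib.util.module_from_spec(spec)
try:
    spec.loader.exec_module(mod)
    # If we get here, the planted error never fired: we are not
    # actually running the file we thought we were editing.
    canary_fired = False
except RuntimeError:
    # The canary fired, so our edits really reach the runtime.
    canary_fired = True
finally:
    os.remove(path)

print("canary fired:", canary_fired)
```

The same trick works at every layer: a syntax error that doesn't break the build, a log line that never appears, a test that can't fail -- each one falsifies a "fact" you held about which artifact is live.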
I could just as easily construct a problem in which I quietly turn off your background, which would mean your Zoom partner does possess knowledge while you do not, even though now it is you who thinks he does.
>certain, deterministic, non-bayesian primitive model of our bodies
What makes you certain the model of our body is non-Bayesian? Does this imply we have an innate model of our body and how it operates in space? I could be convinced that babies don't inherently have a model of their bodies (or that they control their bodies) and it is a learned skill. Possibly learned through some pseudo Bayesian process. Heck, the unathletic among us adults may still be updating our Bayesian priors with our body model, given how often it betrays our intentions :-)
Similarly, the real interesting stuff regards the reliability and predictive power of knowledge-producing mechanisms, not individual pieces produced by it.
Another analogy is confidence intervals, which are defined through a collective property: a confidence interval is an interval produced by a confidence process, and the meat of the definition concerns the confidence process, not its output.
I always found the Gettier problems unimpressive and mainly a distraction and a language game. Watching out for smoke-like things to infer whether there is a fire is a good survival tool in the woods and advisable behavior. Neither it nor anything else is a 100% surefire way to obtain bulletproof capital-letter Truth. We are never 100% justified ("what if you're in a simulation?", "you might be a Boltzmann brain!"). Even stuff like math is uncertain and we may make a mistake when mentally adding 7454+8635, we may even have a brainfart when adding 2+2, it's just much less likely, but I'm quite certain that at least one human manages to mess up 2+2 in real life every day.
It's a dull and uninteresting question whether it's knowledge. What do you want to use the fact of it being knowledge or not for? Will you trust stuff that you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish something or not, without reference to terms like "having knowledge".
...whoa. That makes complete sense.
So you're saying that there must be some form of meta-rationality that gives cues to our attempts at Bayesian reasoning, directing those attempts how to make selections from each set (the set of all possible models and the set of all sensory inputs) in order to produce results that constitute actual learning.
And you're suggesting that in animals and humans at least, the feedback loop of our embodied experience is at least some part of that meta-rationality.
That's an incredible one-liner.
Hello darkness my old friend…
In Bayesian approaches it's assumed we have some implicit metatheory which tells us how the data relates to the model, so really all Bayesian formulae should have an implicit 'Theory' condition which provides, e.g., the actual probability value:
P(Model|Evidence, Theory(Model, Evidence))
The problem is there's no way of building such a theory using bayesianism, it ends in a kind of obvious regress: P(P(P(M|E, T1)|T2)|T3,...)
What theory provides the meaning of 'the most basic data'? ie., how it relates to the model? (and eg., how we compute such a probability).
The answer to all these problems is: the body. The body resolves the direction of causation, it also bootstraps reasoning.
In order to compute P(ShapeOfCup|GraspOnCup, Theory(Grasp, Shape)), I first (in early childhood) build such a theory by computing P(ShapeSensation|do(GraspMovement), BasicTheory(BasicMotorActions, BasicSensations)).
Where 'do' is non-Bayesian conditioning, i.e., it denotes the probability distribution which arises specifically from causal intervention. And "BasicTheory" has to be in-built.
In philosophical terms, the "BasicTheory" is something like Kant's synthetic a priori -- though there's many views on it. Most philosophers have realised, long before contemporary stats, that you cannot resolve the under-determination of theory by evidence without a prior theory.
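The distinction between conditioning and intervention can be made concrete with a toy simulation (the structure and probabilities are invented for illustration): a hidden common cause drives both X and Y, so observing X predicts Y even though intervening on X changes nothing.

```python
# Toy illustration of why P(Y|X=x) differs from P(Y|do(X=x)):
# a hidden cause Z drives both X and Y, and X has no effect on Y at all.
import random

random.seed(0)

def sample(intervene_x=None):
    z = random.random() < 0.5            # hidden common cause
    x = z if random.random() < 0.9 else not z
    if intervene_x is not None:
        x = intervene_x                  # do(X=x): sever X's dependence on Z
    y = z                                # Y depends only on Z
    return x, y

N = 100_000

# Observational: condition on X=1 among passively collected samples.
obs = [sample() for _ in range(N)]
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Interventional: force X=1 and see what happens to Y.
ints = [sample(intervene_x=True) for _ in range(N)]
p_y_do_x = sum(y for _, y in ints) / N

print(f"P(Y|X=1)     ~ {p_y_given_x:.2f}")  # near 0.9: X predicts Y
print(f"P(Y|do(X=1)) ~ {p_y_do_x:.2f}")     # near 0.5: X doesn't cause Y
```

The grasping example is the same move in reverse: because the baby's own motor actions are genuine interventions, `do(GraspMovement)` gives it the causal distribution directly, which passive observation alone could never disentangle.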
That's how you get things like equipment operators insisting that you have to adjust the seat before the boot will open.
So for any given claim in Philosophy, if you could find a way to either (a) compare it to the world or (b) state it in unambiguous symbolic terms, then we'd stop calling it Philosophy. As a result it seems like the discipline is doomed to consist of unresolvable debates where none of the participants even define their terms quite the same way.
Crazy idea, or no?
"Some ungentle reader will check us here by informing us that philosophy is as useless as chess, as obscure as ignorance, and as stagnant as content. “There is nothing so absurd,” said Cicero, “but that it may be found in the books of the philosophers.” Doubtless some philosophers have had all sorts of wisdom except common sense; and many a philosophic flight has been due to the elevating power of thin air. Let us resolve, on this voyage of ours, to put in only at the ports of light, to keep out of the muddy streams of metaphysics and the “many-sounding seas” of theological dispute. But is philosophy stagnant? Science seems always to advance, while philosophy seems always to lose ground. Yet this is only because philosophy accepts the hard and hazardous task of dealing with problems not yet open to the methods of science—problems like good and evil, beauty and ugliness, order and freedom, life and death; so soon as a field of inquiry yields knowledge susceptible of exact formulation it is called science. Every science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement. Philosophy is a hypothetical interpretation of the unknown (as in metaphysics), or of the inexactly known (as in ethics or political philosophy); it is the front trench in the siege of truth. Science is the captured territory; and behind it are those secure regions in which knowledge and art build our imperfect and marvelous world. Philosophy seems to stand still, perplexed; but only because she leaves the fruits of victory to her daughters the sciences, and herself passes on, divinely discontent, to the uncertain and unexplored."
I think there are plenty of philosophical problems that emerge from our desire to describe things in centralized ways. Consciousness, understanding and intelligence are three of them. I prefer "search" because it is decentralized, and covers personal, inter-personal and social domains. Search defines a search space, unlike consciousness, which is silent about the environment and other people when we talk about it. Search does what consciousness, understanding and intelligence are for. All mental faculties - attention, memory, imagination, planning - are forms of search. Learning is search for representation. Science is search, markets are search, even DNA evolution and protein folding are search. It is universal and more scientific. Search removes a lot of the mystery and doesn't make the mistake of centralizing itself in a single human.
Read the example of the black swan in the wiki link.
That's arguably good. If you restrict yourself to something that you know is a valid method of ascertaining truth, then you have much higher confidence in the conclusion. The fact that we still struggle even with getting this restricted method shows that restrictions are necessary and good!
Then you bootstrap your way to a more comprehensive method of discourse from that solid foundation. Like Hilbert's program, which ultimately revealed some incredibly important truths about logic and mathematics.
"Anyone can sit around thinking about things all day" is like saying "anybody can sit and press keys on a keyboard all day".
I took a semester of philosophy at uni, perhaps the best invested time during my years there and by far more demanding than most of what followed. 100 % recommend it for anyone who wants to hone their critical reasoning skills and intellectual development in general.
Tumblr is loginwalled now, so I can't find the good version of this, but I'll try and rip it:
Philosophical questions like "what is knowledge" are hard precisely because everyone has an easy and obvious explanation that is sufficient to get them through life.
But when forced to articulate that explanation, people often find it to be incompatible with other people's versions. Upon probing, the explanations don't hold up at all. This is why some ancient Greek thought experiments can be mistaken for zen koans.
Yeah, you can get by in life without finding a rigorous answer. The vast majority of human endeavor beyond subsistence can be filed under the category "I'm not sure I see the big deal."
To say that about the question of knowledge and then vamp for 200 words is not refusing to engage. It's patching up a good-enough answer to suit a novel challenge and moving on. Which is precisely why these questions are hard, and why some people are so drawn to exploring for an answer.
But none of that is actually true. Especially the part where it will have some sort of meaningful impact if we can just nail it down, let alone whether it would be beneficial or not.
There are many definitions of knowledge. One perspective says you only know something if you are 100% sure about it and also abstractly "correct". I call it "abstract" because the whole problem in the first place is that we all lack access to an oracle that will tell us whether we are correct about a fact like "is there a cow in the field?", so making this concrete is not possible. From that perspective we end up in a very Cartesian place where just about all you "know" is that you exist. There's some interesting things to be said about this definition, and it's an important one philosophically and historically, but... it also taps out pretty quickly. You can only build on "I exist" so far before running out of consequences; you need more to feed your logic.
From another perspective, if we take a probabilistic view of "knowledge", it becomes possible to say "I see a cow in the field, I 'know' there's a cow there, by which I mean, I have good inductive reasons to believe that what I see is in fact a cow and not a paper mâché construct of a cow, because inductively the probability that someone has set up a paper mâché version of the cow in the field is quite low." Such knowledge can be wrong. It isn't just a theoretical philosophy question either, I've seen things set up in fields as a joke, scarecrows good enough to fool me on a first glance, lawn ornamentation meant to look like people as a joke that fooled me at a distance, etc. It's a real question. But you can still operate under a definition of knowledge where I still had "knowledge" that a person was there, even though the oracle of truth would have told me that was wrong. We can in fact build on a concept of "knowledge" in which it "limits" to the truth, but doesn't necessarily ever reach there. It's more complicated, but also a lot more useful.
And I'm hardly exhausting all the possible interesting and useful definitions of knowledge in those two examples. And the latter is a class of definitions, not one I nailed down entirely in a single paragraph.
Again, I wouldn't accuse the most-trained philosophers of this in general, but the masses of philosophers also tend to spend a lot of time spinning on "I lack access to an oracle of absolute truth". Yup. It's something you need to deal with, like "I think, therefore I am, but what else can I absolutely 100% rigidly conclude?", but it's not very productive to spin on it over and over, in manifestation after manifestation. You don't have one. Incorporate that fact and move on. You can't define one into existence. You can't wish one into existence. You can't splat ink on a page until you've twisted logic into a pretzel and declared that it is somehow necessary. If God does exist, which I personally go with "Yes" on, but either way, He clearly is not just some database to be queried whenever we wonder "Hey, is that a cow out there?" If you can't move on from that, no matter how much verbiage you throw at the problem, you're going to end up stuck in a very small playground. Maybe that's all they want or are willing to do, but it's still going to be a small playground.
If it's an ability that later develops independent of experience with the exterior world, it seems untestable. I.e., how can you test the theory without a baby being in the world in the first place?
Read this dialogue with ChatGPT to see why:
https://chatgpt.com/share/670e7f9e-d1d0-8001-b1ef-3f4cbc85b9...
It’s a bit long winded and gets into much more detail but I will post ChatGPT’s most relevant response below:
You’re right to point out that complexity alone doesn’t necessarily rule out deduction. Deduction can, in principle, work even in highly complex systems as long as the premises are perfectly known and logically valid. So the real issue with why deduction fundamentally does not exist in reality comes down to the nature of human knowledge and the way we interact with reality itself. Here’s why deduction struggles at a more fundamental level:
1. The Problem of Incomplete Knowledge
In mathematics and formal logic, deduction works because the premises are often abstract, well-defined, and complete within a given system (e.g., “All triangles have three sides”). In contrast, human knowledge of reality is never complete. We can never be sure we have all the relevant facts, laws, or variables. Even with the most advanced observational tools, there are always things we don’t know or can’t foresee.
• In mathematics: Premises like “All even numbers are divisible by 2” are universally true within that system.
• In reality: We might observe many instances of a phenomenon and think we know the rules, but there could always be exceptions or unknown factors (as in the Black Swan problem).
Because we cannot ever have perfect, complete premises about the world, any deductions we make based on our observations are always vulnerable to being undermined by new information.
2. Reality is Unbounded and Open-Ended
Mathematical systems and formal logic operate in closed systems with clearly defined rules. Reality, on the other hand, is open-ended and continuously evolving. There’s no fixed “set” of all knowledge about the universe. What we take to be the laws of nature today might change tomorrow with new discoveries, so the premises we use for deductive reasoning in the real world are inherently uncertain.
For instance, before the discovery of quantum mechanics, classical mechanics seemed to perfectly describe the physical world. Once quantum theory emerged, the premises on which classical deductions were made had to be re-examined.
• In a closed system (like math): You can set the premises and they remain stable.
• In reality: The premises are constantly subject to change or reinterpretation as we learn more.
3. The Problem of Infinite Regress (Uncertainty in Premises)
Even if you think you have solid premises to base deduction on, those premises themselves depend on other premises, which in turn depend on further premises. This infinite regress means that you can never be certain that your foundational premises are truly, absolutely correct.
For example:
• You might reason deductively that “all swans are white” because all observed swans have been white.
• But this premise itself is based on inductive observation, which is inherently fallible.
• Even if we had observed all swans ever seen by humans, we still couldn’t deduce that “all swans are white” without being omniscient, since future discoveries could prove otherwise.
In the end, any attempt to apply deduction to reality rests on premises that ultimately cannot be guaranteed to be perfectly, universally true, leading to a breakdown in the validity of deduction in real-world scenarios.
4. The Distinction Between Reality and Abstraction
Mathematics and logic are abstract constructs—they exist independently of the physical world and follow internally consistent rules. Reality, on the other hand, is not an abstract system; it is something we experience, observe, and interact with. This creates a fundamental mismatch:
• Abstractions (like mathematics) allow us to create premises and rules that are certain, because we define them.
• Reality doesn’t conform to these strict, definable rules—it involves uncertainty, chance, and emergent properties that abstractions can’t fully capture.
Because reality is not abstract, we cannot reduce it to a system of premises and rules in the same way we can with mathematics. Any attempt to do so will always miss something essential, undermining the validity of deduction in practice.
5. Chaos and Uncertainty in Physical Systems (ChatGPT is wrong here, so I deleted it… it references chaos theory, which is technically still deterministic; only quantum theory says things are fundamentally unknowable, so ChatGPT is right from the perspective of fundamental uncertainty, but it used chaos theory wrongly in its reasoning)
Conclusion: Fundamental Uncertainty and Incompleteness
The fundamental issue with deduction in reality is that human knowledge is inherently incomplete and uncertain. Reality is an open, evolving system where new discoveries and unforeseen events can change what we thought we knew. Deduction requires absolute certainty in its premises, but in reality, we can never have that level of certainty.
At its core, the reason deduction doesn’t fully apply to reality is because reality is far more complex, open-ended, and fundamentally uncertain than the closed, abstract systems where deduction thrives. We cannot create the perfect, unchanging premises needed for deduction, and as a result, deductions in the real world are always prone to failure when confronted with new information or complexities we hadn’t accounted for.
Not a crazy idea – that is called logic. Which is a field of philosophy. Philosophy and math intersect more than many people think.
Even the Juris Doctor is a branch of philosophy. After all, what is justice?
Eg., it might be that the kind of "theory" which exists is un/pre-conscious. So that it takes a long time, comparatively, for the baby to become aware of it. Until the baby has a self-conception it cannot consciously form the thought "I am grasping" -- however, consciousness imv is a derivative-abstracting process over-and-above the sensory motor system.
So the P(Shape|do(Grasp), BasicTheory(Grasp, Shape)) actually describes something like a sensory-motor 'structure' (e.g., a distribution of shapes associated with sensory-motor actions). The proposition "I am grasping", which allows expressing a propositional confidence, requires (self-)consciousness: P(Shape|"I have grasped", Theory(Grasp, Shape)) -- Bayesianism only makes sense when the arguments of probability are propositions (since it's about beliefs).
What's the relationship between the bayesian P(Shape|"I have...") and the causal P(Shape|do(Grasp)) ? The baby requires a conscious bridge from the 'latent structural space' of the sensory-motor system to the intentional belief-space of consciousness.
So P(Shape|do(Grasp)) "consciously entails" P(Shape|"I have...") iff the baby has developed a theory, Theory(MyGrasping|Me)
But, perhaps counter-intuitively, it is not this theory which allows the baby to reliably compute the shape based on knowing "it's their action". It's only the sensory-motor system which needs to "know" (metaphorically) that the grasping is of the shape.
Maybe a better way of putting it, then, is that the baby requires a procedural mechanism which (nearly) guarantees that its actions are causally associated with its sensations, such that its sensations and actions are in a reliable coupling. This 'reliable coupling' has to provide a theory, in a minimal sense, of how likely/relevant/salient/etc. the experiences are given the actions.
It is this sort of coupling which allows the baby, eventually, to develop an explicit conscious account of its own existence.
If there's a bug - things on other levels will adapt to that bug, creating a "gettier" waiting to happen.
Another feedback-related concept is false independence. Imagine a guy driving a car over a hilly road with a 90 mph speed limit, holding exactly 90 mph the whole way. The speed of his car is not correlated with the position of his foot on the gas pedal (it's always 90 mph). On the other hand, the position of the gas pedal and the angle of the road are correlated.
This example is popular in macroeconomics (to explain why central bank interest rates and inflation might seem to be independent).
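A quick sketch of that driver (all numbers invented, and the controller is assumed perfect): with speed held exactly constant, speed is uncorrelated with the pedal, while the pedal perfectly tracks the road.

```python
# "False independence": a perfect speed controller makes speed
# uncorrelated with the pedal, while the pedal tracks the road's slope.
import math
import random

random.seed(1)

slopes = [random.uniform(-5, 5) for _ in range(1000)]  # road angle, degrees
# perfect cruise control: pedal position compensates exactly for the slope
pedals = [0.5 + 0.05 * s for s in slopes]
speeds = [90.0 for _ in slopes]                        # always the limit

def corr(a, b):
    """Pearson correlation; returns 0.0 when either series is constant."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

print(corr(pedals, speeds))  # 0.0: pedal "doesn't affect" speed
print(corr(pedals, slopes))  # ~1.0: pedal perfectly tracks the road
```

The better the feedback controller, the more invisible the causal link it is exploiting becomes in the data, which is exactly the trap for anyone debugging a system with a compensating layer on top of the bug.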
Also I don't think this definition fits with people's intuition. At least, certainly not my own. There are times where I realise I'm wrong about something I thought I knew. When I look back, I don't say "I knew this, and I was wrong". I say "I thought I knew this, but I didn't actually know it".
E.g., If motor movement and causal inference are coupled, would you expect a baby born with locked in syndrome to have a limited notion of self?
***
justified: in the sense of deriving from evidence
true: because it doesn't make sense to "know" a falsehood
belief: i.e., a proposition in your head
***
Justified: there is an error message
true: there is an error condition
belief: the engineer observes the message and condition
---
Where's my cow?
Are you my cow? [0]
0: https://www.amazon.com/Wheres-My-Cow-Terry-Pratchett/dp/0060...
For example, it feels like we have free will to many people, but the meaning is hard to pin down, and there are all sorts of arguments for and against that experience of being able to freely choose. And what that implies for things like punishment and responsibility. It's not simply an argument over words, it's an argument over something important to the human experience.
And to give a concrete example related to this as a whole, people should have known that getting to know something by not knowing it more and more is a valid epistemological take; just look at Christian Orthodox Hesychasm and its doctrine about God (paraphrased, it goes like this: "the more you are aware of the fact that you don't know God, the more you actually know/experience God"). Christian Orthodox Hesychasm is, of course, in direct connection with neo-Platonism/Plotinism, but because the neo-Platonist "doctrine" on truth has never been mathematically formalized (presuming that that would even be possible), the scientific world chooses to ignore it and only focuses on its own restricted way of looking at truth and, in the end, of experiencing truth.
This is not only testable, but central to neuroscience and, I'd claim, to any actual science of intelligence -- rather than the self-aggrandising CS mumbo-jumbo.
On the testing side, you can lesion various parts of the sensory-motor system of mice, run them in various maze-solving experiments under various conditions (etc.) and observe their lack of ability to adapt to novel environments.
I think that, without using a definition of "knowing" that fits the description of definitions you are declaring useless, you won't be able to make any sense of either Gettier or TFA. So, however useful or useless you may find it in other contexts, in the context of trying to understand the debate, it's a very useful family of definitions of "knowing"; it's entirely necessary to your success in that endeavor.
A three-page paper that shook philosophy, with lessons for software engineers - https://news.ycombinator.com/item?id=18898646 - Jan 2019 (179 comments)
Whether you agree or disagree is a separate matter and something you can discuss or ponder for 5 minutes. The article is about taking a somewhat interesting concept from philosophy and applying it to a routine software development scenario.
Suppose you've got a class library with no source, and the documentation defines a get method for some calculated value. But suppose that what the get method actually does is return an incorrectly calculated value. You're not getting the right calculated value, but you're getting a calculated value none the less. But then finally suppose that in the same code is the right calculated value in unreachable code or an undocumented method.
On the one hand, you have a justified true belief that "the getter returns a calculated value": (1) you believe the getter returns a value; (2) that belief didn't come from nowhere, but is justified by you getting values back that look exactly like calculated values; (3) and the class does, in fact, have code in it to return a correctly calculated value.
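That scenario can be sketched like this (the class and method names are invented for illustration): the value you observe justifies your belief, the correct calculation really exists in the class, but the two never meet.

```python
# A Gettier-style getter: the documented method returns a miscalculated
# value, while a correct-but-unreachable calculation sits alongside it.
class ReportingLibrary:
    """Stand-in for a closed-source class whose docs say get_total()
    returns a calculated total."""

    def __init__(self, items):
        self.items = items

    def get_total(self):
        # Buggy: silently drops the last item, yet still returns
        # something that *looks* like a plausible calculated value.
        return sum(self.items[:-1])

    def _correct_total(self):
        # The right calculation exists, but nothing ever calls it.
        return sum(self.items)

lib = ReportingLibrary([10, 20, 30])
observed = lib.get_total()     # 30: the justified, plausible-looking belief
actual = lib._correct_total()  # 60: the truth-maker you never observed
print(observed, actual)
```

Your belief "the getter returns a calculated value" is justified by the outputs and made true by code in the class, but the code making it true is not the code producing your evidence, which is the Gettier gap.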
As a matter of linguistic convenience, it's easier to say that relativity (or theory X) is right means that people who use relativity to make predictions make correct predictions as opposed to relativity itself being correct or incorrect.
"All swans are white."
This statement cannot be proven because it's not possible to observe all swans. There may be some swan in some hidden corner of the earth (or universe) that I did not see.
If I see one black swan, I have falsified that statement.
When you refer to "Not all swans are white", this statement can be proven true. But why? Because the original statement is a universal claim and the negation is a particular claim.
The key distinction between universal claims and particular claims explains why you can "prove" the statement "Not all swans are white." Universal claims, like "All swans are white," attempt to generalize about all instances of a phenomenon. These kinds of statements can never be definitively proven true because they rely on inductive reasoning—no matter how many white swans are observed, there’s always the possibility that a counterexample (a non-white swan) will eventually be found.
In contrast, particular claims are much more specific. The statement "Not all swans are white" is a particular claim because it is based on falsification—it only takes the observation of one black swan to disprove the universal claim "All swans are white." Since black swans have been observed, we can confidently say "Not all swans are white" is true.
Popper's philosophy focuses on how universal claims can never be fully verified (proven true) through evidence, because future observations could always contradict them. However, universal claims can be falsified (proven false) with a single counterexample. Once a universal claim is falsified, it leads to a particular claim like "Not all swans are white," which can be verified by specific evidence.
In essence, universal claims cannot be proven true because they generalize across all cases, while particular claims can be proven once a falsifying counterexample is found. That's why you can "prove" the statement "Not all swans are white"—it’s based on specific evidence from reality, in contrast to the uncertain generality of universal claims.
To sum it up. When I say nothing can be proven and things can only be falsified... it is isomorphic to saying universal claims can't be proven, particular claims can.
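In code, the asymmetry looks like this (a toy list of observations, not a real dataset):

```python
# Universal vs. particular claims: the universal can only be checked
# against what we've seen so far, while its negation is settled by a
# single counterexample.
observed_swans = ["white", "white", "black", "white"]

# "All swans are white": at best true of the observations to date;
# the next swan could always overturn it.
all_white_so_far = all(s == "white" for s in observed_swans)

# "Not all swans are white": one black swan proves it for good.
not_all_white = any(s != "white" for s in observed_swans)

print(all_white_so_far, not_all_white)
```

The `all(...)` over an open-ended domain can never terminate with a final verdict in reality; the `any(...)` short-circuits the moment one counterexample appears, which is Popper's point in miniature.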
On small scales, GR and Newtonian mechanics make almost the same predictions, but make completely different claims about what exists in reality. In my view, if the theories made equally good predictions, but still differed so fundamentally about what exists, then that matters, and implies that at least one of the theories is wrong. This is more a realist, than an instrumentalist position, which perhaps is what you subscribe to, but tbh instrumentalism always seemed indefensible to me.
In that sense, it's also correct to say that physicists have knowledge of relativity and quantum mechanics. I don't think any physicist including Einstein himself thinks that either theory is actually true, but they do have knowledge of both theories in much the same way that one has knowledge of "Maxatar's conjecture" and in much the way that you have knowledge of what the flat Earth proposition is, despite them being false.
It seems fairly radical to believe that instrumentalism is indefensible, or at least it's not clear what's indefensible about it. Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
What exactly is indefensible? The observation that working physicists don't really care about whether a physical theory is "real" versus trying to come up with formal descriptions of observed phenomenon to make future predictions, regardless of whether those formal descriptions are "real"?
If someone chooses to engage in science by coming up with descriptions and models that are effective at communicating observations and experimental results to other people, and whose results go on to allow for engineering advances in technology, are they doing something indefensible?
>Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
No, it was defensible, and that's exactly my point. Even though they didn't believe in the content of the theory (and ignoring the fact that they know a better theory), they do have knowledge of reality through it.
I don't think instrumentalism makes sense for reasons unrelated to this discussion. A scientist can hold instrumentalist views without being a worse scientist for it, it's a philosophical position. Basically, I think it's bad metaphysics. If you refuse to believe that the objects described by a well-established theory really exist, but you don't have any concrete experiment that falsifies it or a better theory, then to me it seems like sheer refusal to accept reality. I think people find instrumentalism appealing because they expect that any theory could be replaced by a new one that could turn out very different, and then they see it as foolish to have believed the old one, so they straight up refuse to believe or care what any theory says about reality. But you always believe something, whether you are aware of it or not, and the question is whether your beliefs are supported by evidence and logic.
Every other time you've been in that school building, the clocks have shown you the right time, so you feel very confident that the clocks on the wall are accurate.
But this time, you happen to be in the room with the non-functioning clock. It says "2:02" but by great good fortune, it actually happens to be 2:02.
So your belief is:
1. True. It actually is 2:02.
2. Justified. The vast majority of the time, if you see a clock on a wall in that building, it is working fine.
But should we say that you know the time is 2:02? Can you get knowledge of the time from a broken clock? Of course not. You just got lucky.
In order to count as knowledge, the belief has to be justified in the right way, and, alas, nobody has been able to specify exactly what that way is. So far, nobody has come up with criteria that can't be broken in some similar way.
// all you can justify is that there is the likeness of a cow there //
If you see something which looks real, you are justified in believing it is real. If you see your friend walking into the room, sure, you've seen your friend's likeness in the room. But you are justified in believing your friend is in the room.
So if you see something that looks like a cow in a field, you are justified in believing there is a cow in a field, even though looks may be deceiving.
Also: how did you come to know all the things you claim to in your comment (and I suspect in a few others in your history)?
There's been some progress science must have missed out on then:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8207024/
That is one organization, many others claim they've also achieved the impossible.
First of all, you have to be able to test your knowledge: you would test that the clock is correct for every minute of the day. If you missed any minutes, then your knowledge is incomplete; you instead have probable knowledge (using the same methods that physics uses to decide whether an experimental result is real, you can assign a probability that the clock is correct).
Also, since when is knowledge absolute? You can never be completely certain about any knowledge; you can only assign (or try to assign) a probability that you know something, and testing your belief greatly increases that probability.
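The "assign a probability that the clock is correct" idea can be sketched as a small Bayesian update. All the numbers here are illustrative assumptions of mine: I treat a working clock as agreeing with a trusted reference 99% of the time, and a stopped 12-hour clock as happening to agree roughly twice a day.

```python
def update(prior_working, checks_passed,
           p_agree_if_working=0.99,   # assumed accuracy of a working clock
           p_agree_if_broken=1 / 720):  # a stopped 12h clock is "right" ~2 min/day
    """Posterior probability that the clock works after N successful spot checks,
    applying Bayes' rule once per check."""
    p = prior_working
    for _ in range(checks_passed):
        numerator = p_agree_if_working * p
        denominator = numerator + p_agree_if_broken * (1 - p)
        p = numerator / denominator
    return p

print(update(0.5, 0))  # no checks: you just have your prior
print(update(0.5, 3))  # a few agreeing checks push the posterior near certainty
```

Under these assumptions, even a handful of successful spot checks drives the posterior very close to 1, which is the sense in which testing your belief "greatly increases the probability" without ever making it absolute.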
(PS. Thank you for the reply.)
Saved the tech team from a wild goose chase.
Out of curiosity, do you realize I am arguing from a much more advantageous position? I only have to find one exception to your popular "scientific organizations don't claim" meme (which I, and also you, can directly query on Google, finding numerous instances from numerous organizations), whereas you would have had to undertake a massive review of all communications (in many forms of phrasing) from these groups, something we both know you have not done.
A portion of the (I doubt intentional or malicious) behavior is described here:
https://en.m.wikipedia.org/wiki/Motte-and-bailey_fallacy
I believe the flaw in scientists (and their fan base) behavior is mainly (but not entirely) a manifestation of a defect in our culture, which is encoded within our minds, which drives our behavior. Is this controversial from an abstract perspective?
It is possible to dig even deeper in our analysis here to make what is going on even more inescapable (though not undeniable), with a simple series of binary questions ("Is it possible that...") that expand the context. I'd be surprised if you don't regularly utilize this form of thinking when it comes to debugging computer systems.
Heck, I'm not even saying this is necessarily bad policy; sometimes deceit is genuinely beneficial, and this seems like a prime scenario for it. If I were in power, I wouldn't be surprised if I too took the easy way out, at least in the short term.
To know something in this sense seems to require several things: firstly, that the relevant proposition is true, which is independent of one's state of mind (not everyone agrees, but that is another issue...) Secondly, it seems to require that one knows what the relevant proposition is, which is a state of mind. Thirdly, having a belief that it is true, which is also a state of mind.
If we left it at that, there's no clear way to find out which propositions are true, at least for those that are not clearly true a priori (and even then, 'clearly' is problematic except in trivial cases, but that is yet another issue...) Having a justification for our belief gives us confidence that what we believe to be true actually is (though it rarely gives us certainty.)
But what, then, is justification? If we take the truth of the proposition alone as its justification, we get stuck in an epistemic loop. I think you are right if you are suggesting that good justifications are often in the form of causal arguments, but by taking that position, we are casting justification as being something like knowledge: having a belief that an argument about causes (or anything else, for that matter) is sound, rather than a belief that a proposition states a fact - but having a justified belief in an argument involves knowing that its premises are correct...
It is beginning to look like tortoises all the way down (as in Lewis Carroll's "What the Tortoise Said to Achilles".)
https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achi...
Sorry, naive questions: what is a terrifying dilemma to a 21st-century epistemologist? What is the "modern" recipe?
FP can be good for that but I often find that people get so carried away with the pure notion of functional code that they forget to make it obvious in its design. Way, way too much “clever” functional code out there.
The data structures are the key for many things, but a lot of software is all about handling side effects, where basically everything you touch is an input or an output with real world, interrelated global state.
That’s where correctly compartmentalising those state relationships, with ample asserts or fail-soft/safe code practices, becomes key. And properly descriptive variable names and naming conventions, with sparse but deep comments wherever it wasn’t possible to write the code to be self-documenting by its obvious nature.
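A minimal sketch of what those practices can look like together (every name here is hypothetical, not from any particular codebase): a pure, fail-soft parsing core guarded by an assert, with the side-effecting shell kept thin around it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SensorReading:
    """Immutable value type: state can't mutate out from under callers."""
    celsius: float

def parse_reading(raw_line: str) -> Optional[SensorReading]:
    """Pure core: no I/O, trivially testable. Fail-soft: bad input -> None."""
    try:
        value = float(raw_line.strip())
    except ValueError:
        return None  # skip the garbage line rather than crash the pipeline
    # Assert an invariant that should hold for any physically possible input.
    assert value >= -273.15, "temperature below absolute zero"
    return SensorReading(celsius=value)

def load_readings(lines):
    """Thin impure shell: iterates the input source, delegates all logic
    to the pure core, and silently drops readings that failed to parse."""
    parsed = (parse_reading(line) for line in lines)
    return [reading for reading in parsed if reading is not None]

print(load_readings(["21.5", "garbage", "19.0"]))
```

The design choice is the usual "functional core, imperative shell" split: the global, interrelated state lives only at the edges, while everything worth testing is a pure function of its arguments.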