But none of that is actually true. Especially the part about it having some sort of meaningful impact if we can just nail it down, let alone whether that impact would be beneficial.
There are many definitions of knowledge. On one definition, you only know something if you are 100% sure of it and also abstractly "correct". I call it "abstract" because the whole problem in the first place is that none of us has access to an oracle that will tell us whether we are correct about a fact like "is there a cow in the field?", so there is no way to make it concrete. Under this definition we end up in a very Cartesian place where just about all you "know" is that you exist. There are some interesting things to be said about this definition, and it's an important one philosophically and historically, but... it also taps out pretty quickly. You can only build on "I exist" so far before running out of consequences; you need more to feed your logic.
From another perspective, if we take a probabilistic view of "knowledge", it becomes possible to say: "I see a cow in the field, so I 'know' there's a cow there. By which I mean, I have good inductive reasons to believe that what I see is in fact a cow and not a papier-mâché construct of one, because inductively the probability that someone has set up a papier-mâché cow in the field is quite low." Such knowledge can be wrong. This isn't just a theoretical philosophy question either: I've seen things set up in fields as a joke, scarecrows good enough to fool me at first glance, lawn ornaments meant to look like people that fooled me at a distance, and so on. It's a real question. But you can still operate under a definition of knowledge on which I had "knowledge" that a person was there, even though the oracle of truth would have said I was wrong. We can in fact build on a concept of "knowledge" that "limits" to the truth but doesn't necessarily ever reach it. It's more complicated, but also a lot more useful.
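To make that "limits to the truth but never reaches it" idea concrete, here's a toy Bayesian sketch (my own illustration, not anything from the article): each noisy glance at the field nudges your confidence toward the truth, but no finite number of glances gets you to certainty. The 80% accuracy figure and the particular sequence of glances are made up for the example.

```python
def update(prior, saw_cow, p_correct=0.8):
    """One Bayesian update of P(cow in field) from a noisy glance.

    Each glance reports "cow" or "no cow" and is assumed to be
    right with probability p_correct (an invented number).
    """
    like_cow = p_correct if saw_cow else 1 - p_correct
    like_not = 1 - p_correct if saw_cow else p_correct
    post = like_cow * prior
    return post / (post + like_not * (1 - prior))

# Ten glances at the field: eight look like a cow, two don't
# (bad light, or maybe it really is papier-mache).
glances = [True] * 8 + [False] * 2

belief = 0.5  # start agnostic
for g in glances:
    belief = update(belief, g)

print(round(belief, 5))  # 0.99976: high confidence, never certainty
```

The odds calculation is deterministic here: eight confirming glances and two contrary ones at 80% accuracy leave final odds of 4^8 / 4^2 = 4096, i.e. a belief of 4096/4097 ≈ 0.99976, close to 1 but not equal to it.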
And I'm hardly exhausting all the possible interesting and useful definitions of knowledge in those two examples. And the latter is a class of definitions, not one I nailed down entirely in a single paragraph.
Again, I wouldn't accuse the best-trained philosophers of this in general, but the mass of philosophers tends to spend a lot of time spinning on "I lack access to an oracle of absolute truth." Yup. It's something you need to deal with, like "I think, therefore I am, but what else can I absolutely, 100% rigidly conclude?", but it's not productive to spin on it over and over, in manifestation after manifestation. You don't have one. Incorporate that fact and move on. You can't define one into existence. You can't wish one into existence. You can't splat ink on a page until you've twisted logic into a pretzel and declared that one is somehow necessary. If God does exist (which I personally go with "Yes" on, but either way), He clearly is not just some database to be queried whenever we wonder "Hey, is that a cow out there?" If you can't move on from that, no matter how much verbiage you throw at the problem, you're going to end up stuck in a very small playground. Maybe that's all they want or are willing to do, but it's still going to be a small playground.
For example, it feels to many people like we have free will, but the meaning is hard to pin down, and there are all sorts of arguments for and against that experience of being able to freely choose, along with what it implies for things like punishment and responsibility. It's not simply an argument over words; it's an argument over something important to the human experience.
Whether you agree or disagree is a separate matter and something you can discuss or ponder for 5 minutes. The article is about taking a somewhat interesting concept from philosophy and applying it to a routine software development scenario.
Also: how did you come to know all the things you claim to in your comment (and I suspect in a few others in your history)?
There's been some progress science must have missed out on then:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8207024/
That is one organization; many others claim they've also achieved the impossible.
Out of curiosity, do you realize I am arguing from a much more advantageous position? I only have to find one exception to your popular "scientific organizations don't claim" meme (which both of us can directly query on Google, finding numerous instances from numerous organizations), whereas you would have had to undertake a massive review of all communications (and many forms of phrasing) from these groups, something we both know you have not done.
A portion of the behavior (which I doubt is intentional or malicious) is described here:
https://en.m.wikipedia.org/wiki/Motte-and-bailey_fallacy
I believe the flaw in scientists' (and their fan base's) behavior is mainly (but not entirely) a manifestation of a defect in our culture, which is encoded within our minds and drives our behavior. Is this controversial from an abstract perspective?
It is possible to dig even deeper in our analysis here, making it even more inescapable (though not undeniable) what is going on, with a simple series of binary questions ("Is it possible that...") that expand the context. I'd be surprised if you don't regularly use this form of thinking when debugging computer systems.
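For what it's worth, here's a hypothetical sketch of that debugging analogy: each yes/no question halves the space of possible culprits, the same way a bisect-style search narrows down the commit that introduced a bug. The commit numbers and the `is_bad` predicate are invented for illustration.

```python
def first_bad(commits, is_bad):
    """Binary search for the first commit where is_bad() returns True.

    Assumes commits are ordered and that once a commit is bad, every
    later commit is also bad (the standard bisect assumption).
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):  # one binary question
            hi = mid              # bug introduced at mid or earlier
        else:
            lo = mid + 1          # bug introduced after mid
    return commits[lo]

# Toy example: commits 0..99, with the bug introduced at commit 42.
commits = list(range(100))
print(first_bad(commits, lambda c: c >= 42))  # 42, found in ~7 questions
```

Seven questions pin down one commit out of a hundred; the same economy is what makes a series of "Is it possible that..." questions effective at narrowing a broad claim.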
Heck, I'm not even saying this is necessarily bad policy; sometimes deceit is literally beneficial, and this seems like a prime scenario for it. If I were in power, I wouldn't be surprised if I too would take the easy way out, at least in the short term.