
    167 points yarapavan | 12 comments
    1. gqgs ◴[] No.43550402[source]
    A key concern I've consistently had regarding formal verification systems is: how does one confirm the accuracy of the verifier itself?

    This issue appears to present an intrinsically unsolvable problem, implying that a formally verified system could still contain bugs due to potential issues in the verification software.

    While this perspective doesn't necessarily render formal verification impractical, it does introduce certain caveats that, in my experience, are not frequently addressed in discussions about these systems.

    replies(8): >>43550610 #>>43550706 #>>43550740 #>>43552467 #>>43553529 #>>43553826 #>>43554265 #>>43558410 #
    2. whattheheckheck ◴[] No.43550610[source]
    Yeah it reminds me of https://en.m.wikipedia.org/wiki/Tarski%27s_undefinability_th...
    3. alexisread ◴[] No.43550706[source]
    I'd guess the best approach is similar to security - minimal TCB (trusted computing base) that you can verify by hand, and then construct your proof checker on top, and have it verify itself. Proof and type checkers have a lot of overlap, so you can gain coverage via the language types.

    I can't recall exactly, but I think CakeML (https://cakeml.org/jfp14.pdf) had a precursor lisp prover, written in a verified lisp, so it would be amenable to this approach.

    EDIT: think it was this one https://www.schemeworkshop.org/2016/verified-lisp-2016-schem...

    replies(1): >>43551066 #
    4. csbartus ◴[] No.43550740[source]
    Perhaps there is no such thing as absolute truth.

    In category theory / ologs, a formal method for knowledge representation, the result is always mathematically sound, yet ologs are prefixed with "According to the author's world view ..."

    On the other hand, truth, even if it's not absolute, is very expensive.

    Lately AWS has advocated a middle ground, lightweight formal methods, which are much cheaper than full formal methods yet deliver good-enough correctness for their business case.

    In the same way, MIT CSAIL's Daniel Jackson advocates a semi-formal approach to designing likely-correct apps (Concept design).

    It seems there is a push for better correctness in software, without the aim of perfectionism.

    5. guerrilla ◴[] No.43551066[source]
    This is correct. That's why you have core type theories. Very small, easy to understand and implement in a canonical way. Other languages can be compiled to them.
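
    The "very small core" point can be made concrete. Below is a hedged sketch (illustrative names, not any real prover's kernel) of a type checker for the simply typed lambda calculus with one base type, small enough to audit by hand:

```python
# Illustrative sketch: a "core" type checker for the simply typed lambda
# calculus with one base type. Small enough to read by hand; richer
# languages would be compiled down to terms like these.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:            # the single base type, written "o"
    pass

@dataclass(frozen=True)
class Arrow:           # function type  a -> b
    src: object
    dst: object

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:             # \x:ty. body
    name: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:             # fn arg
    fn: object
    arg: object

def typecheck(term, env=None):
    """Return the type of `term` in `env`, or raise TypeError."""
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_ty = typecheck(term.body, {**env, term.name: term.ty})
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fn_ty = typecheck(term.fn, env)
        if not isinstance(fn_ty, Arrow) or fn_ty.src != typecheck(term.arg, env):
            raise TypeError("ill-typed application")
        return fn_ty.dst
    raise TypeError("unknown term")

# \x:o. x  is the identity at the base type, with type  o -> o
ident = Lam("x", Base(), Var("x"))
assert typecheck(ident) == Arrow(Base(), Base())
```

    The entire trusted base is the small `typecheck` function; everything else can be untrusted tooling that merely produces terms for it to check.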
    6. newAccount2025 ◴[] No.43552467[source]
    A deeply considered take: this is theoretically interesting but in practice matters very little.

    It’s like worrying about compiler bugs instead of semantic programming errors. If you are getting to the point where you have machine checked proofs, you are already doing great. It’s way more likely that you have a specification bug or modeling error than an error in the solver.

    If you are worried about it, many people have compelling solutions. There are LCF-style theorem provers like HOL Light with tiny kernels. But for any real-world system, none of this matters: use any automated solvers you like, it will be fine.
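
    The LCF idea can be sketched in a few lines (hedged: a toy propositional kernel, not HOL Light's actual one). The theorem type is sealed so that only the kernel's inference rules can create a value of it, so soundness rests on those few lines alone:

```python
# Toy LCF-style kernel (illustrative, not any real prover's code).
# A Theorem can only be built by the kernel rules below, which pass a
# private token; untrusted proof tools built on top cannot forge one.
_KERNEL = object()

class Theorem:
    """Sealed: constructed only via the kernel rules."""
    def __init__(self, prop, _token=None):
        if _token is not _KERNEL:
            raise ValueError("theorems can only come from kernel rules")
        self.prop = prop

# --- the kernel rules: the entire trusted base ---
def axiom_identity(p):
    # |- p -> p
    return Theorem(("->", p, p), _token=_KERNEL)

def modus_ponens(th_imp, th_p):
    # from |- p -> q and |- p, conclude |- q
    op, p, q = th_imp.prop
    assert op == "->" and th_p.prop == p, "modus ponens does not apply"
    return Theorem(q, _token=_KERNEL)

# untrusted code can compose arbitrarily complex proofs on top:
th_imp = axiom_identity(("->", "A", "A"))   # |- (A->A) -> (A->A)
th_p = axiom_identity("A")                  # |- A -> A
th = modus_ponens(th_imp, th_p)             # |- A -> A
assert th.prop == ("->", "A", "A")
```

    Trying `Theorem("anything")` directly raises; every theorem a client ever holds has passed through the two rules above, which is the whole point of the tiny-kernel design.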

    7. solidsnack9000 ◴[] No.43553529[source]
    > ...it does introduce certain caveats that, in my experience, are not frequently addressed in discussions about these systems.

    What are those?

    As far as I can tell, people don't deploy verified or high assurance systems without testing or without online monitoring and independent fail safes.

    8. ninetyninenine ◴[] No.43553826[source]
    This is not just a problem with formal verification systems. It's a problem intrinsic to math, science and reality.

    We call these things axioms. The verifier must be assumed to be true.

    For all the axioms in mathematics... we don't know if any of those axioms are actually fully true. They seem to be true because we haven't witnessed a case where they aren't true. This is similar to how we verify programs with unit tests. For the verifier it's the same thing: you treat it like an axiom and "verify" it with tests.
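
    In miniature, "verifying the verifier with tests" looks like this (all names made up for illustration). Note that the verifier's own tests lean on yet another trusted component, which is exactly the regress being discussed:

```python
# Toy verifier for a sorting routine's output, plus unit tests that probe
# the verifier itself against cases whose verdict we know independently.
def verifies_sorted(inp, out):
    """Toy verifier: is `out` the sorted version of `inp`?"""
    return sorted(inp) == out  # itself trusts Python's sorted()!

# Unit tests for the *verifier*: known-good and known-bad cases.
assert verifies_sorted([3, 1, 2], [1, 2, 3]) is True   # correct output accepted
assert verifies_sorted([3, 1, 2], [1, 2]) is False     # dropped element rejected
assert verifies_sorted([3, 1, 2], [3, 1, 2]) is False  # unsorted output rejected
```

    The tests raise confidence in the verifier without proving it, just as the comment says: at some point something is simply assumed, axiom-style.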

    replies(1): >>43554718 #
    9. Ar-Curunir ◴[] No.43554265[source]
    There are many things people try to do here: multiple implementations of the verifier, formally verified implementations of the verifier in other verification tools, by-hand verification, etc.
    10. gqgs ◴[] No.43554718[source]
    I believe there is a distinction between those concepts.

    Axioms are true by their own definition. Therefore, discovering an axiom to be false is a concept that is inherently illogical.

    Discovering that a formal verification system produced an incorrect output due to a bug in its implementation is a perfectly well-defined concept and doesn't lead to any logical contradictions; unless you axiomatically define the output of formal verification systems to be necessarily correct.

    I believe this definition doesn't make sense in the general case considering the number of vectors that can introduce errors into software.

    replies(1): >>43556225 #
    11. ninetyninenine ◴[] No.43556225{3}[source]
    Being true by definition is just assigning something to be true by assumption.

    Who makes a definition? Are definitions intrinsic to reality? No. Definitions are made-up concepts from language, which is itself another arbitrary made-up thing. When something is defined to be an axiom, the human is making something up and arbitrarily assigning truth to that concept.

    > I believe this definition doesn't make sense in the general case considering the number of vectors that can introduce errors into software.

    You’re making up a definition here. Which is the same as an axiom.

    Your question is what verifies the verifier? But that question is endless because what verifies the verifier of the verifier? It’s like what proves the axiom? And what proves the proof of the axiom?

    The verifier defines the universe of what is correct and what is incorrect. It is the axiom. It is an assumption at its core. You can pretty it up by calling it a more advanced thing like “definition” but in the end it is the exact same thing as assuming and making up a fact that something is true without any proof.

    12. trenchgun ◴[] No.43558410[source]
    Verifiers can be based on a small proven kernel. That is not really the issue.

    The issue is writing the formal specification. Formal verification is in a nutshell just proving the equivalence of two different formal constructs: one that is called the spec and one that is called the system (or program). Writing specs is hard.
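
    A miniature version of that spec/system split (illustrative names): the spec is a brute-force description that is obviously right, the system is the optimized code you actually ship, and verification would mean proving them equal on all inputs rather than spot-checking a few, as below:

```python
# Spec: maximum sum over all contiguous subarrays, by obvious brute force.
def spec_max_subarray(xs):
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

# System: Kadane's algorithm, O(n). Faster, but far less obviously correct.
def system_max_subarray(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)     # extend the run or start a new one
        best = max(best, cur)
    return best

# Spot-check the equivalence claim on a few inputs; a formal proof would
# cover every nonempty list at once.
for xs in ([1, -2, 3, 4], [-1, -2], [5], [2, -1, 2, -5, 4]):
    assert spec_max_subarray(xs) == system_max_subarray(xs)
```

    Even in this toy, the hard part is exactly what the comment says: deciding that `spec_max_subarray` is the right statement of what you wanted in the first place.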