
597 points | pizlonator
kragen No.45135095
Hmm, Fil-C seems potentially really important; there's a lot of software that exists only as C code and that it's important to preserve access to, even if the tradeoffs made by conventional C compilers (accepting large security risks in exchange for a small improvement in single-core performance) have largely become obsolete.

The list of supported software is astounding: CPython, SQLite, OpenSSH, ICU, CMake, Perl5, and Bash, for example. There are a lot of things in that list that nobody is likely to ever rewrite in Rust.

I wonder if it's feasible to use Fil-C to do multitasking between mutually untrusted processes on a computer without an MMU? They're making all the right noises about capability security and nonblocking synchronization and whatnot.

Does anyone have experience using it in practice? I see that https://news.ycombinator.com/item?id=45134852 reports a 4× slowdown or better.

The name is hilarious. Feelthay! Feelthay!

pizlonator No.45135151
> I wonder if it's feasible to use Fil-C to do multitasking between mutually untrusted processes on a computer without an MMU?

You could. That said, FUGC’s guts rely on OS features that in turn rely on an MMU.

But you could make a version of FUGC that has no such dependency.

As for perf: 4x is the worst case, and that number is out there because I reported it. And I report worst-case perf because that's how obsessive I am about realistically measuring, and then fanatically resolving, perf issues.

Fact is, I can live on the Fil-C versions of a lot of my favorite software and not tell the difference.

modeless No.45135182
A Fil-C kernel that ran the whole system in the same address space, safely, would sure be something. Getting rid of the overhead of hardware isolation could compensate for some of the overhead of the software safety checks. That was the dream of Microsoft's Singularity project back in the day.

I guess there would be no way to verify that precompiled user programs actually enforce the security boundaries. The only way to guarantee safety in such a system would be to compile everything from source yourself.

pizlonator No.45135187
You could enforce that binaries follow the Fil-C rules by using proof-carrying code.
kragen No.45135298
I'm skeptical that PCC can work in practice with existing social practices around software development, because neither users nor implementors can tell the difference between a correct verifier and one that has flaws that aren't being actively exploited yet, but they can sure tell the difference between a fast system and a slow one.

The incentives are strong to drive the runtime cost as close to zero as possible, which involves making your proof-checking system so expressive that it's hard to get right. The closer you get to zero, the more performance-sensitive your userbase gets. No part of your userbase is actively testing the parts of your verifier that reject programs; they try to avoid generating programs that get rejected, and as the verifier gets larger and larger, it requires more and more effort to generate code that exploits one of its bugs, although there are more and more of them. As the effort required to slip malicious code past the verifier grows, the secret in-house tools developed by attackers give them a larger and larger advantage over the defenders.

Continued "maintenance" applied to the verifier drives its size and complexity up over time, driving the probability of a flaw inexorably toward 100%, while, if it is not "maintained" through continuous changes, it will break as its dependencies change, it will be considered outdated, and nobody will understand it well enough to fix a bug if one does surface.

We've seen this happen with Java, and although this outcome is clearly avoidable in a logical sense, the social pressures toward it are strong.

Dynamic checking seems much more memetically fit: developers will regularly write code that should fail the dynamic checks, and, if it passes instead, they will send you an annoyed bug report about how they had to spend their weekend debugging.

zozbot234 No.45136428
Proof-carrying code is efficient at runtime; the verifier only ever has to run once to verify the binary, which in principle can simply be done at install time. Even the verification itself can be fast, because it's only checking the soundness of existing proofs, not generating new ones from scratch. Where there is a genuine need to "make the proof system more expressive", this can be done by adding trusted axioms, which will generally be simple enough to check manually; the verifier itself need not become more complex.

We've seen a lot of complexity happen with Java, but that was generally in the standard-library facilities bundled with the language and runtime, not in the basic JVM type-checking pass. Proof-carrying code is closer to the latter.
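
A concrete instance of this split already ships with Java: since Java 6 class files, javac emits StackMapTable attributes (in effect, the proof), and the JVM's split verifier just replays them in one linear pass instead of re-running the old fixpoint type inference. Here's a minimal sketch of the idea as a toy stack machine in Java; the instruction set and all names are invented for illustration, and this is nothing like the real verifier's scope:

    import java.util.ArrayList;
    import java.util.List;

    // Toy "split verifier": the compiler ships, for each instruction, the
    // operand-stack types it claims hold *before* that instruction (the
    // certificate). Checking is one linear pass: execute each instruction
    // abstractly and compare against the next claim. No search, no fixpoint;
    // a wrong or malicious certificate is simply rejected.
    class ToyVerifier {
        enum Ty { INT, REF }

        static boolean check(String[] code, List<List<Ty>> cert) {
            for (int i = 0; i < code.length; i++) {
                List<Ty> after = step(code[i], cert.get(i));
                if (after == null) return false;           // ill-typed instruction
                List<Ty> claimed = i + 1 < code.length ? cert.get(i + 1)
                                                       : List.of(); // must end empty
                if (!after.equals(claimed)) return false;  // certificate lied
            }
            return true;
        }

        // Abstract one-instruction transition on the type stack.
        static List<Ty> step(String op, List<Ty> in) {
            List<Ty> s = new ArrayList<>(in);
            switch (op) {
                case "push_int": s.add(Ty.INT); break;
                case "push_ref": s.add(Ty.REF); break;
                case "add_int":  // pops two ints, pushes one int
                    if (s.size() < 2 || s.remove(s.size() - 1) != Ty.INT
                                     || s.remove(s.size() - 1) != Ty.INT) return null;
                    s.add(Ty.INT);
                    break;
                case "pop":
                    if (s.isEmpty()) return null;
                    s.remove(s.size() - 1);
                    break;
                default: return null;
            }
            return s;
        }

        public static void main(String[] args) {
            String[] code = { "push_int", "push_int", "add_int", "pop" };
            List<List<Ty>> cert = List.of(
                List.of(), List.of(Ty.INT), List.of(Ty.INT, Ty.INT), List.of(Ty.INT));
            System.out.println(check(code, cert)); // true: proof replays cleanly
        }
    }

The point of the shape: check() is linear and dumb, so it's cheap to run at install time and small enough to audit; all the cleverness lives in whatever produced the certificate, which doesn't have to be trusted.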

kragen No.45136677
I agree that the verifier performance is not important to runtime performance (though there have been some changes to the JVM in particular to speed up class loading), but the expressivity of the safety proofs it can check is, because safety properties that cannot be proven at load time must be checked at runtime. Initially, and I think still, the JVM's downcast on loading an object reference from a generic container like ArrayList<Price> is an example: javac has proven statically that it is safe, but the JVM bytecode cannot express that. Pointer nullability in JVM languages like Kotlin is another example: Kotlin knows most object pointers can't be null, but the JVM still has to check at runtime.
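
To make the ArrayList<Price> case concrete (Price is just a stand-in class here): the code below is proven type-safe by javac, but after erasure the bytecode's List.get() returns Object, so the compiler inserts a checkcast that the JVM executes at runtime anyway.

    import java.util.ArrayList;
    import java.util.List;

    class ErasureDemo {
        static class Price { double amount = 9.99; }

        public static void main(String[] args) {
            List<Price> prices = new ArrayList<>();
            prices.add(new Price());
            // Statically proven safe, but erasure means get() returns Object
            // in the bytecode, so this compiles to invokeinterface List.get
            // followed by checkcast Price: a runtime check for a property
            // already established at compile time.
            Price p = prices.get(0);
            System.out.println(p.amount);
        }
    }

Running javap -c ErasureDemo shows the checkcast; that instruction is exactly the gap between what the source-level prover knows and what the bytecode-level verifier can express.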

There have been numerous holes discovered in various implementations of the basic JVM type checking, often after existing for many years.

gf000 No.45138271
I mean, since Gödel we've pretty much known that there could never be a system "without holes".
kragen No.45138404
No, that is not correct. Gödel showed that some theorems are unprovable in a consistent formal axiomatic system; he did not show that no consistent formal axiomatic systems exist.
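
For reference, the textbook statement (in the form that anticipates the caveat added downthread):

    % Gödel's first incompleteness theorem
    \textbf{Gödel I.} If $T$ is a consistent, effectively axiomatizable
    theory that interprets Robinson arithmetic $Q$, then there is a
    sentence $G_T$ with $T \nvdash G_T$ and $T \nvdash \lnot G_T$.

That is a statement about what such a system cannot prove, not a claim that no consistent system exists; the second theorem adds only that such a T cannot prove its own consistency. And the JVM verifier holes mentioned upthread were ordinary implementation bugs, not instances of incompleteness.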
naasking No.45139198
> Gödel showed that some theorems are unprovable in a consistent formal axiomatic system...

...of sufficient power, e.g. one that can model arithmetic with both addition and multiplication. I think the caveats are important because systems that can't fully model arithmetic are often still quite useful! (Presburger arithmetic, which has addition but not multiplication, is consistent, complete, and decidable, for instance.)

gf000 No.45139977
Indeed! But I'm afraid general-purpose programming languages almost always need that kind of power (though being Turing-complete is not necessary).
naasking No.45185973
They need it operationally for calculations and such, but those calculations don't necessarily need to be statically validated beyond simple things like type and unit checking.