"The encapsulation referred to here is that you can expose a safe API that is impossible to misuse in a way that leads to undefined behavior. That's the succinct way of putting it anyway."
Well, no, actually.
At least, not in an (IMHO) useful way.
I can break your safe API by getting the constraints wrong on unsafe code inside that API.
Also, unsafe usage elsewhere is not local: I can break your impossible-to-misuse API through an unsafe API that someone else used incorrectly elsewhere, completely outside my control, and then wrapped in a safe API.
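To make that concrete, here's a minimal toy sketch of my own (not from any real crate) of a "safe" API whose internal unsafe block gets its constraint wrong:

```rust
// A "safe" function wrapping an unsafe block with a wrong assumption.
// The unsafe keyword is encapsulated, but the unsoundness is not:
// perfectly safe callers can still trigger undefined behavior.
pub fn as_str(bytes: &[u8]) -> &str {
    // BUG: from_utf8_unchecked's safety contract requires the input to
    // be valid UTF-8, and this function never checks that.
    unsafe { std::str::from_utf8_unchecked(bytes) }
}

fn main() {
    // Works only because this particular input happens to be UTF-8.
    println!("{}", as_str(b"hello"));
    // as_str(&[0xFF]) is reachable from 100% safe code and is UB.
}
```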
Some of these are, of course, bugs in Rust/the compiler, etc. I'm just noting I've yet to hear anyone take the view that the ability to do this is always a bug in the language/compiler, and will be destroyed on sight.
Beyond that:
To the degree this is useful encapsulation for tracking things down, it is only useful when the amount of unsafe code is small enough that you can reason about it.
This is simply no longer true in any reasonably sized Rust app.
As a result, as you say, it is then only useful for saying who is at fault, in the sense of whether I'm holding it wrong. To me, that is basically worthless at scale.
"I'm surprised that someone with as much experience as you is missing this nuance."
I don't miss it - I just don't think it's as useful as claimed.
This level of "encapsulation", which provides no real guarantee except "the set of bugs is caused somewhere by the set of unsafe blocks" is fairly unhelpful at large scale.
I have audited hundreds of thousands of lines of Rust code to find bugs caused by unsafe usage. The thing that made it at all tractable was not this form of encapsulation - it was in fact 100% worthless in doing that at scale, because it was still tons and tons and tons of code to try to reason about, across lots of libraries and dependencies. As you say, it only helps assign blame once the bug is found, and blame is not that useful at scale. It does not make the code safer. It does not make the bug easier to track down. It only declares, after I've spent all that time, that it is not my fault. But also, nobody has to do anything about it anyway.
For small programs, this buys you something, as I said - as long as the set of unsafe blocks is small enough to be tractable to audit, cool. You can find bugs more easily. In that sense, the tons of hobby programs, small libraries, etc., are a lot less likely to have bugs when written in Rust (modulo their dependencies on unsafe code).
But like, your position seems to be that it is fairly useful that I can go to a library and tell them "your crap is broken", and be right about it.
To me, this does not buy a lot in the kinds of large complex systems rust hopes to replace in C/C++.
(it also might be false)
In actually tracking down the bug, which is what I care about, the thing that was useful was that I could run Miri and lots of other tools on it and get useful results that pointed me towards the most likely causes of issues.
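For flavor, here's the sort of latent bug where Miri earns its keep - a toy example of mine, not from the code I audited:

```rust
// Native builds of the bad path often appear to "work" by accident;
// running the same code under `cargo miri test` reports the
// out-of-bounds read instead of silently returning garbage.
fn sum_first_two(xs: &[i32]) -> i32 {
    // BUG: silently assumes xs.len() >= 2.
    unsafe { *xs.get_unchecked(0) + *xs.get_unchecked(1) }
}

fn main() {
    // Sound path: the hidden assumption holds here.
    println!("{}", sum_first_two(&[1, 2, 3]));
    // sum_first_two(&[1]) is the case Miri would flag.
}
```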
So don't get me wrong - this is overall better than C, but writing lots of Rust (I haven't written C/C++ at all in a while, actually), I still tire of the constant claims about the degree of Rust's safety. You are the rare Rust person who understands the nuance and is willing to admit there is any flaw or non-perfection whatsoever.
As you say, there are lots of things that ought to be true in Rust that are not.
You have a good understanding of this nuance, and where it fails.
But it is you, i believe, who is missing the forest for the trees, because most do not have this.
I'll be concrete and I guess controversial in a way you are 100% free to disagree with, but might as well put a stake in the ground - it's Hacker News, might as well have fun making a comment someone can beat me over the head with later: if nothing changes, and the Rust ecosystem grows by a factor of 100x while changing nothing about how it behaves with respect to unsafe usage, and no tooling gets significantly better, Rust will not end up better than C in practice. I don't mean it will not have fewer bugs/vulnerabilities - I think it will, by far!
But whether you have 100 billion of them or 1 billion of them, and thus made a 100x improvement, I don't think matters too much when it's still a billion :)
Meanwhile, if the Rust ecosystem got worse about unsafe, but made tools like Miri 50x faster (and made more tools like it that help verification in practice), it would still end up better than C.
To me - it is the tooling, and not this sort of encapsulation, that will make a practical difference or not at scale.
The idea that you will convince people not to write broken unsafe code in ways that break safe APIs, or that the ability to assign blame matters, is very strange to me, and is no better than C. As systems grow, the likelihood of totally safe transmutes growing in them is basically 100% :)
FWIW - I also agree you don't have to be perfect, nor do I fault Rust for not being perfect. Instead, I simply disagree that, at scale, this sort of ability to place blame is useful.
To me, it's the ability to find the bugs quickly and as automated as possible that is useful.
I need to find the totally safe transmutes causing issues in my system, not hand it to someone else after determining it couldn't be my fault.