https://en.wikipedia.org/wiki/Capability-based_security
that you had in the AS/400 or the iAPX 432, where a "capability" is a reference to a system object with associated privileges. It is possible to get this into a POSIX-like system:
https://en.wikipedia.org/wiki/Capsicum_(Unix)
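Capsicum in a nutshell (a minimal sketch of mine, not from the linked page): file descriptors become the capabilities, and once you enter capability mode you can only use what you already hold, narrowed as far as you like.

    /* Minimal Capsicum sketch (FreeBSD). After cap_enter() the process
     * cannot open new files or sockets; it can only use descriptors it
     * already holds, and those can be narrowed with cap_rights_limit(). */
    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/motd", O_RDONLY);   /* acquire before sandboxing */
        if (fd < 0)
            err(1, "open");

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ);     /* this fd may only be read */
        if (cap_rights_limit(fd, &rights) < 0)
            err(1, "cap_rights_limit");

        if (cap_enter() < 0)                    /* enter capability mode */
            err(1, "cap_enter");

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);  /* still allowed */
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        /* open("/etc/passwd", O_RDONLY) would now fail with ECAPMODE. */
        return 0;
    }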
It reminds me of using a VAX-11/730 with the VMS operating system in high school where there was a long list of privileges a process could have
https://hunter.goatley.com/vax-professional-articles/vax-pro...
and it was a common game to investigate paths such as "if you have privilege A, B, and C you can get SETPRV and take over the machine"
The simplicity of pledge is good enough for 99% of use cases, I'd wager, AND it's easy to add to existing code.
If a piece of important or foundational software wants to lock itself down today, look at the myriad of convoluted "solutions" mentioned in a sibling comment. If you wanted to discourage progress in this area, that's how you'd design something. I'm not assuming malice, obviously, but it's certainly a product of the endless nitpicking and "not good enough, doesn't cover <niche use case>" type of thinking.
EDIT:
> and the ones that do will likely be lazy
I'd argue the opposite: any developer taking the time to add some pledge calls to their code is probably mindful of security and wants to improve it. If you wanted to be lazy, you'd just... not implement pledge at all, since it'd get in your way and be too restrictive.
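For the curious, "adding some pledge calls" really is about this much work on OpenBSD (the promise strings here are examples of mine, not anything specific):

    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Broad promises while the program still needs to read files,
         * resolve names and open sockets during initialization. */
        if (pledge("stdio rpath inet dns", NULL) == -1)
            err(1, "pledge");

        /* ... read config, set up sockets, etc. ... */

        /* Tighten once setup is done; a later open()/connect() attempt
         * kills the process with SIGABRT. */
        if (pledge("stdio", NULL) == -1)
            err(1, "pledge");

        puts("running with stdio only");
        return 0;
    }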
The one thing that makes capabilities usable is that they don't need to follow that rule.
If you don't have processes that let your programs get capabilities from any source other than their creation, you are better off just adding your program names into your ACLs.
Meanwhile, complex external systems like SELinux end up being unused because they are complex and external (and thus can just be ignored).
If I grant something root, I know what that means and I'll be very careful. But if I grant something permission X thinking I'm safe, and that permission can then be used to gain permission Y, or even root, then I can be accidentally exposed.
There is just a much larger surface area to guard: you have to ensure that each granular permission can't be exploited that way.
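Concretely (my example, not the parent's): on Linux, granting a binary "only" CAP_SETUID, e.g. with `setcap cap_setuid+ep`, is already root in disguise, because the program can just do this:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* CAP_SETUID alone lets the process switch to any UID, including 0. */
        if (setuid(0) != 0) {
            perror("setuid");
            return 1;
        }
        execl("/bin/sh", "sh", (char *)NULL);   /* a root shell */
        perror("execl");
        return 1;
    }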
https://github.com/jart/pledge
https://justine.lol/pledge/
pledge() allows developers to further restrict a program dynamically at runtime. More like defensive driving.
Both are useful techniques.
The only LSM I have much experience with is SELinux, which exposes capabilities directly as SELinux permissions. I imagine most other general-purpose LSMs do something similar.
I could imagine an LSM that implements a policy of allowing capabilities based on UID/GID, although I'm not aware of any current LSMs that do that.
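If someone wanted to build that, it would be roughly a one-hook LSM. A heavily hedged sketch (hook names and signatures as in ~5.x kernels, policy and all names made up by me; LSMs have to be built into the kernel, so this is not loadable as a module):

    #include <linux/init.h>
    #include <linux/lsm_hooks.h>
    #include <linux/cred.h>
    #include <linux/capability.h>
    #include <linux/uidgid.h>
    #include <linux/errno.h>

    /* Toy policy: only UID 71 may use CAP_NET_ADMIN; everything else is
     * left to commoncap and any other stacked LSMs. */
    static int uidcap_capable(const struct cred *cred, struct user_namespace *ns,
                              int cap, unsigned int opts)
    {
        if (cap == CAP_NET_ADMIN && !uid_eq(cred->uid, KUIDT_INIT(71)))
            return -EPERM;
        return 0;
    }

    static struct security_hook_list uidcap_hooks[] __lsm_ro_after_init = {
        LSM_HOOK_INIT(capable, uidcap_capable),
    };

    static int __init uidcap_init(void)
    {
        security_add_hooks(uidcap_hooks, ARRAY_SIZE(uidcap_hooks), "uidcap");
        return 0;
    }

    DEFINE_LSM(uidcap) = {
        .name = "uidcap",
        .init = uidcap_init,
    };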
With coarse-grained permissions you end up needing proxies. That's nice because you can do whatever you want in terms of business logic, but also very much not nice because you have to write the proxies and the client code to talk to them, and then you have to keep maintaining and extending them.
Either way you have to do audits and static analysis looking for escalation vectors, and that's strictly harder (but not insurmountable) with fine-grained permissions.
So I think fine-grained permissions win.
I've implemented (and am trying to get approval to publish a paper on) a cross between RBAC-style and SMACK-style labeled security, where the labels are literal human-readable strings rather than MLS-style bitmaps plus levels, intended for application-level authorization; it's very fast and should work in-kernel too if anyone wanted to make it work there. The system lets you make authorization as fine- or coarse-grained as you want by labeling the application (or kernel) objects with many distinct labels (fine-grained) or just a few (coarse-grained).
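To make the fine/coarse knob concrete, here is my guess at the shape of such a check (the system above is unpublished, so this is purely illustrative and all names are mine): a subject carries a set of label strings, each object carries one, and granularity is just how many distinct strings you mint.

    #include <stdbool.h>
    #include <string.h>

    struct subject {
        const char **labels;    /* e.g. { "payroll-read", "audit-log" } */
        size_t nlabels;
    };

    /* Allow access iff the object's label is among the subject's labels. */
    static bool allowed(const struct subject *s, const char *object_label)
    {
        for (size_t i = 0; i < s->nlabels; i++)
            if (strcmp(s->labels[i], object_label) == 0)
                return true;
        return false;
    }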
For example, you can pass a program a capability to bind any privileged port, but not a specific one. For that scenario, just passing an fd already bound to the port is actually much simpler and safer. Other capabilities are simply too coarse.
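The "just pass the fd" pattern looks something like this (a sketch of mine; the port and UID are arbitrary): a tiny privileged launcher binds the port, drops root for good, and only then runs the untrusted bulk of the program.

    #include <arpa/inet.h>
    #include <err.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            err(1, "socket");

        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(80);               /* privileged port */
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(s, (struct sockaddr *)&sin, sizeof sin) < 0)
            err(1, "bind");
        if (listen(s, 16) < 0)
            err(1, "listen");

        /* Drop root permanently; the already-bound fd keeps working. */
        if (setgid(65534) < 0 || setuid(65534) < 0)
            err(1, "drop privileges");

        /* From here on we can't bind port 80 again, but we can still
         * accept() on the descriptor we already hold. */
        int c = accept(s, NULL, NULL);
        if (c >= 0)
            close(c);
        return 0;
    }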
The fact that capabilities are implicitly inherited also doesn't sound like a good approach to security. It's likely like this for backward compatibility, but I really think capabilities ought to be passed explicitly, and we should be able to transfer them between processes. In fact, using an fd as a handle for a capability would probably be a much clearer and more explicit approach.
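That already half exists: SCM_RIGHTS over a Unix-domain socket transfers a descriptor, and only that descriptor, to another process, which is exactly the "fd as capability handle" model. A sketch (helper names are mine):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd over the connected AF_UNIX socket `chan`. */
    static int send_fd(int chan, int fd)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof u.buf,
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));
        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
    }

    /* Receive one fd from `chan`; returns the new descriptor or -1. */
    static int recv_fd(int chan)
    {
        char byte;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof u.buf,
        };
        if (recvmsg(chan, &msg, 0) != 1)
            return -1;
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        if (cm == NULL || cm->cmsg_type != SCM_RIGHTS)
            return -1;
        int fd;
        memcpy(&fd, CMSG_DATA(cm), sizeof(int));
        return fd;
    }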
It doesn't. You can download malware and that app can cryptolock your entire system. Sure, it couldn't if the malware itself called pledge to block opening files, but what malware is going to do that?
Wdym? It's very notably used in Android.
I also doubt you can take pictures of me when it doesn't have cameras attached. If it did and you were to take pictures, you'd see some blinking LEDs and cables all day.
And I highly doubt you could take remote control even if I had OpenSSH open to the public.
Perhaps your industry just doesn't care about the same things the OpenBSD community does.
Edit: I missed the ssh key stealing. My keys are always encrypted.
I have never seen SELinux used on a regular server. Heck, Amazon Linux AMIs on AWS even disable it by default.
Yeah, yeah, personal experience and all that.
But Linux "capabilities" do not address this. If you have the permission, you have the permission. And can do the action. Even if the reason why you are trying to do the action (needed for A's request) doesn't match the reason that you are able to do it (needed to do things for B).
The setuid binaries already existed, and this was a means of making them (much) more secure without API changes.