Plus there are network shares. Multiple people in my home with Linux PCs, each with their own slice of the NFS pie based on user perms. Sure, it’s not secure, but these are people I live with, not state-sponsored hackers.
All that said, I’d also love a simpler single-user perm setup. For VMs, containers, etc it would be amazing
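The whole setup is a couple of lines, for what it's worth; a sketch with hypothetical paths and subnet:

  # /etc/exports -- export the share to the LAN; per-user access is plain Unix perms
  /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

  # one directory per housemate:
  $ sudo chown alice: /srv/share/alice && sudo chmod 0700 /srv/share/alice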
Alternatively, containers really are a no-thinking-required solution. Everything maximally isolated by default.
Ephemeral setups (k8s among them) remove that need, but introduce a big load of other stuff.
Having a VPS that is managed by sysadmins (users with sudo rights, authed with keys) and on which partly overlapping "deploy" users can write to small parts and maybe do a passwordless "sudo systemctl restart fooapp" but only that, is a nice and simple setup.
I manage at least seven of these. And nothing in me even considers porting this to my k8s infra.
Edit: The reason for this setup is simple and twofold. First, deploys are safe and clear: deployers can be confident that whatever crap they pull, the server will churn on, data will be safe, and recovery is possible. Second, all devs/ops having their own keys and accounts gives a trail and logs, and makes it very easy to remove that contractor after she has done her work.
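The passwordless-restart bit is a single sudoers line, by the way; a sketch assuming a hypothetical fooapp-deploy group:

  # /etc/sudoers.d/fooapp-deploy
  %fooapp-deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart fooapp.service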
HVM hypervisors were doing stronger, safer, better isolation than Docker 10 years ago. Containers are certainly no-thinking-required, though, and that is exactly what leads to the abysmal state of containerized security and performance we have currently.
I guess it depends on the servers. I'm in academic/research computing and single-user systems are the anomaly. Part of it is having access to beefier systems for smaller slices of time, but most of it is enabling data sharing and collaboration between users.
If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.
This is overwhelmingly the view for business and personal users. Setups like the one you described are very rare nowadays.
No corporate IT department is timesharing users on a mainframe. It's just bare-metal laptops or Windows VMs with networked mount points.
When you have one OS that is used on devices from phones, to laptops, to servers, to HPC clusters, you're going to have this friction. Could Linux operate in a single-user mode? Of course. But does that really make sense for the other use-cases?
I've used nixos and I don't really see how nixos is special apart from the declarative config. The same can/should be done with any distro and any config manager.
And unless you were running Podman in rootless mode, the same setup applies to containers too.
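(If you're curious what rootless mode actually changes, the user-namespace mapping is easy to inspect; the output below assumes uid 1000 and a stock /etc/subuid range:)

  $ podman unshare cat /proc/self/uid_map
           0       1000          1
           1     100000      65536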
Where I work, we have a lot of physical machines. The IT staff own the root account, and development teams get some sort of normal user accounts, with highly restricted sudo.
We could significantly simplify things by getting rid of the account system. The same could be said for a lot of systems like database servers. Typically it's just one database, one user (your application server) with full access. The account system is mostly an annoyance.
For big company use cases where you want to reduce attack surface, why not spawn a second server with different credentials? Anyway big companies typically have many database servers in a cluster and the same credentials are shared by many server processes... The tendency there is literally in the opposite direction.
You can also disable it in the sudoers file.
This is a terrifying way to access databases.
A super user; a modify user (just below super, but can't delegate rights) for schema changes; a read/write app user... Probably a pile of read-only users who have audit trails... You might want some admin or analytics users (who have their own schema additions).
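In PostgreSQL terms, a sketch of that split, with hypothetical role names:

  CREATE ROLE migrator LOGIN;   -- schema changes, can't delegate rights
  CREATE ROLE app_rw   LOGIN;   -- the application server
  CREATE ROLE analyst  LOGIN;   -- read-only, audited
  GRANT ALL ON SCHEMA public TO migrator;
  GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO analyst;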
The words security and audit trails all spring to mind.
Yes. First, we use user-level container systems like Apptainer/Singularity, and these containers run as the user themselves.
This is also the same for non-academic HPC systems.
From schedulers to accounting, everything is done at user level, and we have many, many users.
It won’t change anytime soon.
That's good! We don't patch executable binaries these days: we just compile a new one from source when we make a change. Similarly, more and more we just build new systems (or their images) from source, instead of mucking around with existing systems.
In principle, you can have just exactly the binary (or binaries) you need in the container or VM, without having a full Linux install.
See e.g. unikernels like Mirage.
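Short of a unikernel, a FROM scratch container image gets you most of the way there; a sketch, assuming a statically linked binary named fooapp:

  FROM scratch
  COPY fooapp /fooapp
  ENTRYPOINT ["/fooapp"]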
For me and my home network, if the default security mode is “manage users yourself”, I run chmod -R 777 on all applicable files and call it a day. NixOS lets me be lazy, as all NixOS modules (that I’ve ever used) have their own user setups with minimal permissions by default.
Not a mainframe perhaps, but this sentiment is flat wrong otherwise, because that is how Citrix and RDS (fka Terminal Server) do app virtualization. It's an approach in widespread use both for enterprise mobile/remote access and for thin clients in point-of-sale or kiosk applications. What's more, a *nix as the underlying infrastructure is far from unusual.
I have first-hand insider knowledge of two financial institutions that prefer this delivery model to manage the attack surface in retail settings, and a supermarket chain that prefers it because employee theft is seen as a problem. It’s also a model that is easy to describe and pitch to corporate CIOs, which is undoubtedly a merit in the eyes of many project managers.
One of the above financial institutions actually does still have an entire department of users logged in to an S/390 rented from IBM. They’ve been trying to discontinue the mainframe for years. I’m told there are similar continuing circumstances in airline reservations and credit card schemes; not just transaction processing, but connected interactive user sessions.
This is what corporate IT actually looks like. It is super different to the tech environments and white-collar head offices many of us think are the universal exemplar.
FWIW, shells have a (configurable) history file. I'm not sure how it compares to sudo's logging though. I also personally perform few day-to-day admin tasks (I don't have as much time or interest to toy around as I used to, and my current setup has been sufficient for about a decade).
> Nothing worse than ansible with its “sudo /tmp/whatever.sh” which hides what it’s doing.
That's a nightmare indeed; for sensitive and complex-enough tasks requiring a script, those scripts should at least be equipped with something as crude as a ``log() { printf '%s\n' "$*" | tee -a "$logfile"; }``.
Sudo exists to execute commands as a different user. It's an abbreviation of "switch user (then) do" for a reason.
Most daemons run under a specific user. Things like Docker that use a root daemon are a security nightmare.
That's the thing: with NixOS you usually don't have to explicitly set up users and permissions. For most simple services, the entire setup is a single line of code in your NixOS configuration. E.g.
services.uptime-kuma.enable = true;
will make sure that your system is running an uptime-kuma instance, with its own user and all. Some more complex software might require more configuration, but most of the time user and group setup is not part of that.
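Under the hood, such a module typically declares the service user for you. Roughly something like the following sketch (the actual uptime-kuma module may instead rely on systemd's DynamicUser):

  users.users.uptime-kuma = {
    isSystemUser = true;
    group = "uptime-kuma";
  };
  users.groups.uptime-kuma = { };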
Are they? My understanding was that by default, the `dockerd` (or whatever) is root and then all containers map to the same non-privileged user.
I understand academia has lots of different accounts.
Hence all containers are isolated from each other, not only at process level, but at user + cgroup level too.
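Worth checking on the host in question; on a stock install, uid 0 inside the container is uid 0 on the host, and the remapping is the opt-in userns-remap daemon setting:

  $ docker run --rm alpine id
  uid=0(root) gid=0(root) groups=0(root),...

  $ cat /etc/docker/daemon.json   # opt-in user-namespace remapping
  { "userns-remap": "default" }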
Apptainer: https://apptainer.org
[I'm all for replacing notions of privileges/permissions with capabilities.]
Second, there are the jobs users submit. These are often executed on separate nodes and their usage is managed. Here you have both user and cgroup limits in place. The cgroups make sure that jobs only have the required resources. The user authentication makes sure that the job can read/write data as the user. This way the user can work with their data on the interactive nodes.
So the two different systems have different rationales, and both are needed. It all depends on the context.
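Concretely, with a scheduler like Slurm (named here just as an example), both halves show up in one job script; a sketch with hypothetical names:

  #!/bin/bash
  #SBATCH --cpus-per-task=8    # resource limits, enforced on the node via cgroups
  #SBATCH --mem=32G
  #SBATCH --time=04:00:00
  srun ./simulate "$HOME/data/input"   # runs as, and reads/writes as, the submitting user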
The primary point of user-authentication is that we need to be able to read/write data and programs. So you have to have a user-level authentication mechanism someplace to be able to read and write data. cgroups are used primarily for restricting resources, so those two sets of restrictions are largely orthogonal to each other.
Second, user-authentication is almost always backed (at least on interactive nodes) by an LDAP or some other networked mechanism, so I'm not sure what "cloud" or "k8s" really adds here.
If you're trying to say that we should just run HPC jobs in the cloud, that's an option. It's not necessarily a great option from a long-term budget perspective, but it's an option.