466 points CoolCold | 90 comments
1. airocker ◴[] No.40215819[source]
I have seldom come across unix multiuser environments getting used anymore for servers. It's generally just one user on one physical machine nowadays. I understand run0's promise is still useful, but I would really like to see the whole unix permission system simplified for just one user who has sudo access.
replies(17): >>40215898 #>>40216049 #>>40216052 #>>40216221 #>>40216591 #>>40216746 #>>40216794 #>>40216847 #>>40217413 #>>40217462 #>>40218411 #>>40219644 #>>40219888 #>>40220264 #>>40221109 #>>40223012 #>>40225619 #
2. rpgwaiter ◴[] No.40215898[source]
NixOS may be helping multiuser make a comeback; at least it is for me and my home servers. I no longer have to containerize my apps: I can have one baremetal server with a dozen+ services, all with their own users and permissions, and I don't have to actually think about any of the separation.

Plus there’s network shares. Multiple people in my home with linux PCs, each with their own slice of the NFS pie based on user perms. Sure, it’s not secure, but these are people I live with, not state-sponsored hackers.
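
For concreteness, the NFS side of this can be a single export plus per-user directory ownership; a sketch with hypothetical paths, names, and subnet (with plain AUTH_SYS the server trusts client uids, which is the "not secure" part):

    # /etc/exports -- one export; per-user "slices" are just owned subdirectories
    /srv/nfs  192.168.1.0/24(rw,root_squash,no_subtree_check)

    # on the server: each slice enforced by ordinary unix perms
    chown alice:alice /srv/nfs/alice
    chmod 700 /srv/nfs/alice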

All that said, I'd also love a simpler single-user perm setup. For VMs, containers, etc., it would be amazing.

replies(5): >>40215927 #>>40216146 #>>40216291 #>>40217505 #>>40219940 #
3. airocker ◴[] No.40215927[source]
NixOS at it again :)
4. TZubiri ◴[] No.40216049[source]
Access management is usually delegated to other systems that supervise UNIX, like AWS.
replies(1): >>40216074 #
5. gnufx ◴[] No.40216052[source]
Visit the research computing environment sometime, for instance. The liblzma SSH compromise was considered very worrying, after all.
replies(1): >>40216399 #
6. airocker ◴[] No.40216074[source]
Or Kubernetes. That's where a standard way of doing authentication/authorization should live.
7. adastra22 ◴[] No.40216146[source]
I’m not sure how “I don’t have to actually think about any of the separation” meshes with the fact that you explicitly set up multiple users and configured file and group permissions accordingly. You clearly put a lot of thought into it.

Alternatively, containers really are a no-thinking-required solution. Everything maximally isolated by default.

replies(3): >>40216833 #>>40216899 #>>40223828 #
8. mbivert ◴[] No.40216221[source]
I've never understood the need for sudo(1) on single-user, physical machines: I keep a root shell (su(1)) around for admin tasks, and it's always been sufficient.
replies(4): >>40216440 #>>40217134 #>>40217199 #>>40239486 #
9. tetris11 ◴[] No.40216291[source]
DietPi does exactly the same using Debian
10. richardwhiuk ◴[] No.40216399[source]
That didn't need multiple users.
replies(1): >>40229087 #
11. airocker ◴[] No.40216440[source]
It's maybe just an easier way to avoid having to go to a root shell.
replies(1): >>40216629 #
12. anonymous_union ◴[] No.40216591[source]
In some other systems the concept has become overloaded. Instead of multiple real people as users, different software with different permissions runs as different users. It's not a bad abstraction.
replies(1): >>40216660 #
13. mbivert ◴[] No.40216629{3}[source]
Makes sense (I keep one warm in a tmux, two shortcuts away at most, so it never occurred to me).
14. airocker ◴[] No.40216660[source]
Maybe containers are a better way of isolating processes as mentioned in other comments.
replies(1): >>40216829 #
15. berkes ◴[] No.40216746[source]
I always still split up "sysadmin" from "deploy".

Ephemeral setups (amongst which k8s) remove that need but introduce a big load of other stuff.

Having a VPS that is managed by sysadmins (users with sudo rights, authed with keys) and on which partly overlapping "deploy" users can write to small parts and maybe do a passwordless "sudo sysctl restart fooapp" but only that, is a nice and simple setup.

I manage at least seven of these. And nothing in me even considers porting this to my k8s infra.

Edit: The reason for this setup is simple and twofold. Deploy is safe and clear: deployers can be confident that whatever crap they pull, the server will churn on, data will be safe, and recovery is possible. And all devs/ops having their own keys and accounts gives a trail and logs, and makes it very easy to remove that contractor after she did her work.
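
For reference, the passwordless-but-narrow sudo part is one line in sudoers; a sketch with hypothetical user and unit names (systemctl, as corrected downthread):

    # /etc/sudoers.d/deploy -- edit with visudo
    deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart fooapp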

replies(2): >>40217436 #>>40219921 #
16. ◴[] No.40216794[source]
17. ◴[] No.40216829{3}[source]
18. oceanplexian ◴[] No.40216833{3}[source]
Containers are isolated, but a far, far cry from maximally isolated. They're still sharing a Linux kernel with some hand-waving and cgroups. The network isolation and QoS side is half-baked even in the most mature implementations.

HVM hypervisors were doing stronger, safer, better isolation 10 years ago than Docker does now. Containers are certainly no-thinking-required, though, which leads to the abysmal state of containerized security and performance we have currently.

19. mbreese ◴[] No.40216847[source]
> across unix multiuser environments getting used anymore for servers

I guess it depends on the servers. I'm in academic/research computing, and single-user systems are the anomaly. Part of it is having access to beefier systems for smaller slices of time, but most of it is being able to share data and collaborate between users.

If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.

replies(1): >>40216876 #
20. shrimp_emoji ◴[] No.40216876[source]
> If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.

This is overwhelmingly the view for business and personal users. Settings like what you described are very rare nowadays.

No corporate IT department is timesharing users on a mainframe. It's just baremetal laptops or VMs on Windows with networked mountpoints.

replies(3): >>40217191 #>>40217763 #>>40220135 #
21. airocker ◴[] No.40216899{3}[source]
There have been no big CVEs for container escapes in a while now. I guess it can be considered secure enough.
replies(1): >>40220912 #
22. lupusreal ◴[] No.40217134[source]
One password is easier than two and it feels weird to use the same password for both accounts. About half of my sudo invocations are 'sudo su' lmao.
replies(4): >>40217721 #>>40218811 #>>40219340 #>>40220212 #
23. mbreese ◴[] No.40217191{3}[source]
Multi-user clusters are still quite common in HPC. And I think you're not going to see a switch away from multi-user systems anytime soon. Single user systems like laptops might be a good use-case, but even the laptop I'm using now has different accounts for me and my wife (and it's a Mac).

When you have one OS that is used on devices from phones, to laptops, to servers, to HPC clusters, you're going to have this friction. Could Linux operate in a single-user mode? Of course. But does that really make sense for the other use-cases?

replies(2): >>40217382 #>>40217636 #
24. chgs ◴[] No.40217199[source]
Everything I run with sudo is logged so I know how I messed up.

Nothing worse than ansible with its “sudo /tmp/whatever.sh” which hides what it’s doing.

replies(1): >>40220288 #
25. airocker ◴[] No.40217382{4}[source]
You could potentially create multiple containers on that machine which are single-user, and give one to every user who needs access. CPU/memory/GPU can be assigned any way you want (shared/not shared). Now no user can mess up another user.
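
A sketch of the idea, assuming Docker with the NVIDIA container toolkit (names, sizes, and image hypothetical):

    # one container per person, with pinned resources
    docker run -d --name alice-env \
      --cpus 4 --memory 8g \
      --gpus '"device=0"' \
      some-dev-image
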
replies(3): >>40218780 #>>40220499 #>>40220652 #
26. NewJazz ◴[] No.40217413[source]
You only have one admin? How do you know who logged in, ssh certificates?
replies(2): >>40217574 #>>40217582 #
27. theteapot ◴[] No.40217436[source]
I think you mean systemctl.
replies(1): >>40218207 #
28. trueismywork ◴[] No.40217462[source]
You'll just end up implementing multiuser support anyway, due to different permissions for different devices/services.
replies(1): >>40217522 #
29. inhumantsar ◴[] No.40217505[source]
> i can have one baremetal server with a dozen+ services, all with their own users and permissions

I've used nixos and I don't really see how nixos is special apart from the declarative config. The same can/should be done with any distro and any config manager.

And unless you were running Podman in rootless mode, the same setup applies to containers too.

replies(1): >>40219945 #
30. airocker ◴[] No.40217522[source]
How about only on servers where you only have CPU/memory/disk/GPU with open-source, trusted drivers?
31. airocker ◴[] No.40217574[source]
Only one human per machine. If you need to share the machine, make multiple containers and give everyone a separate container.
replies(1): >>40218004 #
32. medellin ◴[] No.40217582[source]
Signed ssh certs make your life easy here
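
A minimal sketch of the workflow (key names and principals hypothetical):

    # sign a user's public key with your CA
    ssh-keygen -s user_ca -I alice -n alice -V +52w id_ed25519.pub

    # on each server, trust the CA (in sshd_config):
    # TrustedUserCAKeys /etc/ssh/user_ca.pub
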
replies(1): >>40232652 #
33. whimsicalism ◴[] No.40217636{4}[source]
Is it? Most HPC (if GPU clusters count) is probably in industry and managed by containers.
replies(2): >>40218964 #>>40219910 #
34. bmicraft ◴[] No.40217721{3}[source]
You could probably save a process with `sudo -i`
replies(1): >>40223090 #
35. twic ◴[] No.40217763{3}[source]
I wonder if they might be more common than you think. You will never see someone standing up at a conference and describing this setup, but there are millions of machines out there quietly doing work which are run by people who do not speak at conferences.

Where I work, we have a lot of physical machines. The IT staff own the root account, and development teams get some sort of normal user accounts, with highly restricted sudo.

36. NewJazz ◴[] No.40218004{3}[source]
You don't run any services where more than one person shares responsibility for managing that service? E.g. kubernetes. That is just one guy holding it up?
replies(1): >>40218279 #
37. 8372049 ◴[] No.40218207{3}[source]
He probably meant sysadmin as in the account with sudo access.
replies(1): >>40219349 #
38. airocker ◴[] No.40218279{4}[source]
In an on-prem cluster, yes one guy or a few sysadmins who either share passwords or can somehow put their keys in the authorized keys file and ssh.

In the cloud, AWS/GCP decide whether or not an IAM user can reach a server.

replies(1): >>40232645 #
39. jongjong ◴[] No.40218411[source]
Technically not with virtual machines as the hardware is shared, though I agree, nowadays accounts and access control of the system belong to the virtualization layer below. The benefits of multiple accounts per machine are tiny and not worth the complexity for server setups.

We could significantly simplify things by getting rid of the account system. The same could be said for a lot of systems like database servers. Typically it's just one database, one user (your application server) with full access. The account system is mostly an annoyance.

For big company use cases where you want to reduce attack surface, why not spawn a second server with different credentials? Anyway big companies typically have many database servers in a cluster and the same credentials are shared by many server processes... The tendency there is literally in the opposite direction.

replies(1): >>40219152 #
40. SoftTalker ◴[] No.40218780{5}[source]
It's not "that machine", it's a cluster of dozens or hundreds of machines that is partitioned in various ways and runs batch jobs submitted via a queuing system (probably Slurm).
41. MadnessASAP ◴[] No.40218811{3}[source]
You're entering your own account's password, not root's, when you use sudo. It's a security measure to prove your shell hasn't been hijacked and to make you pause and acknowledge you're running a command that may affect the entire system.

You can also disable it in the sudoers file.
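
E.g., a sketch of the relevant sudoers line (username hypothetical; edit with visudo):

    alice ALL=(ALL) NOPASSWD: ALL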

42. birdiesanders ◴[] No.40218964{5}[source]
Containers rely on many privilege separation systems to do what they do, they are in fact a rather extreme case of multi-user systems, but they tend to present as “single” user environs to the container’s processes.
replies(2): >>40225015 #>>40249510 #
43. zer00eyz ◴[] No.40219152[source]
>> Typically it's just one database, one user (your application server) with full access

This is a terrifying way to access databases.

A superuser; a modify user (just below super but can't delegate rights) for schema changes; a read/write app user... probably a pile of read-only users who have audit trails... You might want some admin or analytics users (who have their own schema additions).

The words security and audit trails all spring to mind.
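
A Postgres-flavored sketch of that split (role and schema names hypothetical):

    -- schema-change user, below superuser
    CREATE ROLE schema_admin LOGIN;
    GRANT ALL ON SCHEMA public TO schema_admin;

    -- read/write app user
    CREATE ROLE app_rw LOGIN;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;

    -- read-only analytics user
    CREATE ROLE analyst LOGIN;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO analyst;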

replies(1): >>40223896 #
44. lanstin ◴[] No.40219340{3}[source]
of mine are sudo bash.
45. BenjiWiebe ◴[] No.40219349{4}[source]
s/sysctl/systemctl/
replies(1): >>40220304 #
46. unixhero ◴[] No.40219644[source]
The humans are now spawns of multithreaded shells and other things. Linux land is still very multiuser-oriented. But it is the rise of the machines instead.
47. bayindirh ◴[] No.40219888[source]
Many, many daemons run under their own users. Just because a single human is using the system, it doesn’t mean the system has a single user.

Also, people have noted HPC and other still very relevant scenarios.

48. bayindirh ◴[] No.40219910{5}[source]
HPC admin here.

Yes. First, we use user-level container systems like apptainer/singularity, and these containers run under the user's own account.

This is also the same for non-academic HPC systems.

From schedulers to accounting, everything is done at user level, and we have many, many users.

It won’t change anytime soon.
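
Concretely, a job might look like this (image name and resource numbers hypothetical); the container process runs as the submitting user, inside the job's cgroup:

    srun --cpus-per-task=8 --mem=32G \
      apptainer exec my_image.sif python train.py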

replies(2): >>40220850 #>>40227072 #
49. eru ◴[] No.40219921[source]
Yes, we are moving more and more towards a system of immutable deployments.

That's good! We don't patch executable binaries these days: we just compile a new one from source when we make a change. Similarly, more and more we just build new systems (or their images) from source, instead of mucking around with existing systems.

50. eru ◴[] No.40219940[source]
Containerisation (either with containers or via VMs) doesn't have to be expensive.

In principle, you can have just exactly the binary (or binaries) you need in the container or VM, without having a full Linux install.

See e.g. unikernels like MirageOS.
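
In the container world the same idea looks like a FROM scratch image holding exactly one static binary; a sketch with a hypothetical binary name:

    FROM scratch
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]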

51. rpgwaiter ◴[] No.40219945{3}[source]
Sure I could do this on Debian, but like, I won't. Some software comes packaged with nice scripts to provision new users for running systemd services, but a lot does not.

For me and my home network, if the default security mode is “manage users yourself”, I chmod -R 777 on all applicable files and call it a day. NixOS lets me be lazy, as all NixOS modules (that I've ever used) have their own user setups with minimal permissions by default.

52. inopinatus ◴[] No.40220135{3}[source]
> No corporate IT department is timesharing users on a mainframe

Not a mainframe perhaps, but this sentiment is flat wrong otherwise, because that is how Citrix and RDS (fka Terminal Server) do app virtualization. It's an approach in widespread use both for enterprise mobile/remote access, and for thin clients in point of sale or booth applications. What's more, a *nix as the underlying infrastructure is far from unusual.

I have first-hand insider knowledge of two financial institutions that prefer this delivery model to manage the attack surface in retail settings, and a supermarket chain that prefers it because employee theft is seen as a problem. It’s also a model that is easy to describe and pitch to corporate CIOs, which is undoubtedly a merit in the eyes of many project managers.

One of the above financial institutions actually does still have an entire department of users logged in to an S/390 rented from IBM. They’ve been trying to discontinue the mainframe for years. I’m told there are similar continuing circumstances in airline reservations and credit card schemes; not just transaction processing, but connected interactive user sessions.

This is what corporate IT actually looks like. It is super different to the tech environments and white-collar head offices many of us think are the universal exemplar.

53. mbivert ◴[] No.40220212{3}[source]
> it feels weird to use the same password for both accounts

I'm not sure different passwords add more protection for single-user machines, especially when sudo(1) can spawn root shells!

54. badgersnake ◴[] No.40220264[source]
“I haven’t seen it” doesn’t mean it doesn’t exist.
replies(1): >>40225984 #
55. mbivert ◴[] No.40220288{3}[source]
> Everything I run with sudo is logged so I know how I messed up.

FWIW, shells have a (configurable) history file; I'm not sure how it compares to sudo's logging though. I also personally perform few day-to-day admin tasks (I don't have as much time or interest to toy around as I used to, and my current setup has been sufficient for about a decade).

> Nothing worse than ansible with its “sudo /tmp/whatever.sh” which hides what it’s doing.

That's a nightmare indeed; for sensitive and complex-enough tasks requiring a script, those scripts should at least be equipped with something as crude as a ``log() { printf ... >> "$logfile"; }``.

56. berkes ◴[] No.40220304{5}[source]
Correct. Typed it on mobile.
57. wongarsu ◴[] No.40220499{5}[source]
Isn't that just reinventing multiuser operating systems? Normal Linux already has the property that no user can mess up any other user (unless they are root or have sudo rights)
replies(1): >>40232607 #
58. weebull ◴[] No.40220652{5}[source]
Not containers, but cgroups, and that is how HPC clusters work today. You still need multiple users though.
59. pankajkumar229 ◴[] No.40220850{6}[source]
There is no reason for users to be maintained in the kernel.
replies(1): >>40221822 #
60. lyu07282 ◴[] No.40220912{4}[source]
A lot of kernel privescs are also technically container escapes, so two months ago was the last one, actually: https://www.cvedetails.com/cve/CVE-2024-1086/
replies(1): >>40225864 #
61. imtringued ◴[] No.40221109[source]
Do you run everything as root, or how am I supposed to understand that?

Sudo exists to execute commands as a different user. It's an abbreviation of "substitute user, do" for a reason.

Most daemons run under a specific user. Things like Docker that use a root daemon are a security nightmare.
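
For instance, pinning a service to its own unprivileged user is just a couple of unit directives; a sketch with hypothetical names:

    # fooapp.service -- runs as its own unprivileged user
    [Service]
    User=fooapp
    Group=fooapp
    ExecStart=/usr/local/bin/fooapp
    # or: DynamicUser=yes for a transient user allocated at start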

replies(1): >>40225955 #
62. bayindirh ◴[] No.40221822{7}[source]
Can you elaborate on that?
63. blablabla123 ◴[] No.40223012[source]
Yeah, but elevated permissions may be needed from time to time anyway, whether on the client, the baremetal server, or the container. Running everything as root is not recommended, even for containers. Considering how popular containers have become, it's a bit of an irony that systemd isn't available in a container without considerable detours.
replies(1): >>40226007 #
64. lupusreal ◴[] No.40223090{4}[source]
Slightly less convenient to type.
65. matrss ◴[] No.40223828{3}[source]
> I’m not sure how “I don’t have to actually think about any of the separation” meshes with the fact that you explicitly set up multiple users and configured file and group permissions accordingly. You clearly put a lot of thought into it.

That's the thing, with NixOS you usually don't have to explicitly set up users and permissions. For most simple services, the entire setup is a single line of code in your NixOS configuration. E.g.

    services.uptime-kuma.enable = true;
will make sure that your system is running an uptime-kuma instance, with its own user and all.

Some more complex software might require more configuration, but most of the time user and group setup is not part of that.
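
Even then it usually stays declarative; a hedged sketch (option names from memory, may not be exact), where the module still provisions its own user behind the scenes:

    services.nextcloud = {
      enable = true;
      hostName = "cloud.example.org";
    };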

66. jongjong ◴[] No.40223896{3}[source]
A simpler solution is to not give direct database access to anyone who doesn't own a large stake in the project. Expose it via a more restrictive CRUD interface with access control in the application layer.
67. whimsicalism ◴[] No.40225015{6}[source]
> they are in fact a rather extreme case of multi-user systems

Are they? My understanding was that by default, the `dockerd` (or whatever) is root and then all containers map to the same non-privileged user.
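
For what it's worth, that mapping is the opt-in userns-remap mode, configured in /etc/docker/daemon.json; with it, container uids map to a subordinate range owned by the dockremap user:

    { "userns-remap": "default" }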

68. hobobaggins ◴[] No.40225619[source]
We use Userify, which manages multiple user logins (via SSH) and sudo usage... There are definitely many, many use cases for teams logging into remote servers, and most security frameworks (PCI-DSS, HIPAA, NIST, ISO 27000) require separate credentials for separate humans. Sudo has some issues, but it works very well and is well understood by many different tools.
replies(1): >>40226128 #
69. airocker ◴[] No.40225864{5}[source]
but then even traditional multi-user would be compromised in this case.
70. airocker ◴[] No.40225955[source]
You don't need to use Docker. Containerd, or just direct cgroup manipulation: https://access.redhat.com/documentation/en-us/red_hat_enterp...
71. airocker ◴[] No.40225984[source]
Existing does not mean it should keep existing, if it is unnecessary complexity from the past.
72. airocker ◴[] No.40226007[source]
One user with sudo for sysadmins on baremetal, and sudo access without CAP_SYS_ADMIN in the container, should be good.
replies(1): >>40234393 #
73. airocker ◴[] No.40226128[source]
It could all be simplified and map one-to-one to your identity provider credentials at a higher level. Having a complicated user system on the servers makes it a problem.
replies(1): >>40226837 #
74. superq ◴[] No.40226837{3}[source]
Userify doesn't seem complicated... it is just Linux users, created with adduser just like you'd type in at the command line: https://github.com/userify/shim/blob/master/shim.py#L227
replies(1): >>40232630 #
75. whimsicalism ◴[] No.40227072{6}[source]
I thought most containers shared the same user, i.e. `dockremap` in the case of docker.

I understand academia has lots of different accounts.

replies(1): >>40227561 #
76. bayindirh ◴[] No.40227561{7}[source]
Nope, full usermode containers (e.g.: apptainer) run under the user's own context, and furthermore under a cgroup (if we're talking HPC/SLURM at least) which restricts the user's resources to what they requested in their job file.

Hence all containers are isolated from each other, not only at process level, but at user + cgroup level too.

Apptainer: https://apptainer.org

replies(1): >>40249497 #
77. gnufx ◴[] No.40229087{3}[source]
No, but that's the case I've overwhelmingly seen over the decades. Anyway, are you going to redesign ssh not to require a user, for instance? I assume you wouldn't want sshd running as the putative single user.

[I'm all for replacing notions of privileges/permissions with capabilities.]

replies(1): >>40236596 #
78. airocker ◴[] No.40232607{6}[source]
No, it is not.
79. superq ◴[] No.40232630{4}[source]
Seems it uses useradd, not adduser.
80. superq ◴[] No.40232645{5}[source]
That's convenient but doesn't scale and really isn't too great for security, for a bunch of reasons, but it can work great for smaller teams and minimizes friction.
81. superq ◴[] No.40232652{3}[source]
Maintaining your own PKI isn't exactly easy unless it's your full time job.
replies(1): >>40241997 #
82. blablabla123 ◴[] No.40234393{3}[source]
I like seeing qmail as a blueprint for how a secure app that needs elevated permissions should be designed; in fact, it has 7 users.
83. richardwhiuk ◴[] No.40236596{4}[source]
Yes, I'd rather the sshd daemon ran with a restricted set of capabilities.
84. anon291 ◴[] No.40239486[source]
Scripting.
85. medellin ◴[] No.40241997{4}[source]
It's fairly easy to get set up and, done correctly, pretty low-maintenance. But I have done it a few times at this point.
86. airocker ◴[] No.40249497{8}[source]
I think an admin would better understand the system if there were only one subsystem doing a particular type of security, not two. Two subsystems doing security will lead to more problems down the road.
replies(1): >>40284651 #
87. airocker ◴[] No.40249510{6}[source]
Good software hides complexity. The user should not have to understand user/group permissions, suid, etc.
88. mbreese ◴[] No.40284651{9}[source]
For HPC, there are two different contexts where users need to be considered: interactive use and batch job processing. Users log in to a cluster, write their scripts, work with files, etc. This is your typical user account stuff. But they also submit jobs here.

Second, there are the jobs users submit. These are often executed on separate nodes and the usage is managed. Here you have both user and cgroup limits in place. The cgroups make sure that the jobs only have the required resources. The user authentication makes sure that the job can read/write data as the user. This way the user can work with their data on the interactive nodes.

So the two different systems have different rationales, and both are needed. It all depends on the context.
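
A sketch of the batch side (directive values and script name hypothetical): the script runs under the submitting user's uid, and the scheduler wraps it in a cgroup sized to the request:

    #!/bin/bash
    #SBATCH --job-name=align
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --time=04:00:00

    # runs as the submitting user; the cgroup enforces the cpu/mem request
    ./run_alignment.sh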

replies(1): >>40314244 #
89. airocker ◴[] No.40314244{10}[source]
If we forget how the current system is architected, we are looking at two problems. The first is that Linux capabilities also deal with isolating processes so that they have limited privileges, because user-based isolation is not enough. The second is that local identity has no relation to cloud identity, which is undesirable. If we removed user-based authentication and relied on capabilities only, with identity served by the cloud or Kubernetes, it could be a simpler way to do authentication and authorization.
replies(1): >>40314600 #
90. mbreese ◴[] No.40314600{11}[source]
I'm not sure I even follow...

The primary point of user-authentication is that we need to be able to read/write data and programs. So you have to have a user-level authentication mechanism someplace to be able to read and write data. cgroups are used primarily for restricting resources, so those two sets of restrictions are largely orthogonal to each other.

Second, user-authentication is almost always backed (at least on interactive nodes) by an LDAP or some other networked mechanism, so I'm not sure what "cloud" or "k8s" really adds here.

If you're trying to say that we should just run HPC jobs in the cloud, that's an option. It's not necessarily a great option from a long-term budget perspective, but it's an option.