466 points CoolCold | 3 comments
airocker ◴[] No.40215819[source]
I have seldom come across Unix multiuser environments being used for servers anymore. It's generally just one user on one physical machine nowadays. I understand run0's promise is still useful, but I would really like to see the whole Unix permission system simplified for just one user who has sudo access.
replies(17): >>40215898 #>>40216049 #>>40216052 #>>40216221 #>>40216591 #>>40216746 #>>40216794 #>>40216847 #>>40217413 #>>40217462 #>>40218411 #>>40219644 #>>40219888 #>>40220264 #>>40221109 #>>40223012 #>>40225619 #
mbreese ◴[] No.40216847[source]
> across Unix multiuser environments being used for servers anymore

I guess it depends on the servers. I'm in academic/research computing, and single-user systems are the anomaly. Part of it is having access to beefier systems for smaller slices of time, but most of it is being able to share data and collaborate with other users.

If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.

replies(1): >>40216876 #
shrimp_emoji ◴[] No.40216876[source]
> If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.

This is overwhelmingly the view for business and personal users. Setups like the one you described are very rare nowadays.

No corporate IT department is timesharing users on a mainframe. It's just bare-metal laptops or Windows VMs with networked mount points.

replies(3): >>40217191 #>>40217763 #>>40220135 #
mbreese ◴[] No.40217191[source]
Multi-user clusters are still quite common in HPC. And I don't think you're going to see a switch away from multi-user systems anytime soon. Single-user systems like laptops might be a good use case, but even the laptop I'm using now has separate accounts for me and my wife (and it's a Mac).

When you have one OS that is used on everything from phones to laptops to servers to HPC clusters, you're going to have this friction. Could Linux operate in a single-user mode? Of course. But does that really make sense for the other use cases?

replies(2): >>40217382 #>>40217636 #
whimsicalism ◴[] No.40217636[source]
Is it? Most HPC systems (if GPU clusters count) are probably in industry and managed with containers.
replies(2): >>40218964 #>>40219910 #
bayindirh ◴[] No.40219910[source]
HPC admin here.

Yes. First, we use user-level container systems like Apptainer/Singularity, and these containers run as the user themselves.

This is also the same for non-academic HPC systems.

From schedulers to accounting, everything is done at user level, and we have many, many users.

It won’t change anytime soon.
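
A minimal sketch of the "run as the user" point, assuming Apptainer is installed and using image.sif as a stand-in for whatever image the site provides:

    # On the host: note your own uid/gid
    id
    # Inside a user-level container: same identity, no root, no daemon
    apptainer exec image.sif id
    # Files written from inside the container land on the shared
    # filesystem owned by you, like any other process you run
    apptainer exec image.sif touch "$HOME/written_from_container"
    ls -l "$HOME/written_from_container"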

replies(2): >>40220850 #>>40227072 #
whimsicalism ◴[] No.40227072[source]
I thought most containers shared the same user, i.e. `dockremap` in the case of Docker.

I understand academia has lots of different accounts.
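
(For reference, the `dockremap` arrangement being described here is Docker's user-namespace remapping; roughly, as a sketch that assumes userns-remap has been enabled in the daemon config:)

    # /etc/docker/daemon.json on the host contains:
    #   { "userns-remap": "default" }
    # Docker then creates a `dockremap` user and maps container uids
    # onto its subordinate uid/gid ranges:
    grep dockremap /etc/subuid /etc/subgid
    # Inside the container the process still sees uid 0, but on the
    # host it runs inside dockremap's subuid range:
    docker run --rm alpine id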

replies(1): >>40227561 #
bayindirh ◴[] No.40227561[source]
Nope, full user-mode containers (e.g. Apptainer) run under the user's own context, and furthermore under a cgroup (if we're talking HPC/SLURM, at least) which restricts the user's resources to what they requested in their job file.

Hence all containers are isolated from each other, not only at process level, but at user + cgroup level too.

Apptainer: https://apptainer.org
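
As a concrete sketch of that setup (the image name, script, and resource numbers below are made up, and the cgroup confinement assumes the cluster runs SLURM with its cgroup plugin enabled):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --cpus-per-task=4      # SLURM places the job in a cgroup
    #SBATCH --mem=16G              #   limited to exactly these requests
    #SBATCH --time=02:00:00

    # The container starts under the submitting user's own uid/gid,
    # inside the job's cgroup - no privileged daemon involved.
    apptainer exec my_image.sif ./run_analysis.sh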

replies(1): >>40249497 #
airocker ◴[] No.40249497[source]
I think an admin would understand the system better if there were only one subsystem handling a particular type of security, not two. Two subsystems doing security would lead to more problems down the road.
replies(1): >>40284651 #
1. mbreese ◴[] No.40284651[source]
For HPC, there are two different contexts where users need to be considered - interactive use and batch job processing. Users log in to a cluster, write their scripts, work with files, etc. This is your typical user account stuff. But it's also where they submit jobs.

Second, there are the jobs users submit. These are often executed on separate nodes and the usage is managed. Here you have both user and cgroup limits in place. The cgroups make sure that the jobs only get the resources they requested. The user authentication makes sure that the job can read/write data as the user. This way the user can work with their data on the interactive nodes.

So the two different systems have different rationales, and both are needed. It all depends on the context.
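
A rough sketch of the two contexts side by side (the commands are standard SLURM; the script and output paths are made up):

    # Interactive context: a normal shell on a login node,
    # ordinary Unix accounts and file permissions
    vim run_analysis.sh         # write the job script as yourself
    sbatch run_analysis.sh      # hand it to the scheduler

    # Batch context: the scheduler runs the script on a compute node
    # under the same uid, inside a cgroup sized to the request
    squeue -u "$USER"           # watch the job
    ls -l results/              # output comes back owned by you on the
                                # shared filesystem, usable interactively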

replies(1): >>40314244 #
2. airocker ◴[] No.40314244[source]
If we forget how the current system is architected, we are looking at two problems. First, Linux capabilities are also dealing with isolating processes so that they have limited privileges, because user-based isolation alone is not enough. Second, local identity has no relation to cloud identity, which is undesirable. If we removed user-based authentication and relied on capabilities only, with identity served by the cloud or Kubernetes, it could be a simpler way to do authentication and authorization.
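
For reference, the capabilities alluded to here are the kernel's per-process privilege bits, which already exist independently of uids; a quick way to look at them with the standard libcap tools:

    # Capability sets of the current shell
    capsh --print
    # File capabilities granted to a binary (e.g. ping carries
    # cap_net_raw on some distributions instead of being setuid root)
    getcap /usr/bin/ping
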
replies(1): >>40314600 #
3. mbreese ◴[] No.40314600[source]
I'm not sure I even follow...

The primary point of user authentication is that we need to be able to read and write data and programs, so you have to have a user-level authentication mechanism someplace. cgroups are used primarily for restricting resources, so those two sets of restrictions are largely orthogonal to each other.
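
One way to see that orthogonality (a sketch; assumes a cgroup-v2 system, and the data path is made up):

    # Who may read or write the data: plain Unix ownership and permissions
    ls -l /shared/project/sample.bam
    # How much memory this process may use: the cgroup it was placed in
    cat "/sys/fs/cgroup$(cut -d: -f3 /proc/self/cgroup)/memory.max"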

Second, user authentication is almost always backed (at least on interactive nodes) by LDAP or some other networked mechanism, so I'm not sure what "cloud" or "k8s" really adds here.
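
(The networked part is easy to see with the standard NSS tools; "alice" here is a made-up account:)

    # The account resolves through NSS - typically sssd/LDAP on a
    # cluster - not from the node's local /etc/passwd
    getent passwd alice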

If you're trying to say that we should just run HPC jobs in the cloud, that's an option. It's not necessarily a great option from a long-term budget perspective, but it's an option.