
597 points doener | 4 comments
mapontosevenths ◴[] No.46181864[source]
It's been a very long time since I was a sysadmin, but I'm curious: what is managing a fleet of Linux desktops like today? Has it vastly improved?

When I last tried, in a small pilot program, it was incredibly primitive. Linux desktops were janky and manual compared to Active Directory and Group Policy, and an alternative to Intune/AAD didn't even seem to exist. Heck, even things like WSUS and WDS didn't seem to have an open version, or only had versions that required expensive, expert-level SMEs to perform constant fiddling. Meanwhile the Windows tools could be managed by 20-year-old admins with basic certifications.

Also, GRC and security seemed to be impossible back then. There was an utter lack of decent DLP tools, proper legal hold was difficult, EDR/AV solutions were primitive and the options were limited, etc.

Back then it was like nobody who had actually been a sysadmin had ever taken an honest crack at Linux, and all the hype was coming from home users who had no idea what herding boxen was actually like.

replies(5): >>46181979 #>>46182272 #>>46182348 #>>46183765 #>>46186223 #
1718627440 ◴[] No.46182348[source]
I think this comes primarily from trying to add a separate management tool on top, instead of leveraging the OS structure itself. There is a reason why most directories are specified to be read-only. Also, writable XOR persistent mostly holds: the only things required to be writable are /tmp, /var and /home. /tmp is wiped at least on every boot, or is even just a ramdisk. /var can be cached or reset to predefined settings on boot. /home needs to be managed, that is true. But you wouldn't want every user's directory on every host anyway; instead you want to populate them on login. That is typically done by libpam.
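Populating home directories on first login, as described above, is commonly done with the pam_mkhomedir module; a minimal sketch of the session line (the file path varies by distribution):

```
# /etc/pam.d/common-session (Debian/Ubuntu; RHEL uses /etc/pam.d/system-auth)
# Creates the user's home directory from /etc/skel if it does not exist yet
session required pam_mkhomedir.so skel=/etc/skel umask=0077
```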

/usr is expected to be shared among hosts; host-specific stuff goes into /usr/local for a reason, and as a sysadmin you can decide simply not to have host-specific software.

EDR/AV is basically unnecessary when you only mount things either writable or executable, never both. And you don't want users to start random software or mount random USB sticks anyway.
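The writable-XOR-executable layout, with /usr shared read-only across hosts, can be expressed directly in /etc/fstab. This is an illustrative sketch, not a drop-in config: the server name and devices are placeholders, and paths like /var/lib/dpkg may need exceptions for the package manager's maintainer scripts.

```
# /usr shared read-only from a central server (hypothetical export)
fileserver:/export/usr  /usr   nfs    ro,nosuid,nodev          0 0
# writable locations carry noexec, so nothing dropped there can run
tmpfs                   /tmp   tmpfs  rw,noexec,nosuid,nodev   0 0
/dev/vg0/var            /var   ext4   rw,noexec,nosuid,nodev   0 0
/dev/vg0/home           /home  ext4   rw,noexec,nosuid,nodev   0 0
```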

> Back then it was like nobody who had ever actually been a sysadmin had ever taken an honest crack at Linux and all the hype was coming from home users who had no idea what herding boxen was actually like.

Unix has over 50 years of history of being managed primarily by sysadmins rather than home users. While Linux is not Unix, it has inherited a lot. The whole system is basically designed to run a set of admin-configured software, and is actually less suitable for home users. I would say the primary problem was approaching it with a Windows mindset.

replies(4): >>46182491 #>>46182560 #>>46184305 #>>46184825 #
mapontosevenths ◴[] No.46182560[source]
> the primary problem was accessing it with a Windows mindset.

The early Unix systems you're talking about were mainframe-based. Modern client-server or P2P apps need an entirely different mindset and a different set of tools that Linux just didn't have the last time I looked.

When they audit the company for SOX, PCI-DSS, etc., we can't just shrug and say "Nah, we decided we don't need that stuff." That's actually a good thing, though, because if it were optional, well-meaning folks like you just wouldn't bother, and the company would wind up on the evening news.

replies(1): >>46182971 #
1. 1718627440 ◴[] No.46182971{3}[source]
> When they audit the company for SOX, PCI-DSS,

Maybe I am missing something, but that seems orthogonal to ensuring host integrity? I didn't argue against logging access and making things auditable; by all means do that. I argued against working against the OS.

It is not as if integrity-protection software doesn't exist for Linux (e.g. Tripwire); it is just different from Windows, since on Windows you have a system where the default is to let the user control the software and install random things, and you need to patch that ability away first. On Linux, software installation is typically controlled by the admin and tracked in a single package database (which makes it less suitable for home users), but that is exactly what you want on an admin-controlled system.
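The core idea behind tools like Tripwire, or verifying files against the package database with `dpkg --verify` / `rpm -Va`, is just "record checksums, then detect drift." A toy sketch of that mechanism (file names and contents are made up for illustration):

```shell
# Record a baseline checksum, tamper with the file, then detect the change,
# the same principle the package database's integrity check relies on.
d=$(mktemp -d)
echo "known content" > "$d/app.conf"
md5sum "$d/app.conf" > "$d/manifest"   # record the baseline
echo "tampered" > "$d/app.conf"        # simulate an unwanted change
if md5sum --check --quiet "$d/manifest" 2>/dev/null; then
  status="intact"
else
  status="modified"
fi
echo "$status"
rm -rf "$d"
```

Real integrity checkers additionally record ownership, permissions, and cryptographic signatures, but the detection loop is the same.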

Sure, computing paradigms have changed, but it is still a good idea to use OS isolation, such as not running programs with more rights than their user needs.
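That kind of least-privilege isolation maps directly onto systemd's native sandboxing; a minimal unit sketch (the service and user names are hypothetical):

```
# /etc/systemd/system/example-app.service (hypothetical service)
[Service]
User=appuser              # never runs as root
NoNewPrivileges=yes       # blocks setuid/capability escalation
ProtectSystem=strict      # mounts the whole file tree read-only for the service
ProtectHome=yes           # hides user home directories from the service
PrivateTmp=yes            # gives the service its own /tmp
```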

replies(2): >>46183057 #>>46189466 #
2. mapontosevenths ◴[] No.46183057[source]
I just mean to say that while you absolutely should configure the OS to a reasonable security baseline, you still need a real EDR product on top of it.

Even if security were "solved" in Linux (it's not), it would still often be illegal not to have an EDR and that's probably a good thing.

replies(1): >>46183211 #
3. 1718627440 ◴[] No.46183211[source]
> you also still need a real EDR product on top of it.

Well, that's my point. You don't need third-party software messing with the OS internals when the same thing can be provided by the OS directly. The real EDR product is the OS.
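As one concrete example of OS-native monitoring in the spirit of this argument, the kernel's audit subsystem (managed by auditd) can log program executions and writes to sensitive paths without any third-party agent; the rules below are illustrative:

```
# /etc/audit/rules.d/monitor.rules (illustrative)
-w /etc/passwd -p wa -k identity                    # log writes to the account database
-w /usr/bin -p wa -k binaries                       # log any change to system binaries
-a always,exit -F arch=b64 -S execve -k exec-log    # log every program execution
```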

4. mmooss ◴[] No.46189466[source]
> on Windows you have a system where the default way is to let the user control the software and install random things, and you need to patch that ability away first.

That's certainly not the default in a managed corporate environment. Even for home users, Microsoft restricts what you can install more and more.

And restrictions are not implemented via patches, but via management capabilities native to the OS, accessed via checkboxes in Group Policy.