As an example, if you're at a FedRAMP High-certified service provider, the DoD wants to know that the devices your engineers use to maintain the service it pays for aren't running a rootkit, and that you can prove the employee using a given device isn't mishandling sensitive information.
EDIT: For additional context, I'd add that security/risk tradeoffs happen all the time. In practice, trusting Huntress isn't too different from trusting NPM with an engineer who has root access to their machine, or from any kind of centralized IT provisioning/patching setup.
EDR is a rootkit, premised on the idea that malware hashes are useless and that security needs complete insight into systems after a compromise. You can't root out an attacker with persistence without software that's as invasive as the malware itself.
And a managed SOC is shifting accountability to an extent, because it's often _far_ cheaper than the staff it takes to run a 24/7 SOC in-house. That's assuming you even have the talent to build a SOC instead of paying for a failed SOC build. Don't forget you also need backup staff to cover sick leave and vacation, and you'll be constantly hiring due to SOC burnout.
If all of this sounds like expensive band-aids instead of dealing with the underlying infection, it is: complex solutions to deal with complex attackers going after incredibly complex systems. But I haven't heard of any security solutions that reduce complexity and solve the deep underlying problems.
Not even theoretical solutions.
Other than "unplug it all".
Funny, my automatic assumption when using any US-based service or US-provided software is that, at a minimum, the NSA is reading over my shoulder, and that I have no idea who else can do the same, but that number is likely > 0. If there's anything I took away from the Snowden releases, it's that even the most paranoid of us weren't nearly paranoid enough.