I also find it kind of funny that the "blunder" mentioned in the title is, according to the article, ... installing Huntress's agent. Do they look at every customer's Google searches to see if they're suspicious too?
As an example, if you're at a FedRAMP High certified service provider, the DoD wants to know that the devices your engineers are using to maintain the service they pay for aren't running a rootkit and that you can prove that said employee using that device isn't mishandling sensitive information.
One of the tools they make is an Endpoint Detection and Response (EDR) product.
The kind of thing that goes on every laptop, server, and workstation in certain controlled environments (banks, government, etc.).
It was put there by your security team.
As a corporate IT tool, I can see how Huntress ought to allow my IT department or my manager or my corporate counsel access to my browser history and everything I do, but I'm still foggy on why Huntress grants themselves that level of access automatically.
Sure, a peek into what the bad guys do is neat, and the actual person here doesn't deserve privacy for his crimes, but I'd love a much clearer explanation of why they were able to do this to him and how, if I were an IT manager choosing to deploy this software, someone who works at Huntress wouldn't be able to just pull up the browser history of one of my employees or do any other investigating of their computers.
The problem to me is that this is the kind of thing you'd expect to see being done by a state intelligence organization with explicitly defined authorities to carry out surveillance of foreign attackers codified in law somewhere. For a private company to carry out a massive surveillance campaign against a target based on their own determination of the target's identity, and to then publish all of it, is much more legally questionable to me. It's already often ethically and legally murky enough when the state does it; for a private company to do it seems like they're operating well beyond their legal authority. I'd imagine (or hope, I guess) that they have a lawyer whom they consulted before this campaign as well as before this publication.
Either way, not a great advertisement for your EDR service to show everyone that you're shoulder surfing your customers' employees and potentially posting all that to the internet if you decide they're doing something wrong.
It's a relatively common model, with MDR and MSSP providers doing similar things. I don't see it as much with EDR providers though.
The machine was already known to the company, from previous activity, as belonging to a threat actor.
However, it's obvious that protection-ware like this is essentially spyware with alerts. My company uses a similar service, and it includes a remote desktop tool, which I immediately blocked from auto-startup. But the scanner, whatever it is, sends things to some central service. All in the name of security.
If folks understood this better, there would be less reason for software like Huntress's EDR to exist.
In general, if you're using a company-owned device (the target for this product and many others like it), you should always assume everything is logged.
Unless maybe you just want to develop a more personal relationship with your internal cybersecurity team, who knows.
As far as unique identifiers go, advertisers use a unique fingerprint of your browser to target you individually. Cookies, JavaScript properties, screen size, etc., are all used.
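For a rough sense of how that works, here's a minimal sketch (browser-side TypeScript, illustrative only; real ad networks combine far more signals than this): it concatenates a handful of navigator/screen properties and hashes them into a stable ID.

```typescript
// Toy browser fingerprint: join a few widely available signals and
// hash them. Not a real ad-tech implementation, just the core idea.
async function browserFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency ?? ""),
  ].join("|");
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  // Hex-encode the digest so it can be used as a tracking key.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

The point is that none of these signals is identifying on its own, but their combination often is, and no cookie needs to be stored for it to work.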
In the EU, employees have an expectation of privacy even on their corporate laptop. It is common for e.g. union workers to use corporate email to communicate, and the employer is not allowed to breach privacy here. Even chatter between workers is reasonably private by default.
I suspect, if the attacker is inside the EU, this article is technically a blatant breach of the GDPR. Not that the attacker will sue you for it, but customers might find this discomforting.
EDIT: For additional context, I'd add that security/risk tradeoffs happen all the time. In practice, trusting Huntress isn't too different from trusting NPM with an engineer who has root access to their machine, or any kind of centralized IT provisioning/patching setup.
The startup script you blocked could have just been a decoy, and blocking it could have set off a red flag.
A lot of these EDRs operate in kernel space.
I'm also slightly curious whether you might be associated with an EDR vendor. I notice that you only have three comments ever, and they all seem to be defending how EDR software and Huntress work without engaging with this specific instance.
So if <bad actor> in this writeup read your pitch and decided to install your agent to secure their attack machine, it sounds like they "trusted you with this access". You used that access to surveil them, decide that you didn't approve of their illegal activity, and publish it to the internet.
Why should any company "trust you with this access"? If one of your customers is doing what looks to one of your analysts to be cooking their books, do you surveil all of that activity and then make a blog post about them? "Hey everyone here, it's Huntress showing how <company> made the blunder of giving us access to their systems, so we did a little surprise finance audit of them!"
The key difference here is that pen testing, as well as IT testing, is very explicitly scoped out in a legal contract, and part of that is that users have to be told about, and consent to, monitoring for relevant business purposes.
What happened in this blog post is still outside of that scope, obviously. I doubt that Huntress could make the claim that their customer here was clearly told their activity might be monitored in this way, the way a "Consent to Monitoring" popup at every login on corporate machines does it.
This builds more trust with their customers and breaks trust with ... threat actors?
EDR is a rootkit based on the idea that malware hashes are useless, and security needs to get complete insight into systems after a compromise. You can't root out an attacker with persistence without software that's as invasive as the malware can get.
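The "malware hashes are useless" point is easy to demonstrate: repacking a binary, or flipping even a single byte, yields a completely different digest, so a blocklist of known-bad hashes no longer matches. A toy sketch (Node-flavored TypeScript, illustrative only, using a placeholder payload rather than anything real):

```typescript
import { createHash } from "node:crypto";

// Why hash blocklists are brittle: a one-byte change to a payload
// produces an entirely different SHA-256, so the "known bad" entry
// on the blocklist no longer matches.
const payload = Buffer.from("example malicious payload");
const knownBadHash = createHash("sha256").update(payload).digest("hex");

const repacked = Buffer.from(payload); // attacker repacks the binary...
repacked[0] ^= 0x01;                   // ...changing a single byte
const newHash = createHash("sha256").update(repacked).digest("hex");

console.log(newHash === knownBadHash); // false -- the blocklist misses it
```

This is why EDR vendors pivot to watching behavior (process trees, network calls, persistence mechanisms) instead of matching static signatures, and why the agent needs such deep visibility.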
And a managed SOC is shifting accountability to an extent because they are often _far_ cheaper than the staff it takes to have a 24/7 SOC. That's assuming you have the talent to build a SOC instead of paying for a failed SOC build. Also, don't forget that you need backup staff for sick leave and vacation. And you'll have to be constantly hiring due to SOC burnout.
If all of this sounds like expensive band-aids instead of dealing with the underlying infection, it is. It's complex solutions to deal with complex attackers going after incredibly complex systems. But I haven't really heard of security solutions that reduce complexity and solve the deep underlying problems.
Not even theoretical solutions.
Other than "unplug it all".
Strongly disagree. If they “knew exactly what they were doing” and installed this to do some analysis, they would have done it in a VM.
Either you snared a script kiddie, or your software download-and-install process that followed that Google Ads click was highly questionable.
Cybersecurity companies aren't passive data collectors like, say, Dropbox. They actively hunt for attacks in the data. To be clear, this goes way beyond MDR or EDR. The email security companies are hunting in your email, the network security companies are hunting in your network logs, so on. When they find things, they pick up the phone, and sometimes save you from wiring a million dollars to a bad guy or whatever.
The customer likes this very much, even if individual employees don't.
Funny, my automatic assumption when using any US based service or US provided software is that at a minimum the NSA is reading over my shoulder, and that I have no idea who else is able to do that, but that number is likely > 0. If there is anything that I took away from the Snowden releases then it was that even the most paranoid of us weren't nearly paranoid enough.