154 points by mellosouls | 69 comments
1. isatsam ◴[] No.45184197[source]
I don't work in cybersecurity and, after looking at the site's homepage, couldn't figure out from all the buzzwords what exactly this product is. The most concerning takeaway from this article for me is that the maintainers of Huntress (whatever it is) can keep a log of, as well as personally access, a user's browser history, history of launched executables, the device's hostname, and presumably a lot of other information. How is this product not a total security nightmare?
replies(12): >>45184282 #>>45184376 #>>45184533 #>>45184902 #>>45185067 #>>45185111 #>>45185367 #>>45185677 #>>45185868 #>>45185950 #>>45186020 #>>45190165 #
2. skulk ◴[] No.45184282[source]
It looks like Huntress is an "install this on your computer and we'll watch over your systems and keep you safe, for sure" kind of product.

I also find it kind of funny that the "blunder" mentioned in the title is, according to the article, ... installing Huntress's agent. Do they look at every customer's Google searches to see if they're suspicious too?

replies(5): >>45184436 #>>45185141 #>>45185166 #>>45185701 #>>45194025 #
3. d4mi3n ◴[] No.45184376[source]
It's definitely not a product for an individual user. Controls like this are useful in certain arenas where you need total visibility of corporate devices. As with any highly privileged tool or service, compromise of it can be a big problem. That said, the goal with tools like this is usually to lock down and keep a close eye on company-issued laptops and the like, so you know when one gets stolen, hit by some malware, or somebody does things with it they aren't allowed to be doing (e.g. exfiltrating corp data, watching porn at work, running unauthorized executables, connecting to problematic networks, etc.).

As an example, if you're at a FedRAMP High certified service provider, the DoD wants to know that the devices your engineers are using to maintain the service they pay for aren't running a rootkit and that you can prove that said employee using that device isn't mishandling sensitive information.

replies(1): >>45185452 #
4. neffy ◴[] No.45184436[source]
It's also a lot of assumptions. This probably is an attacker - or a wannabe at least. But you could be a student or researcher working through a cybersecurity course, and for some projects your search flow would look a lot like this.
replies(1): >>45185202 #
5. esseph ◴[] No.45184533[source]
Huntress is a security company.

One of the tools they make is an Endpoint Detection and Response (EDR) product.

The kind of thing that goes on every laptop, server, and workstation in certain controlled environments (banks, government, etc.).

6. politelemon ◴[] No.45184902[source]
If you work in any mid to large enterprise, there is a tool like this installed on your laptop.

It was put there by your security team.

7. jonstewart ◴[] No.45185067[source]
Their customers are companies. Almost every company of at least a certain size has one or more security tools installed on every host in the organization; these are called Endpoint Detection & Response (EDR) tools. Some marquee products are SentinelOne and CrowdStrike Falcon, but there are dozens. Huntress makes their own security tool but operates it for their customers as a service, which is called Managed Detection & Response (MDR). Everything on this page is legit.
8. xp84 ◴[] No.45185111[source]
I was also frustrated by this. I got about 25% of the way in and was annoyed that they still did such a poor job of communicating what their product is. An advertorial like this can often save the "And that's why Our Product is so great, it can protect you from attacks like these!" for the end, but here, where the article is about how merely installing their product gives Huntress the company full access to everything you do, it leaves me with more questions than answers.

As a corporate IT tool, I can see how Huntress ought to allow my IT department or my manager or my corporate counsel access to my browser history and everything I do, but I'm even still foggy on why Huntress grants themselves that level of access automatically.

Sure, a peek into what the bad guys do is neat, and the actual person here doesn't deserve privacy for his crimes, but I'd love a much clearer explanation of why they were able to do this to him and how if I were an IT manager choosing to deploy this software, someone who works at Huntress wouldn't be able to just pull up one of my employee's browser history or do any other investigating of their computers.

replies(1): >>45185248 #
9. pizzalife ◴[] No.45185141[source]
Indeed, this article makes them look bad. Seems completely tone deaf to release this as a puff piece about the product.
replies(1): >>45185322 #
10. mrbluecoat ◴[] No.45185166[source]
I found that creepy too. Apparently `blunder == installing their software`
replies(2): >>45185294 #>>45187951 #
11. viccis ◴[] No.45185202{3}[source]
They mention in the write up that they correlated certain indicators with what they had seen in other attacks to be reasonably sure they knew this was an active attacker.

The problem to me is that this is the kind of thing you'd expect to see being done by a state intelligence organization with explicitly defined authorities to carry out surveillance of foreign attackers codified in law somewhere. For a private company to carry out a massive surveillance campaign against a target based on their own determination of the target's identity and to then publish all of that is much more legally questionable to me. It's already often ethically and legally murky enough when the state does it; for a private company to do it seems like they're operating well beyond their legal authority. I'd imagine (or hope I guess) that they have a lawyer who they consulted before this campaign as well as before this publication.

Either way, not a great advertisement for your EDR service to show everyone that you're shoulder surfing your customers' employees and potentially posting all that to the internet if you decide they're doing something wrong.

replies(1): >>45185325 #
12. viccis ◴[] No.45185248[source]
Their product is advertised as "Managed EDR". That usually means they employ a SOC that will review alerts and then triage and orchestrate responses accordingly. The use case here is when your IT manager chooses to deploy this and gives them full visibility into your assets, because your company wants to effectively outsource security response.

It's a relatively common model, with MDR and MSSP providers doing similar things. I don't see it as much with EDR providers though.

13. fckgw ◴[] No.45185294{3}[source]
A threat actor installing software specifically designed to log and monitor attacks from threat actors would be considered a blunder, no?
14. cbisnett ◴[] No.45185322{3}[source]
Actually we just thought it was interesting that an attacker installed our EDR agent on the machine they use to attack their victims. That’s really bad operational security and we were able to learn a lot from that access.
replies(1): >>45186178 #
15. fckgw ◴[] No.45185325{4}[source]
> The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.

The machine was already known to the company as belonging to a threat actor from previous activity

replies(2): >>45187645 #>>45187680 #
16. cbisnett ◴[] No.45185367[source]
Thanks for the feedback on not understanding what we sell from the homepage. We sell an Endpoint Detection and Response (EDR) product that we manage with our 24/7 SOC. To perform the investigations on potentially malicious activity, we can fetch files from the endpoint and review them. We log all of this activity and make it available to our customers. We are an extension of their security team, which means they trust us with this access. We’ve been doing this for more than 10 years and have built up a pretty good reputation, but I can see how that would freak some folks out. We also sell to businesses, so this is something that would be installed on a work computer.
replies(3): >>45185521 #>>45185882 #>>45187740 #
17. isatsam ◴[] No.45185452[source]
This makes sense, but in this case, isn't the company behind Huntress having direct access to this data still a problem? For example, if the government purchased Outlook licenses, I'd assume the DoD can read clerks' emails, but Microsoft employees can't. I imagine that, in the worst case, compromising a lot of Huntress's users is just a question of compromising one of its developers, like one of the people in the authors section of this article.
replies(4): >>45186517 #>>45186821 #>>45190699 #>>45194684 #
18. isatsam ◴[] No.45185521[source]
How was an individual user (in this article's case, a phishing-site developer) able to install your software and seemingly not notice the level of access they gave you to their computer?
replies(2): >>45185683 #>>45186865 #
19. mc32 ◴[] No.45185677[source]
Those things are what MTR/MDR solutions do. They track where you go, what processes are running and what other processes they spawn, etc. It allows tenants to see how an exploit progresses or is stopped. These systems can also do web filtering for the tenant, as well as keep logs of what sessions get established, and so on. That's how these products work.
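
To make the kind of telemetry described above concrete, here is a minimal, illustrative Python sketch of an endpoint "sensor" that snapshots the hostname and running processes before shipping them off. It is not Huntress's agent; the field names are invented, and it relies on the third-party psutil library.

    # Illustrative endpoint telemetry sketch (not any vendor's actual agent).
    # Requires the third-party `psutil` package: pip install psutil
    import json
    import socket
    import time

    import psutil


    def collect_snapshot() -> dict:
        """Gather a minimal snapshot: hostname plus the current process list."""
        processes = []
        for proc in psutil.process_iter(attrs=["pid", "name", "exe", "username"]):
            processes.append(proc.info)  # dict limited to the requested attributes
        return {
            "hostname": socket.gethostname(),
            "timestamp": time.time(),
            "processes": processes,
        }


    if __name__ == "__main__":
        snapshot = collect_snapshot()
        # A real agent would POST this to its central service; here we just print it.
        print(json.dumps(snapshot, default=str)[:500], "...")

A real MDR agent streams data like this continuously, correlates it server-side, and layers on web filtering and session logging, but the shape of the data is roughly this.
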
20. cbisnett ◴[] No.45185683{3}[source]
Windows doesn't have application permissions like Mac, iOS, and Android. An app doesn't specify what it needs to be able to do; it inherits the permissions of the user that launched it. Not a great permissions model, but it's legacy all the way back to the earliest versions of Windows.
replies(1): >>45185869 #
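
A minimal sketch of the point above: a spawned process simply runs as whoever launched it, with all of that user's privileges, rather than declaring permissions in a manifest the way a mobile app does. This is illustrative Python, not anything from Huntress, and it behaves the same on Windows and Unix-like systems.

    # Illustrative: a child process inherits the launching user's identity,
    # so it can read and write anything that user can.
    import getpass
    import subprocess
    import sys

    print("parent running as:", getpass.getuser())

    # The child reports the same user name: it inherited our security context.
    subprocess.run(
        [sys.executable, "-c",
         "import getpass; print('child running as:', getpass.getuser())"],
        check=True,
    )

Mobile platforms gate this with per-app entitlements; a classic Windows desktop app simply gets whatever the logged-in user has.
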
21. tgv ◴[] No.45185701[source]
It's stated in the article: "The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent."

However, it's obvious that protection-ware like this is essentially spyware with alerts. My company uses a similar service, and it includes a remote desktop tool, which I immediately blocked from auto-startup. But the scanner, whatever it is, sends things to some central service. All in the name of security.

replies(2): >>45186618 #>>45187201 #
22. deedubaya ◴[] No.45185868[source]
It pains me how this comment illustrates how ignorant most folks are of the consequences of installing software off the internet (even technically inclined folks who hang out on HN). How many of us have non-security software installed on our computers today that does exactly these things... but sells the information? Definitely a non-zero number!

If folks understood this better, there would be less reason for software like Huntress' EDR to exist.

replies(2): >>45186307 #>>45192293 #
23. isatsam ◴[] No.45185869{4}[source]
This is a surprising response - I was expecting something like "they clicked past an alert notifying them that they were giving us this level of access". Just because Windows only has a generic password prompt whenever an app wants to do something dangerous doesn't mean you can't inform the user via your app's own UI. Others like AnyDesk do exactly that.
replies(2): >>45186274 #>>45186345 #
24. poemxo ◴[] No.45185882[source]
Is it clear to users that their system is monitored and that they have consented to screengrabbing? Unless those screenshots were merely simulated from the Chrome history.
replies(1): >>45186155 #
25. spogbiper ◴[] No.45185950[source]
If you work for a company that's bigger than a mom and pop, chances are very good that your IT department has this same level of access to any computer used in the organization. Huntress is basically an outsourced portion of the IT department for smaller companies that don't have their own 24/7 security team. It's a pretty common thing, with many vendors offering this type of service. Your work computer may have a similar product/service installed
replies(1): >>45186251 #
26. dboreham ◴[] No.45186020[source]
> couldn't exactly figure out from all the buzzwords what exactly is this product

I suspect this is deliberate.

27. spogbiper ◴[] No.45186155{3}[source]
This would generally be covered in your corporate acceptable use policy or employee handbook, wherever your employer describes what is allowable on corporate devices and what is monitored when you use them. Some companies also display a notification when you log in, along the lines of "This is an XYZ Corp system, all activity is logged and monitored for malicious behavior".

In general, if you're using a company-owned device (the target for this product and many others like it), you should always assume everything is logged.

replies(2): >>45186647 #>>45186666 #
28. ctoth ◴[] No.45186178{4}[source]
What is weird to me is that you have access to this information at all. It would make sense for the people who use your software ... the IT departments or whatever ... to have access, but why on earth do your engineers need access? What gates access to your customers' machines? What triggers a write-up like this? Hostnames, "machine names", are ... not unique by nature.
replies(1): >>45186646 #
29. ctoth ◴[] No.45186251[source]
This makes total sense... except who is the SMB in this case? It sounds like the person just downloaded this off the internet; it wasn't pre-installed by IT. So it sounds like Huntress has full and complete access to whoever downloads their software to try it out/demo it... and isn't afraid to use this access for their own purposes/just do a bit of poking around, because why not, when a hostname matches?
replies(2): >>45186408 #>>45187809 #
30. cybergreg ◴[] No.45186274{5}[source]
You’re really missing the point here. Huntress is an MDR, a cybersecurity company. They protect the endpoint by monitoring it for malicious activity and responding in kind. It’s what they do, not unlike Crowdstrike, Microsoft, etc. Generally a threat actor will install a security agent like this to find a bypass in order to attack more victims. They know exactly what they’re doing.
replies(1): >>45192830 #
31. ctoth ◴[] No.45186307[source]
I don't think anyone is unfamiliar with the consequences of installing potential malware. I think people are surprised that a seemingly? legit company is going off and having a little pokeabout on arbitrary computers based on nothing more than a hostname match. Then sharing screenshots on HN. I guess they're Canadian but wow does this seem to have CFAA written all over it?
replies(2): >>45186558 #>>45192250 #
32. spogbiper ◴[] No.45186345{5}[source]
This product is typically silently mass-deployed to all systems within an organization, completely unknown to the individual users. AFAIK there is no user interface or way to interact with the software from the computer; it's all managed in a central web console.
33. spogbiper ◴[] No.45186408{3}[source]
Yeah, I don't know about the legality or morality of what Huntress did here. I just know these types of products/this level of access are very common.
34. evanjrowley ◴[] No.45186517{3}[source]
Many businesses outsource their SOC to third parties like Huntress, Carbon Black, SentinelOne, all of whom offer very fancy Endpoint Detection and Response (EDR) tools. Just about every EDR solution is a Cloud/SaaS offering provided either directly or indirectly through a third-party Managed Service Provider (MSP). We call this Managed Detection and Response (MDR). From technical and privacy standpoints, it probably sounds like a huge risk, but it's also worth acknowledging that EDR companies operate immense threat intelligence platforms through real-time monitoring of customers. From a C-suite perspective, it makes a lot of sense to offload the specializations of real-time protection and malware analysis to EDR solutions. There are risk managers who have quantified the risk tolerance for these types of products/arrangements. The company legal department, the CFO, and the board of directors are all satisfied with the EDR solution's placement on the Gartner quadrant and the SOC Type 3 report saying the EDR provider follows best practices. Sometimes it's even a requirement for "cyber insurance", which a business may need depending on the industry. For better or for worse, EDR is how most institutions secure their IT infrastructure today.
replies(1): >>45190710 #
35. deedubaya ◴[] No.45186558{3}[source]
Where can I find that Potential Malware Inside™ sticker that warns everyone else who is familiar with the consequences? Asking for a friend!
replies(1): >>45186607 #
36. ctoth ◴[] No.45186607{4}[source]
https://www.etsy.com/market/malware_stickers or https://www.zazzle.com/alert_malware_detected_avoid_clicking...
37. boston_clone ◴[] No.45186618{3}[source]
Directly interfering with the enterprise-approved security tooling is really not a good idea, no matter your own personal opinions on its functionality.

Unless maybe you just want to develop a more personal relationship with your internal cybersecurity team, who knows.

replies(1): >>45194693 #
38. cybergreg ◴[] No.45186646{5}[source]
Huntress is a cybersecurity company. They’re specifically hired for this purpose, to protect the company and its assets.

As far as unique identifiers go, advertisers use a unique fingerprint of your browser to target you individually. Cookies, JavaScript, screen size, etc., are all used.

replies(2): >>45187357 #>>45189950 #
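
For a sense of how a fingerprint like the one mentioned above is assembled, here is a small, illustrative Python sketch: a handful of made-up browser attributes are canonicalized and hashed into a single identifier. Real trackers collect far more signals (canvas rendering, font lists, audio stack), but the principle is the same.

    # Illustrative browser-fingerprint sketch; attribute names/values are made up.
    import hashlib
    import json

    attributes = {
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "screen": "1920x1080x24",
        "timezone": "Europe/Berlin",
        "language": "en-US",
        "fonts": ["Arial", "Calibri", "Segoe UI"],
    }

    # Hash the canonical JSON form: identical configurations produce identical
    # IDs, which is what makes the combination usable as a tracking identifier.
    fingerprint = hashlib.sha256(
        json.dumps(attributes, sort_keys=True).encode()
    ).hexdigest()
    print(fingerprint)
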
39. hyperman1 ◴[] No.45186647{4}[source]
Is this true outside the USA?

In the EU, employees have an expectation of privacy even on their corporate laptop. It is common for e.g. union workers to use corporate email to communicate, and the employer is not allowed to breach privacy here. Even chatter between workers is reasonably private by default.

I suspect, if the attacker is inside the EU, this article is technically a blatant breach of the GDPR. Not that the attacker will sue you for it, but customers might find this discomforting.

replies(2): >>45187234 #>>45187789 #
40. cybergreg ◴[] No.45186666{4}[source]
In the US, on a corporate owned device there is no expectation of privacy.
41. d4mi3n ◴[] No.45186821{3}[source]
Oh, absolutely. There are some ways to avoid this--customer-managed encryption keys, for example--but there will always be some kind of trade-off. The less an EDR (endpoint detection & response) tool can see, the less useful it is. Going with a customer-managed encryption approach means the customer is then on the hook for watching and alerting on suspicious activity. Some orgs have the capacity and expertise to do this. Many do not. It often comes down to deciding whether you have the budget to do this yourself to a level you and an auditor/customer are comfortable with (and proving it), or outsourcing to a known and trusted expert.

EDIT: For additional context, I'd add that security/risk tradeoffs happen all the time. In practice trusting Huntress isn't too different than trusting NPM with an engineer that has root access to their machine or any kind of centralized IT provisioning/patching setup.

42. pcthrowaway ◴[] No.45186865{3}[source]
Poor English skills, if I had to guess; the article mentions they had to translate things, and they didn't read the ToS.
43. coppsilgold ◴[] No.45187201{3}[source]
I would assume any machine not owned by me is fully compromised, with no recovery possible, and treat it accordingly, such as using it only for the purpose the owner of the machine dictates, assuming I value that relationship.

The startup script you blocked could have just been a decoy - and blocking it could have set off a red flag.

A lot of these EDRs operate in kernel space.

44. spogbiper ◴[] No.45187234{5}[source]
It's an interesting question. Services like Huntress (there are many similar) only work by looking at what is happening on the computer. To some degree they are automated but there is a human review element to all of them where ultimately some person A will be looking at what some other person B did on the system. Not publishing it in a blog like this, but definitely violating the privacy of the valid user and/or a bad guy to some degree
45. ctoth ◴[] No.45187357{6}[source]
The article states that the "attacker" downloaded the software via a Google ad; it wasn't deployed by their corporate IT.

I'm also slightly curious as to whether you might be associated with an EDR vendor. I notice that you only have three comments ever, and they all seem to be defending how EDR software and Huntress work, without engaging with this specific instance.

replies(3): >>45188059 #>>45191259 #>>45193557 #
46. bornfreddy ◴[] No.45187645{5}[source]
That's not very convincing. They still abused trust placed in them - by an active attacker, granted, but still... This seems like a legally risky move and it doesn't inspire trust in Huntress.
replies(1): >>45188579 #
47. viccis ◴[] No.45187680{5}[source]
That is what I said, yes.
48. viccis ◴[] No.45187740[source]
>We are an extension of their security team, which means they trust us with this access

So if <bad actor> in this writeup read your pitch and decided to install your agent to secure their attack machine, it sounds like they "trusted you with this access". You used that access to surveil them, decide that you didn't approve of their illegal activity, and publish it to the internet.

Why should any company "trust you with this access"? If one of your customers is doing what looks to one of your analysts to be cooking their books, do you surveil all of that activity and then make a blog post about them? "Hey everyone here, it's Huntress showing how <company> made the blunder of giving us access to their systems, so we did a little surprise finance audit of them!"

49. viccis ◴[] No.45187789{5}[source]
I can't imagine pen testers would be able to work in the EU without being able to access individual workstations without the users' knowledge.

The key difference here is that pen testing, as well as IT testing, is very explicitly scoped out in a legal contract, and part of that is that users have to be told to consent to monitoring for relevant business purposes.

What happened in this blog post is still outside of that scope, obviously. I doubt that Huntress could make the claim that their customer here was clearly told that their activity might be monitored in the same way that a "Consent to Monitoring" popup for every login on corporate machines does.

50. viccis ◴[] No.45187809{3}[source]
Reminds me of when a HostGator employee told me on Reddit that he liked digging through people's websites and chatted with me about the stuff I had hosted on my website.
51. moffkalast ◴[] No.45187951{3}[source]
Well, they aren't wrong. CrowdStrike showed how much of a blunder it can become.
52. moffkalast ◴[] No.45188059{7}[source]
Yeah they're in full damage control after realizing how out of touch they are when not talking to corporate suits for once.
53. fckgw ◴[] No.45188579{6}[source]
Whose trust? Their job is to hunt down and research threat actors. The information gained from this is used to better protect their enterprise customers.

This builds more trust with their customers while breaking trust with ... threat actors?

replies(1): >>45188879 #
54. viccis ◴[] No.45188879{7}[source]
>Whose trust? Their job is to hunt down and research threat actors

No, their job is to provide EDR protection for their customers.

replies(1): >>45191211 #
55. ◴[] No.45189950{6}[source]
56. ◴[] No.45190165[source]
57. rcxdude ◴[] No.45190699{3}[source]
You would think so, but in general the kind of attitude to security that results in these kinds of products actively encourages increasing the number of entities that have very highly privileged access to your system. 'Supply chain attacks' and 'attack surface' don't really register in this area, but 'buy this and you will be more secure' sales pitches very much do, especially with a dose of FOMO from 'industry standard' rhetoric.
58. rcxdude ◴[] No.45190710{4}[source]
For worse, I would say. This kind of thing is about accountability shuffling and not at all about improving security.
replies(2): >>45191157 #>>45191219 #
59. cybergreg ◴[] No.45191157{5}[source]
Huh? Small and medium sized businesses have how much to spend on security? Let alone IT?
60. cybergreg ◴[] No.45191211{8}[source]
Threat intelligence is a thing. In fact, there are entire companies that sell just that, and entire government organizations that do just that.
61. NegativeK ◴[] No.45191219{5}[source]
I'm concerned that you're not familiar with EDR and organizations who flat out can't build a full 24/7 SOC. Which is the vast majority of businesses.

EDR is a rootkit based on the idea that malware hashes are useless, and security needs to get complete insight into systems after a compromise. You can't root out an attacker with persistence without software that's as invasive as the malware can get.

And a managed SOC is shifting accountability to an extent because they are often _far_ cheaper than the staff it takes to have a 24/7 SOC. That's assuming you have the talent to build a SOC instead of paying for a failed SOC build. Also, don't forget that you need backup staff for sick leave and vacation. And you'll have to be constantly hiring due to SOC burnout.

If all of this sounds like expensive band-aids instead of dealing with the underlying infection, it is. It's complex solutions to deal with complex attackers going after incredibly complex systems. But I haven't really heard of security solutions that reduce complexity and solve the deep underlying problems.

Not even theoretical solutions.

Other than "unplug it all".

62. cybergreg ◴[] No.45191259{7}[source]
Again, threat actors are well aware of what they're downloading. FWIW, I'm an offsec specialist. I spend a lot of time bypassing EDR. I'm just shocked at how little this crowd is aware of OpSec and threat intel. I'll crawl back into my Reddit hole.
63. couchridr ◴[] No.45192250{3}[source]
Probably the Huntress user agreed to hundreds of conditions when he clicked "I agree to the terms of service."
64. VladVladikoff ◴[] No.45192830{6}[source]
>They know exactly what they’re doing.

Strongly disagree. If they had installed this to do some analysis, they would have done it in a VM if they "knew exactly what they were doing".

Either you snared a script kiddie, or your software download and install process that followed that Google Ads click was highly questionable.

replies(1): >>45194637 #
65. FreakLegion ◴[] No.45193557{7}[source]
If you just want a different source, I can vouch for what cybergreg is saying.

Cybersecurity companies aren't passive data collectors like, say, Dropbox. They actively hunt for attacks in the data. To be clear, this goes way beyond MDR or EDR. The email security companies are hunting in your email, the network security companies are hunting in your network logs, so on. When they find things, they pick up the phone, and sometimes save you from wiring a million dollars to a bad guy or whatever.

The customer likes this very much, even if individual employees don't.

66. beefnugs ◴[] No.45194025[source]
Well, let's be real, you don't decide one day "today is the day we read one user's entire history" and BLAMMO it's a hacker! Let's keep reading!
67. galaxy_gas ◴[] No.45194637{7}[source]
I think it's obvious from the browser history in the blog post that it's a script kiddie, for sure.
68. jacquesm ◴[] No.45194684{3}[source]
> For example, if the government purchased Outlook licenses, I'd assume DoD can read clerks' emails, but Microsoft employees can't.

Funny, my automatic assumption when using any US based service or US provided software is that at a minimum the NSA is reading over my shoulder, and that I have no idea who else is able to do that, but that number is likely > 0. If there is anything that I took away from the Snowden releases then it was that even the most paranoid of us weren't nearly paranoid enough.

69. jacquesm ◴[] No.45194693{4}[source]
Or with the HR team and the corporate security guys assisting your departure from the building holding a small cardboard box.