If the creators read this, I suggest some ways of building trust. There’s no “about us”, no GitHub link, etc. It’s a random webpage that wants my personal details, and sends me an “exe”. The overlap of people who understand what this tool does, and people who would run that “exe”, is pretty small.
But perhaps I'm wrong
I guess if this gets enough attention, malware will just add more sophisticated checks and not just look at the exe name.
But on that note, I wondered the same thing at my last workplace where we'd only run windows in virtual machines. Sometimes these were quite outdated regarding system and browser updates, and some non-tech staff used them to browse random websites. They were never hit by any crypto malware and whatnot, which surprised me a lot at first, but at some point I realized the first thing you do as even a halfway decent malware author is checking whether you run in a virtualized environment.
Here is one on github:
But more sophisticated detection means bigger payload (making the malware easier to detect) and more complexity (making the malware harder to make / maintain), so mission accomplished.
Try running this on a Windows PC with Windows Defender off & just Scarecrow running. You could use the MaleX test kit [1] or a set of malware such as the Zoo collection [2] or something more current. I'd be very interested to see how many malware executables stop halfway through their installation after seeing a few bogus registry entries/background programs running. I'm not trying to imply it's worthless, but it needs some actual "real world" test results.
[1] https://github.com/Mayachitra-Inc/MaleX [2] https://github.com/ytisf/theZoo
I have just added a bit of info about us on the website. I'm not sure what else we can do really. It's a trust thing, same with any software and AV vendors.
Game anti-cheat code makes similar checks (arguably it is malware, but that's beside the point). So, running this might put you at risk of getting banned from your favourite game.
Hilarious aside: it would only work if you don't actually use multiple keyboard layouts -- otherwise the additional one would make switching between them very annoying [*].
It also mentions some other changes like adding RU keywords to your registry. Again, these measures would have many side effects, since lots of software actually uses these registry entries for legit reasons. So I don't know if this Cyber Scarecrow product would have this problem, since it modifies the registry, too.
1: https://krebsonsecurity.com/2021/05/try-this-one-weird-trick...
*: A little rant: as someone who uses three virtual keyboards (English, Chinese, Japanese), it is already a pain in the ass to switch between them, since MS does not follow "last used" switching order (like alt+tab). Instead, it just switches in one direction.
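For reference, the Krebs registry trick mentioned above comes down to one location: the Preload values under HKCU\Keyboard Layout list the installed layouts as KLID strings, and "00000419" is the Russian layout. A minimal decoding sketch, with registry access itself omitted (the helper name and sample values are mine):

```python
# The values under HKCU\Keyboard Layout\Preload are KLID strings,
# one per installed layout; "00000419" is the Russian layout.
RUSSIAN_KLID = "00000419"

def has_russian_layout(preload_values):
    """preload_values: list of KLID strings read from the Preload key."""
    return RUSSIAN_KLID in preload_values

print(has_russian_layout(["00000409", "00000419"]))  # True: US English + Russian
print(has_russian_layout(["00000409"]))              # False: US English only
```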
I suppose it could work like Sysinternals Process Explorer/Autoruns/etc & submit running hashes to Virustotal.com or other databases, but there's always the likelihood of false positives with that.
If you search GitHub for "malware samples", there are loads of them. Vx Underground also has a large collection [1]. So, I would go through there & look for commonalities to try and find what malware often tries to trigger on startup.
I'll just end with this example of an interesting form of trip wire I've seen in use on Windows PCs: ZoneAlarm makes an anti-ransomware tool whose name escapes me. It placed hidden files & folders in every directory on the hard drive. It would then monitor whether anything tried to access them - as ransomware would attempt to encrypt them - and force-kill all running programs in an attempt to shut down the malware before it could encrypt the entire HDD.
Also make it OSS and ask for donations. Not sure what your future earning model is, but it seems easy to replicate, and as pointed out several times, right now it asks users to blindly trust you.
Actually, I much prefer this order. Depending on what keyboard I currently use, I know exactly how often to switch instead of having to remember what I used previously. In fact, I don't even like this order when Alt+Tab'ing, it makes switching between more than two windows pretty inconsistent (yes, I know Windows+Number works, too).
Furthermore:
1. The Shift+Alt chord is obnoxiously unreliable, sensitive to which key comes down first, or something.
2. Japanese is always coming up in A mode even though you last had it in あ mode.
3. Bad performance: sllllow language switching at times: you hit some keyboard sequence for changing languages or modes within a language, and nothing happens. This interacts with (2): did we hit an unreliable chord? Or is it just slow to respond?
There are ways to establish trust, you aren’t doing any of them.
Downloading a random exe from a noname site/author to scare malware sounds like another crazy security recipe from your layman tech friend who installs registry cleaners and toggles random settings for “speed up”.
If you have malware probing your processes to decide whether it can run, you have a very serious problem regardless of what it decides: there is an entrance to your systems you don't know about.
For the cat to care about the mouse it needs to at least be a good appetizer.
The authors will want the malware to spread as far and wide as it can on e.g. a corporate network. So they need to make a risk assessment; if the malware stays on the current computer, is the risk of detection (over time, as the AV software gets updates) higher than the opportunity to use this host for nefarious purposes later?
The list[1] of processes simulated by cyber scarecrow is mostly related to being in a virtual machine though. Utilities like procmon/regmon might indicate the system is being used by a techie. I guess the malware author's assumption is that these machines will be better managed and monitored than the desktop/laptop systems used by office workers.
Malware authors add in this feature so that it’s harder for researchers to figure out how it works. They want to make reverse engineering their code more difficult.
I agree with everything else you said.
This software seems to fake some indicators that are used by malware to detect whether they're on a "real system" or a honeypot.
When someone is offering you a certificate and the only thing you have to do in order to get it is pay them a significant amount of money, that's a major red flag that it's either a scam or you're being extorted. Or both. In any case you should not pay them and neither should anyone else.
A paranoid online game like e.g. Test Drive Unlimited, might not launch because the OS says it's Windows Server 2008 (ask me how I know). A script in a Word document might not deliver its payload if there are no "recently opened documents".
The idea with this thing is to make the environment look suspicious by making it look like an environment where the malware is being deliberately executed in order to study its behaviour.
Not much software promises to fend off attackers, asks for an email address before download, and creates a bunch of processes using a closed-source dll whose existence can easily be checked.
Then again, not much malware targeting consumers at random checks for security software. You are more likely to see malware stop working if you fake the amount of RAM and CPU and your network driver vendor than if you have CrowdStrike, etc. running.
https://krebsonsecurity.com/2021/05/try-this-one-weird-trick...
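To make the "network driver vendor" point concrete: one common signal is the MAC address OUI (the first three bytes), since hypervisors ship well-known vendor prefixes. An illustrative sketch -- the prefix set below is a small sample I've chosen, not an exhaustive list:

```python
# Well-known hypervisor MAC OUI prefixes (illustrative sample).
VM_MAC_PREFIXES = {
    "00:05:69",  # VMware
    "00:0c:29",  # VMware
    "08:00:27",  # VirtualBox
    "00:15:5d",  # Hyper-V
}

def mac_suggests_vm(mac):
    """Check whether a MAC address carries a known hypervisor OUI."""
    return mac.lower()[:8] in VM_MAC_PREFIXES

print(mac_suggests_vm("08:00:27:12:34:56"))  # True  (VirtualBox OUI)
print(mac_suggests_vm("3c:22:fb:12:34:56"))  # False
```

Degrading that signal cuts both ways: spoofing a hypervisor OUI on real hardware is exactly the kind of cheap bluff a scarecrow relies on.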
Unfortunately (at least outside of HN) "people who understand what this tool does" probably isn't a subset of "people who would run that "exe"."
>They don't want to get caught and avoid computers that have security analysis or anti-malware tools on them.
Malware doesn't want to run in a sandbox environment (or in general when observed), because doing malicious things in the AV sandbox is a straight way to get blocked, and leaks C2 servers and other IoCs immediately. That's why most malware families[1] at least try to check if the machine they're running on is a sandbox/researcher pc/virtual machine.
I assume this is what this tool does. We joke at work that the easiest thing to do to make your windows immune to malware is to create a fake service and call it VBoxSVC.
[1] except, usually, ransomware, because ransomware is very straightforward and doesn't care about stealth anyway.
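The VBoxSVC joke isn't far off: a lot of commodity malware does little more than compare running process names against a blocklist. An illustrative sketch of that kind of check (the names are well-known VM/analysis indicators, but the exact list is mine):

```python
# Sketch of the process-name check commodity malware often performs.
# Illustrative indicator list, not exhaustive.
VM_INDICATORS = {
    "vboxsvc.exe",    # VirtualBox host service
    "vboxtray.exe",   # VirtualBox guest tray
    "vmtoolsd.exe",   # VMware Tools daemon
    "procmon.exe",    # Sysinternals Process Monitor
    "wireshark.exe",  # packet capture
}

def looks_like_analysis_box(running_processes):
    """Return True if any known indicator process appears to be running."""
    running = {name.lower() for name in running_processes}
    return not running.isdisjoint(VM_INDICATORS)

# A scarecrow only has to make this return True:
print(looks_like_analysis_box(["explorer.exe", "VBoxSVC.exe"]))  # True
print(looks_like_analysis_box(["explorer.exe", "chrome.exe"]))   # False
```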
Though I often see this implemented by calling GetKeyboardLayout, so this will only work if you actually use the Russian (or neighbourly) layout when malware detonation happens.
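For reference, GetKeyboardLayout returns an HKL handle whose low 16 bits are the Windows language identifier (LANGID) of the active layout; Russian is 0x0419. The decoding is trivial -- here as a pure-Python sketch with illustrative sample HKL values rather than a live Win32 call:

```python
# Low word of an HKL (as returned by GetKeyboardLayout) is the LANGID.
RUSSIAN_LANGID = 0x0419

def langid_from_hkl(hkl):
    """Extract the LANGID from an HKL value."""
    return hkl & 0xFFFF

def is_russian_layout(hkl):
    return langid_from_hkl(hkl) == RUSSIAN_LANGID

print(is_russian_layout(0x04190419))  # True  (Russian layout)
print(is_russian_layout(0x04090409))  # False (US English)
```

Which is exactly why merely *installing* the Russian layout may not fool this variant of the check: it reports the layout that is active at call time, not everything in the preload list.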
In my head, I'm also wondering why a botnet wouldn't just want to take over such a machine because they know for sure that it's a scarecrow. But security by obscurity is no way to instill trust here
As a side note, I’ve been trying to figure out how to get an EV code signing cert that isn’t tied to me (want to make a tool Microsoft won’t like and don’t want retaliation to hurt my business) but I haven’t come up with a way to do it - which is a good thing I suppose.
A low-effort activity that makes you not be the low-hanging fruit can often be worth it. For example, back in the '90s I moved my SSH port from 22 to ... not telling you! It's pretty easy to scan for SSH servers on alternate ports, but basically none of the worms do that.
Facing reality: anti-malware tooling is inadequate -- so inadequate, I haven't found a reason to purchase it for the one Windows machine I still have. People say "Defender works well enough now!" and I think that's an accurate way of putting it: anti-malware has an impossible job, and every vendor's failure to succeed at it makes that evident. So why pay for it?
It's always a cat-and-mouse game. This is an interesting approach, though, because it could shift the balance a little bit. Anti-malware's biggest problem is successfully identifying a threat while minimally interfering with the performance of an application. A mess of techniques are used to optimize this but when a file has to be scanned, it's expensive. It'd be interesting to see if it'd be possible to eliminate some variants of malware from on-demand scanning "if this tool defeats the malware as effectively", pushing scanning for those variants to an asynchronous process that allows the executable to run while it is being scanned.
I can see a lot of the problems with this kind of optimization[1]: it turns a "layer in the onion" into a replacement for an existing function which has more unknowns as far as attacks are concerned. Creating the environmental components required to "trick the malware" may be more expensive than just scanning. White-list scenarios may not be possible: I suspect anti-cheat services and potentially legitimate commercial software might be affected, as well[2] ... getting them to white-list a tool like this won't be easy unless the installed base is substantial. I suspect that "hiding the artifacts this tool creates to trick malware" from a white-listed processes might be impossible.
For at least a brief moment, this might be a useful tool in preventing infections from unknown threats. Brief, because -- by the author's own admissions (FAQ) -- it will devolve into a cat-and-mouse game if the tool is popular enough. There's another cat-and-mouse game, though. If this technique isn't resource intensive while offering protection somewhere in line with what it would take to implement, all of the anti-virus vendors will implement it -- including Microsoft. And they will be seen by customers as far better equipped to play "cat" or at least "the choice you won't get fired over."
And that's where it makes a whole lot of sense to open-source the product. It's a clever idea with a lot of unknowns and a very low likelihood of being a business. Unless it's being integrated into a larger security suite (same business challenges, but you have something of "a full product" as far as your customers are concerned), its only value (outside of purely altruistic ones) would be either "popping the tool on the author's related business's website" to bring people to a related business/service or as a way to promote the author's skill set (for consulting/resume reasons). I'm not arrogant enough to say there's no way to make money from it, I just can't see it -- at least, not one that would make enough money to offset the cost of the "cat and mouse" game.
[0] Which, yeah, "I wouldn't run it on my computer" but I give the authors enough of the benefit of the doubt that "it's new"
[1] Not the least of which being that I do not author AV software so I have nothing to tell me that any of my assumptions about on-demand scanning are correct.
[2] It used to be a common practice to make reverse engineering more difficult.
Would not recommend installing. It's someone's hobby project that runs as administrator.
Having "last used" order makes quickly switching between two windows very easy, which is something I personally use more. It's easier than pressing alt+tab/shift+alt+tab alternately.
To switch to the third window, you can use alt+tab+tab.
It is sad to hear that. In my view DRM = malware.
I'd be interested to see this tested, there's tons of good malware repos out there like vx-underground's collections that can be used to test it.
If you don't want to share the source, that's somewhat logical. Perhaps run a test against gigabytes of malware samples and let us know which ones actually query these process names / values you create and disable themselves as a result?
But, my experience with the antivirus was horrible. When I first opened the app there were popups everywhere advertising their other products, and the overall UI didn’t look trustworthy.
I am no security expert, so I’m asking: is this the best way to deal with malware?
https://ccadb.my.salesforce-sites.com/microsoft/IncludedCACe...
Also, in ANY modern Chinese IME (Microsoft or 3rd party), switching between English/中文 mode is simply pressing shift once. You still have to use alt+` for that in JP IME, which I find unbearable.
For example, I believe the anti-cheat software used by games like Fortnite looks for similar things -- my understanding is that it, too, will refuse to start when it is executing in a VM[0]. As a teenager (90s), I remember several applications/games refusing to start when I'd attached a tracing process to them. They did this to stop exactly what I was doing: trying to figure out how to defeat the software licensing code. I haven't had a need to do that since the turn of the century but I'd put $10 on that still being a thing.
So you end up with a "false positive", and like anti-virus software, it results in "denial of service." But does anti-virus's solution of "white list it" apply here? At least with their specific implementation, it's "on or off", but I wonder if it's even possible to alter the application in a way that could "white list a process so it doesn't see the 'malware defeat tricks' this exposes." If not, you'd just have to "turn off protection" when you were using that program. That might not be practical depending on the program. It's also not likely the vendor of that program will care that "an application which pretends it's doing things we don't like" breaks their application unless it represents a lot of their install base.
[0] I looked into it a few years ago b/c I run Tumbleweed and it's a game the kids enjoy (I'm not a huge fan but my gaming days have been behind me for a while, now) ... I had hoped to be able to expose my GPU to the VM enough to be able to play it but didn't bother trying after reading others' experiences.
Simply, if users are as intelligent as you think, they’re too intelligent to use your product.
Not an expert myself, but I think cleaning up and reinstalling your whole OS once in a while probably deals with malware.
Always the same bullshit with you people here. It could never possibly be that someone built a sub-optimal system -- it HAD to be management fucking with our good intentions!
But even more simply, just setting your SSH port to something >10000 is enough to get away with a very mediocre password. It's mostly really not about being a hard target, not being the easiest one is likely quite sufficient :)
But this literally comes off as probably being malware itself.
If you're going to ship something like this, it needs to be open source, preferably with a GitHub pipeline so I can see the full build process.
You also run into the elephant repellent problem. The best defense to malware will always be regular backups and a willingness to wipe your computer if things go wrong.
I used Google to search for "list of microsoft trusted CA".
The problem is "the fake components" would have to be prevented from being detected by legitimate software, and the only way I can think to do that would be to execute everything in a sandbox that is capable of: (a) hiding some contained running processes (the fake ones) from the rest of the OS while (b) allowing the process that "sees the fake stuff" to be seen by everything else "like any old process."
Applying ACLs (and restricting white-listed processes) might work in some cases; might equally just be seen as a permissions problem and result in a nonsensical error (because the developers never imagined someone would change the permissions on an obvious key), or it might be that the "trick" employed is "Adding a Russian Keyboard" which can be very disruptive to the user "if they use more than one input language" or "is one of those places where a program may read from there never expecting to encounter an error."
A lot of this seems like it would require use of containerization -- docker/docker-like -- for Windows apps. I'm familiar with a few offerings here and there, but I've worked with none of them and I run Linux more than Windows these days. So my questions really boil down to:
Where's Windows containerization at? Would it be possible to run an application in a docker or docker-like container with a Windows kernel whose environment can be controlled in a manner that is transparent to the application running within the container? Is there any other approach which would allow "non-white-listed applications" to run containerized and "see the Scarecrow artifacts", while allowing the white-listed applications[1] to run outside of the container in a manner that hides some of the processes within it? And could it do all of that in a way that couldn't be defeated by repeating the same "check" immediately after confirming an Elevation dialog[2] from the white-listed application?
Again, that's assuming "this is a brilliant idea" -- and there's some evidence that as a concept, at least, it would help (ignoring this particular implementation of the idea) -- but it still suffers from its own success: the extent to which it helps/is adopted determines how long any of these techniques go undefeated. And just from the sense I get of the complexities required to "implement this in a manner that legitimate software won't fail, too", I suspect it will be easier to defeat a tool like this than it will be to protect against its defeat. In other words, the attacker is a healthy young cat chasing a tired old mouse.
[0] Anti-cheat being the most obvious, but those are often indistinguishable from malware. I'd encountered plenty of games/apps in the 90s that refused to run when I ran software to trace aspects of their memory interaction. I had some weird accounting app that somehow figured out when my own code (well, code I mostly borrowed from other implementations) was used for the same purpose.
[1] The assumption being that "a legitimate application which does these kinds of checks" is also likely to refuse to run within a container unless it's impossible to detect the container as reliably as everything else (and vendors are completely tolerant of false positives if the affected customers don't represent enough in terms of profit, or the solution is "don't run that unusual security software when you run ours").
[2] I've seen it enough with Easy Anti-cheat that I just click "Yes" like a drone. I have no idea why this happens -- on a few occasions, I had no update applied between loads but had installed other software, so it could have been "to fix something that software broke", but it could also have been "to re-evaluate the environment as an administrator because something changed enough on the system to warrant a re-check that it is still compliant with the rules".
https://krebsonsecurity.com/2021/05/try-this-one-weird-trick...
There is also a difference when using commercial software such as VMware instead of QEMU or VirtualBox, as open source is easier to tailor to the specific purpose -- in this case, cheating.
In the end, this approach works well for slowing down malware, as there is little risk in letting normal software run inside a VM, in contrast to malware, which has to be extra paranoid in order to avoid as many tar pits as possible.
Not for hacking non-citizens
Doesn't exist. Not even UAC is a reliable security boundary. Likely, it will never exist.
> Is there any other approach which would allow for "non-white-listed applications" to run containerized and "see the Scarecrow artifacts",
Sounds a bit like WoW64. It should be easy enough to replicate this behaviour with a rootkit. However, the software would always be able to peek behind the curtain.
> In other words, the attacker is a healthy young cat chasing a tired old mouse.
I always thought of the attackers as the mice, and anti-malware folk as the cats.
Also are you aware of the (very awesome) EDR evasion toolkit called scarecrow? Naming stuff is hard, I get that, but this collision is a bit much IMO.
"Hey you better buy my elephant repellent so you don't get attacked!"
'Okay.'
...
"So were you attacked?"
'No, I live in San Francisco and there are no wild elephants.'
"Well, I guess the repellent is working!"
We need a chatGPT version of LMGTFY...
* code signing certificate funding
* consulting/assessment to harden the application or concept itself as well as to make it more robust (they'll probably route through Cure53)
* consulting/engineering to solve for the "malware detects this executable and decides that the other indicators can be ignored" problem, or consulting more generally on how to do this in a way that's more resilient.
If you wanted to fund this in some way without necessarily doing the typical founder slog, might make sense to 501c3 in the US and then get funded by or license this to security tooling manufacturers so that it can be embedded into security tools, or to research the model with funding from across the security industry so that the allergic reaction by malware groups to security tooling can be exploited more systemically.
I imagine the final state of this effort might be that security companies could be willing to license decoy versions of their toolkits to everyone that are bitwise identical to actual running versions but then activate production functionality with the right key.
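One hypothetical way that "activate production functionality with the right key" could work -- this scheme and every name in it are my own sketch, not anything a vendor has described -- is an HMAC-based activation tag checked in constant time:

```python
import hashlib
import hmac

# Hypothetical: a "decoy" build ships bitwise-identical to the real tool
# and only enables production functionality when a licensed tag is present.
SECRET = b"vendor-distributed-secret"  # illustrative placeholder

def activation_tag(customer_id: str) -> str:
    """Tag the vendor would issue to a paying customer."""
    return hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()

def is_production(customer_id: str, supplied_tag: str) -> bool:
    """Constant-time comparison avoids leaking the tag via timing."""
    return hmac.compare_digest(activation_tag(customer_id), supplied_tag)

tag = activation_tag("acme-corp")
print(is_production("acme-corp", tag))      # True  -> run real functionality
print(is_production("acme-corp", "bogus"))  # False -> stay a scarecrow
```

The obvious catch is that the secret has to live somewhere the decoy binary can reach, so this only raises the bar rather than making the decoy indistinguishable to a determined reverser.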
Here's a caveat though: Attackers will at some point notice scarecrows and simply work around them. Now suuure, if you have a better lock than your neighbours, that decreases your chances of getting broken into, but in the end this is a classic "security by obscurity" measure. So if your time and computer/data is valuable, I would rather invest in other security measures (firewall, awareness training, backups etc.)
- providing fake manifests to hardware drivers commonly associated with virtual machines
- active process inspector handles
- presence of any software signed by Hex-Rays (the ini file is usually enough)
The major upside is the pricing: currently "free" [3] during testing, later about 10 USD/month. As there doesn't seem to be a revocation mechanism based on some docs I read, signed binaries might be valid even after a canceled subscription.
[1] https://azure.microsoft.com/en-us/products/trusted-signing
[2] https://learn.microsoft.com/en-us/azure/trusted-signing/quic...
[3] You need a CC and they will likely charge you at some point. Also I had to use some kind of business Azure/MS 365 account which costs about 5 USD/month. Not sure about the exact lingo, not an Azure/MS expert. The docs in [2] was enough for me to get through the process.
All code signing promises to give you the name of a real person or company that signed the binary. From there it's the end user's responsibility to decide if they trust that entity.
In practice the threat of the justice system makes any signed executable unlikely to be malicious. But that doesn't mean you have to uncritically trust a binary signed by Joe Hobo
This would be a boon for security folk who analyze/reverse malware: they can add/simulate this tool in their VMs to ensure the malware being analyzed doesn't deactivate itself!
Malware uses signals to determine if it is running in a VM. If we can degrade those signals, malware authors will have to play a cat-and-mouse game trying to avoid VMs.
The less clear it is if a process is running in a VM, the easier time security researchers will have testing exploits found in the wild.
I kinda think this functionality could be subverted into a kill switch for legit-licensed installs simply by altering the key.
- commenting under a pseudonymous profile
- asking for emails by saying "please email me. contact at cyberscarecrow.com"
- describing yourself in your FAQ entry for "Who are you?" by writing "We are cyber security researchers, living in the UK. We built cyber scarecrow to run on our own computers and decided to share it for others to use it too."
I frequently use pseudonymous profiles for various things but they are NOT a good way to establish trust.
Given how easy and free tools like WireGuard are to set up now (thanks Tailscale!), I really don't understand why folks feel the need to map SSH access to a publicly exposed port at all anymore for the most part, even for throwaway side projects.
If these were laypeople that would then give up, sure.
But I'm surprised that it's even worth malware authors' time to put in these checks. I can't imagine there's even a single case of where it stopped malware researchers in the end. What, so it takes the researchers a few hours or a couple of days longer? Why would malware authors even bother?
(What I can understand is malware that will spread through as many types of systems as possible, but only "activate" the bad behavior on a specific type of system. But that's totally different -- a whitelist related to its intended purpose, not a blacklist to avoid security researchers.)
If you start down this path you will end up in mindgame hell.
What threats are those? Where are all the people going to jail for falsely signed software? The stuxnet authors seem to be in the wind.
Of course people stealing other people's signing keys is an issue. But EV code signing certificates are pretty well protected (requiring either a hardware dongle or 2FA). It's not impossible for a highly sophisticated attacker, but it's a pretty high bar.
Similarly, there have also been malware that will deactivate itself when it detects signs of the computer being Russian; Russia doesn't really care about Russian hackers attacking foreign countries (but they'll crack down on malware spreading within Russia, when detected) so for Russian malware authors (and malware authors pretending to be Russian) it's a good idea not to spread to Russian computers. This has the funny side effect of simply adding a Russian keyboard layout being enough to prevent infection from some specific strains of malware.
This is less common among the "download trustedsteam.exe to update your whatsapp today" malware and random attack scripts and more likely to happen in targeted attacks at specific targets.
This tactic probably won't do anything against the kind of malware that's in pirated games and drive-by downloads (which is probably what most infections are) as I don't think the VM evasion tactics are necessary for those. It may help protect against the kind of malware human rights activists and journalists can face, though. I don't know if I'd trust this particular piece of software to do it, but it'll work in theory. I'm sure malware authors will update their code to detect this software if this approach ever takes off.
I would pay a small amount for a scarecrow version of AV software if a) it had zero footprint on my system resources, and b) it really did scare away malware that checks for such things.
Either way, though, it makes malware more onerous to develop since it has to bundle in public keys in order to verify running processes are correctly signed.
It's obviously an arms race when it comes to malware, but this could be a significant step forward on the defensive side, forcing malware developers to evolve their TTPs.
Lots of people on HN could easily spin up their own fake processes if they knew the names?
People, this is malware. Please don’t fall for it.
I don’t think it’s wise to leave this on the front page. I hope dang agrees and takes it down.
That said, EV certs jumped in price over the past couple years. The total cost ends up being higher than the list price -- vendors tack on a non-trivial extra fee for the USB hardware token and shipping. All-inclusive I paid like $450 a year ago, and that was after getting a small repeat-customer discount.
So yes, Azure's service is substantially cheaper than an EV cert. And it also has the flexibility of being a monthly plan, rather than an annual commitment.
Fundamentally, it makes no sense to expose low level server access mechanisms to anyone other than yourself/team - there is no need for this to sit listening on a public port, almost ever.