
The Dangers of Microsoft Pluton

(gabrielsieben.tech)
733 points by gjsman-1000 | 1 comment
badrabbit ◴[] No.32235948[source]
HVCI is truly revolutionary: with it enabled you can no longer just dump lsass and grab credentials, among other use cases.

But to me, this all looks like MS building a house of cards again. If I am writing a rootkit or other malware, why can't I use this to make sure only the compromised device's secure processor can read the contents of memory, or does Defender get a pass?! Wouldn't that also stop a defender/analyst from dumping RAM with Volatility or a custom driver to analyze the malware/implant? No Microsoft solution entirely prevents a user from downloading and running an executable, so malicious code would still run, but can it now hide from security solutions? What part of HVCI am I missing?

As for the rest of it, it will break legitimate use cases for users, so I don't expect it to be a default anytime soon. I hate the remote attestation stuff, but my hope is that it will either fizzle out or that regulations will be put in place requiring user control of the secure-computing private key on personally owned devices, because code you can't introspect and keys you can't manage should not exist on a device you own (not license).
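
A quick way for a defender to check whether the protections discussed above are actually running on a given host is to query the documented Win32_DeviceGuard WMI class. The sketch below is illustrative only and assumes a Windows machine with PowerShell on PATH; the value meanings for SecurityServicesRunning (1 = Credential Guard, 2 = HVCI) follow Microsoft's documentation.

```python
# Illustrative sketch (assumes a Windows host with PowerShell on PATH):
# query the documented Win32_DeviceGuard WMI class to see whether VBS,
# HVCI, and Credential Guard are actually running.
import json
import subprocess


def device_guard_status() -> dict:
    """Return selected Win32_DeviceGuard properties as a dict."""
    ps = (
        "Get-CimInstance -Namespace root\\Microsoft\\Windows\\DeviceGuard "
        "-ClassName Win32_DeviceGuard | "
        "Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning | "
        "ConvertTo-Json"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    status = device_guard_status()
    running = status.get("SecurityServicesRunning") or []
    if isinstance(running, int):  # PowerShell may unwrap a single-element array
        running = [running]
    # Documented values: 1 = Credential Guard, 2 = HVCI.
    print("VBS status code:         ", status.get("VirtualizationBasedSecurityStatus"))
    print("Credential Guard running:", 1 in running)
    print("HVCI running:            ", 2 in running)
```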

replies(1): >>32240517 #
Harvesterify ◴[] No.32240517[source]
For now (and I haven't seen an announcement of a coming change about it), only trustlets signed by Microsoft can be executed in VSM (Virtual Secure Mode), so you won't be able to write malware or a rootkit that leverages it to hide its execution flow.
replies(1): >>32244068 #
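
One way to see this signing policy from the outside is to look at who signs a known trustlet image. The sketch below assumes a stock Windows install and uses PowerShell's Get-AuthenticodeSignature cmdlet to read the signature on lsaiso.exe, the isolated-LSA trustlet; it only inspects the file on disk and says nothing about what the secure kernel actually enforces.

```python
# Illustrative sketch (assumes a stock Windows install): read the Authenticode
# signature on lsaiso.exe, the isolated-LSA trustlet image, to see which
# certificate chain it carries. This inspects the file on disk only; it does
# not prove what the secure kernel will or won't load.
import json
import subprocess

TRUSTLET = r"C:\Windows\System32\lsaiso.exe"


def signature_info(path: str) -> dict:
    ps = (
        f"Get-AuthenticodeSignature -FilePath '{path}' | "
        "Select-Object @{n='Status';e={$_.Status.ToString()}}, "
        "@{n='Signer';e={$_.SignerCertificate.Subject}} | "
        "ConvertTo-Json"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    info = signature_info(TRUSTLET)
    print("Signature status:", info.get("Status"))
    print("Signer subject:  ", info.get("Signer"))
```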
badrabbit ◴[] No.32244068[source]
Thanks for clarifying. With drivers, attackers get around that by loading known-vulnerable signed drivers, but this isn't regular kernel-mode code execution, and MS will probably revoke certs for future vulnerable trustlets? (Or not, since that could cause outages.) Sounds like a whole new area of research.
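
On the vulnerable-driver point: the defensive counterpart is hashing the drivers present on a box and comparing them against a blocklist. The sketch below is illustrative only; the blocklist file name and JSON format are hypothetical stand-ins for a real curated feed or Microsoft's recommended driver block rules.

```python
# Illustrative sketch of the defensive counterpart to abusing vulnerable
# drivers: hash every .sys file in the drivers directory and flag any whose
# SHA-256 appears on a blocklist. The blocklist file name and JSON format
# here are hypothetical stand-ins for a real curated feed.
import hashlib
import json
from pathlib import Path

DRIVER_DIR = Path(r"C:\Windows\System32\drivers")
BLOCKLIST = Path("vulnerable_driver_sha256.json")  # hypothetical: ["<hex digest>", ...]


def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def find_blocked_drivers() -> list[Path]:
    known_bad = set(json.loads(BLOCKLIST.read_text()))
    return [p for p in DRIVER_DIR.glob("*.sys") if sha256_of(p) in known_bad]


if __name__ == "__main__":
    for driver in find_blocked_drivers():
        print("Known-vulnerable driver present:", driver)
```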