If the attacker wants to use AI to assist in looking for valuables on your machine, they won't install AI on your machine, they'll use the remote shell software to pop a shell session, and ask AI they're running on one of their machines to look around in the shell for anything sensitive.
If an attacker has access to your unlocked computer, it is already game over, and LLM tools is quite far down the list of dangerous software they could install.
Maybe we should ban common RAT software first, like `ssh` and `TeamViewer`.
I guess that's on me for being oblivious enough that it took this obvious of a comment for me to be sure you're intentionally trolling. Nice work.
Actually they’ll just the AI you already have on your machine[0]
In this attack, the malware would use Claude Code (with your credentials) to scan your own machine.
Much easier than running the inference themselves!
[0]https://semgrep.dev/blog/2025/security-alert-nx-compromised-...