Wont work by default if I'm reading this correctly
If the attacker wants to use AI to assist in looking for valuables on your machine, they won't install AI on your machine, they'll use the remote shell software to pop a shell session, and ask AI they're running on one of their machines to look around in the shell for anything sensitive.
If an attacker has access to your unlocked computer, it is already game over, and LLM tools is quite far down the list of dangerous software they could install.
Maybe we should ban common RAT software first, like `ssh` and `TeamViewer`.
I guess that's on me for being oblivious enough that it took this obvious of a comment for me to be sure you're intentionally trolling. Nice work.
Actually they’ll just the AI you already have on your machine[0]
In this attack, the malware would use Claude Code (with your credentials) to scan your own machine.
Much easier than running the inference themselves!
[0]https://semgrep.dev/blog/2025/security-alert-nx-compromised-...
you can use Claude via bedrock and benefit from AWS trust
Gemini? Google owns your e-mail. Maybe you're even one of those weirdos who doesn't use Google for e-mail - I bet your recipient does.
so... they have your code, your secrets, etc.
For most corporate code (that is highly confidential) you still have proper internet access, but you sure as hell can't just send your code to all AI providers just because you want to, just because it's built into your IDE.
Don't be naive.