This should be a SEV0 at Google and Anthropic: they need to be all-hands on monitoring this and communicating it to the public.
Their communications should be immediate and fully transparent.
> What's novel about using LLMs for this work is the ability to offload much of the fingerprintable code to a prompt. This is impactful because it will be harder for tools that rely almost exclusively on Claude Code and other agentic AI / LLM CLI tools to detect malware.
But I don't buy it. First, the prompt itself is still fingerprintable; second, it's not very difficult to evade fingerprinting anyway, especially on Linux.
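To illustrate the evasion point (a hypothetical, minimal sketch; the file names are made up): any signature based on an exact hash of the payload breaks the moment a single byte changes, even one with no behavioral effect.

```shell
# Two scripts with identical behavior; b.sh differs only by a comment line.
printf '#!/bin/sh\necho payload\n' > a.sh
printf '#!/bin/sh\n# padding\necho payload\n' > b.sh
# A hash-based signature sees two unrelated files.
sha256sum a.sh b.sh
```

This is why hash- or string-based fingerprints are a weak defense on their own: trivial mutation defeats them, whether the mutating party is a human, a build script, or an LLM.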
For example, I protect my shell startup files with:

sudo chattr +i $HOME/.shrc
sudo chattr +i $HOME/.profile

to make them immutable. I also added:
alias unlock-shrc="sudo chattr -i $HOME/.shrc"
alias lock-shrc="sudo chattr +i $HOME/.shrc"
to my profile to make locking/unlocking a bit easier.
RCE implies ability to remotely execute arbitrary code on an affected system at will.
Yes, as I tried to make clear above, these are orthogonal. The supply chain attack is NOT an RCE; it's a delivery mechanism. The RCE is the execution of the attacker's code, regardless of how it got there.
> RCE implies ability to remotely execute arbitrary code on an affected system at will.
We'll have to disagree on this one, unless one of us can cite a definition from a source we both accept. Yes, RCE is frequently something an attacker can trigger without user action, but I don't think that changes the fact that the attacker achieves remote code execution. Whether the user triggers the attacker's code by `npm install`ing an infected package, or the attacker triggers it by sending an exploit packet to a vulnerable network service, isn't a big enough nuance, in my opinion, to disqualify it as RCE. By that logic, the user also had to start the vulnerable service in the first place, or even turn the computer on, so some user (not attacker) action is always required before the system is vulnerable.
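For a concrete sense of the `npm install` case: install-time execution comes from npm's lifecycle scripts. A hypothetical malicious package (the package and file names below are made up for illustration) only needs a `postinstall` entry in its `package.json`, and `npm install` runs that command with the installing user's privileges:

```json
{
  "name": "innocuous-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./setup.js"
  }
}
```

Here `./setup.js` can be anything the attacker ships; the user's `npm install` is merely what pulls the trigger, which is the crux of the disagreement above.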