the truly chilling part is using a local llm to hunt for secrets. it's a new form of living off the land: the malicious logic lives in the prompt, not in the code, so most static analysis has nothing suspicious to flag.
the entry point is the same old post-install script problem we've never fixed, but the payload is next-gen. how do you even defend against a malicious prompt?
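
fwiw, the only partial answer i have is on the entry-point side, not the prompt side. a rough sketch, assuming an npm-style package manager (the comment doesn't name one): you can refuse to run lifecycle scripts entirely, either globally in .npmrc or per install.

    ; .npmrc — never run postinstall (or any lifecycle) scripts
    ignore-scripts=true

    # or as a one-off flag
    npm install --ignore-scripts

that blocks the initial execution, but it breaks packages that legitimately need build steps, and it does nothing about the malicious-prompt part once code is running by some other path.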
replies(1):