←back to thread

441 points longcat | 1 comments | | HN request time: 0s | source
Show context
mdrzn ◴[] No.45039032[source]
the truly chilling part is using a local llm to find secrets. it's a new form of living off the land, where the malicious logic is in the prompt, not the code. this sidesteps most static analysis.

the entry point is the same old post-install problem we've never fixed, but the payload is next-gen. how do you even defend against malicious prompts?

replies(1): >>45040046 #
christophilus ◴[] No.45040046[source]
Run Claude Code in a locked down container or VM that has no access to sensitive data, and review all of the code it commits?
replies(2): >>45040397 #>>45041032 #
spacebanana7 ◴[] No.45041032[source]
Conceivably couldn’t a post install script be used for the malicious dependency to install its own instance of Claude code (or similar tool)?

In which case you couldn’t really separate your dev environment from a hostile LLM.

replies(2): >>45045892 #>>45051046 #
1. anon7000 ◴[] No.45045892[source]
Yes, though the attackers would have to pay for an account. In this case, it’s using a pre-installed, pre-authorized tool, using your own credits to hack you