←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 3 comments | | HN request time: 0.691s | source
Show context
dfabulich ◴[] No.45034300[source]
Claude for Chrome seems to be walking right into the "lethal trifecta." https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

"The lethal trifecta of capabilities is:"

Access to your private data—one of the most common purposes of tools in the first place!

Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM

The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)

If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker.

replies(11): >>45034378 #>>45034587 #>>45034866 #>>45035318 #>>45035331 #>>45036212 #>>45036454 #>>45036497 #>>45036635 #>>45040651 #>>45041262 #
afarviral ◴[] No.45034587[source]
How would you go about making it more secure but still getting to have your cake too? Off the top my head, could you: a) only ingest text that can be OCRd or somehow determine if it is human readable b) make it so text from the web session is isolated from the model with respect to triggering an action. Then it's simply a tradeoff at that point.
replies(3): >>45034626 #>>45035055 #>>45035249 #
1. csomar ◴[] No.45035249[source]
In the future, any action with consequence will require crypto-withdrawal levels of security. Maybe even a face scan before you can complete it.
replies(1): >>45036301 #
2. ares623 ◴[] No.45036301[source]
Ahh technology. The cause of, and _solution to_, all of life’s problems.
replies(1): >>45036531 #
3. ◴[] No.45036531[source]