←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 3 comments | | HN request time: 0.001s | source
Show context
dfabulich ◴[] No.45034300[source]
Claude for Chrome seems to be walking right into the "lethal trifecta." https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

"The lethal trifecta of capabilities is:"

Access to your private data—one of the most common purposes of tools in the first place!

Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM

The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)

If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker.

replies(11): >>45034378 #>>45034587 #>>45034866 #>>45035318 #>>45035331 #>>45036212 #>>45036454 #>>45036497 #>>45036635 #>>45040651 #>>45041262 #
lionkor ◴[] No.45036497[source]
So far the accepted approach is to wrap all prompts in a security prompt that essentially says "please don't do anything bad".

> Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

https://news.ycombinator.com/item?id=41864014

> - Inclusion prompt: User's travel preferences and food choices - Exclusion prompt: Credit card details, passport number, SSN etc.

https://news.ycombinator.com/item?id=41450212

> "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you.”

https://news.ycombinator.com/item?id=44444293

etc.

replies(5): >>45036557 #>>45036600 #>>45036808 #>>45039393 #>>45040976 #
1. JyB ◴[] No.45039393[source]
No one think any form of "prompt engineering" "guardrails" are serious security measures right?
replies(1): >>45039475 #
2. lionkor ◴[] No.45039475[source]
Check the links I posted :) Some do think that, yes.
replies(1): >>45042532 #
3. int0x29 ◴[] No.45042532[source]
We need regulation. The stubborn refusal to treat injection attacks seriously will cost a lot of people their data or worse.