←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 1 comments | | HN request time: 0s | source
Show context
rustc ◴[] No.45030857[source]
> Malicious actors can hide instructions in websites, emails, and documents that trick AI into taking harmful actions without your knowledge, including:

> * Accessing your accounts or files

> * Sharing your private information

> * Making purchases on your behalf

> * Taking actions you never intended

This should really be at the top of the page and not one full screen below the "Try" button.

replies(7): >>45030952 #>>45030955 #>>45031179 #>>45031318 #>>45031361 #>>45031563 #>>45032137 #
strange_quark ◴[] No.45030955[source]
It's insane how we're throwing out decades of security research because it's slightly annoying to have to write your own emails.
replies(14): >>45030996 #>>45031030 #>>45031080 #>>45031091 #>>45031141 #>>45031161 #>>45031177 #>>45031201 #>>45031273 #>>45031319 #>>45031527 #>>45031531 #>>45031599 #>>45033910 #
rvz ◴[] No.45031080[source]
Then it's a great time to be a LLM security researcher then. Think about all the issues that attackers can do with these LLMs in the browser:

* Mislead agents to paying for goods with the wrong address

* Crypto wallets drained because the agent was told to send it to another wallet but it sent it to the wrong one.

* Account takeover via summarization, because a hidden comment told the agent additional hidden instructions.

* Sending your account details and passwords to another email address and telling the agent that the email was [company name] customer service.

All via prompt injection alone.

replies(2): >>45031379 #>>45031699 #
1. ◴[] No.45031379[source]