
Claude for Chrome

(www.anthropic.com)
795 points | davidbarker
rustc ◴[] No.45030857[source]
> Malicious actors can hide instructions in websites, emails, and documents that trick AI into taking harmful actions without your knowledge, including:

> * Accessing your accounts or files

> * Sharing your private information

> * Making purchases on your behalf

> * Taking actions you never intended

This should really be at the top of the page and not one full screen below the "Try" button.

replies(7): >>45030952 #>>45030955 #>>45031179 #>>45031318 #>>45031361 #>>45031563 #>>45032137 #
mikojan ◴[] No.45031361[source]
Can somebody explain this security problem to me please.

How is there not an actual deterministic, traditionally programmed layer in between the LLM and whatever it wants to do? That layer would show you exactly what changes it is about to apply and ask you for confirmation.

What is the actual problem here?
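
A rough sketch of what such a layer could look like, assuming the agent emits structured tool calls rather than free-form browser control (the Action type and confirm helper below are hypothetical, not anything Claude for Chrome actually exposes):

    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str     # e.g. "click", "fill_form", "purchase"
        target: str   # human-readable description of the element or page
        detail: str   # what would be sent, changed, or spent

    def confirm(action: Action) -> bool:
        # Deterministic gate: show the proposed change and require an
        # explicit yes before anything touches the browser.
        print(f"The agent wants to {action.kind} -> {action.target}: {action.detail}")
        return input("Allow this action? [y/N] ").strip().lower() == "y"

    def execute(action: Action) -> None:
        if not confirm(action):
            print("Blocked.")
            return
        # ...hand the approved action to the browser here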

replies(2): >>45033117 #>>45037168 #
1. raincole ◴[] No.45033117[source]
How are you going to present this information to users? I mean average users, not programmers.

LLM: I'm going to call the click event on: (spewing out a bunch of raw DOM).

Not like this, right?

If you can design an 'actual deterministic traditionally programmed layer' that presents what's actually happening at a lower level in a user-friendly way and makes it work for arbitrary websites, you'll get a Turing Award. Actually, a Turing Award is downplaying your achievement. You'll be remembered as the person who invented (not even 'reinvented') the web.
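
To make the point concrete, here is the kind of raw tool call such a layer would actually have to summarize (a hypothetical example; the field names are invented):

    # What the deterministic layer sees for a single step:
    raw_action = {
        "tool": "browser.click",
        "selector": "div#root > main > form > div:nth-child(7) > button.btn-primary",
        "page_url": "https://example.com/checkout",
    }
    # Turning that into "place a $49.99 order with your saved card" requires
    # understanding the page's semantics -- which is the job we delegated
    # to the LLM in the first place.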