
Claude for Chrome

(www.anthropic.com)
795 points | davidbarker | 2 comments
rustc No.45030857
> Malicious actors can hide instructions in websites, emails, and documents that trick AI into taking harmful actions without your knowledge, including:

> * Accessing your accounts or files

> * Sharing your private information

> * Making purchases on your behalf

> * Taking actions you never intended

This should really be at the top of the page and not one full screen below the "Try" button.

strange_quark No.45030955
It's insane how we're throwing out decades of security research because it's slightly annoying to have to write your own emails.
herval No.45031531
while at the same time talking nonstop about how "AI alignment" and "AI safety" are extremely important
strange_quark No.45031606
Anthropic is the worst about this. Every product release of theirs is like "Here's 10 issues we found with this model; we tried to mitigate them, but only got 80% of the way there. We think it's important to release anyway, and this is definitely not profit-motivated." I think it's because Anthropic is run by effective altruism AI doomers and operates as an insular cult.