Claude for Chrome (www.anthropic.com)
795 points by davidbarker | 17 comments
1. stusmall No.45033056
It's wild to see an AI company put out a press release that is basically "hey, you kids wanna see a loaded gun?" Normally all their public comms are so full of optimism and salesmanship around the potential. They are fully aware of how dangerous this is.
replies(8): >>45033105 #>>45033148 #>>45033197 #>>45033279 #>>45033315 #>>45033347 #>>45033852 #>>45037231 #
2. No.45033105
3. raincole No.45033148
I think if it were made by OpenAI the presentation would be flowery and rosy.
4. erickhill No.45033197
It seems to be trying to explain why the rollout is going to be very focused and rather small at first, so they can build the proper safeguards.

But it is a surprising read; you're absolutely right.

replies(2): >>45033224 #>>45033297 #
5. hsbauauvhabzb No.45033224
Safeguards for their profits, not for the consumer or the websites they terrorize.
6. asdff No.45033279
> "We conducted extensive adversarial prompt injection testing, evaluating 123 test cases representing 29 different attack scenarios."

Doesn't this seem like a remarkably small set of tests? And it strikes me as strange that it took this testing to realize that prompt injection, combined with handing the reins to an AI agent, is dangerous. That should have been anticipated while building the tool in the first place, before it even went to their red team.

Move fast and break things, I guess. Only it's the world's largest browser, and the risk of breaking things means financial ruin and/or the end of the internet as we know it as a human-to-human communication tool.

replies(2): >>45033455 #>>45041764 #
7. asdff No.45033297
Letting their beta testers get pwned is an interesting opsec strategy indeed.
8. ankit219 No.45033315
This is what they need for the next generation of models. The key line is:

> We view browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you're looking at, click buttons, and fill forms will make it substantially more useful.

A lot of this can be done by building a bunch of custom environments at training time, but only a limited number of use cases can be handled that way. They don't need all of users' data; they need the kinds of tasks real-world users would actually ask the model to do.

Hence the press release pretty much says: they think it's unsafe, they don't have a clue how to make it safe without trying it out, and they only want a limited number of people trying it for now. Given their stature, it's good to do this publicly instead of how Google does it with trusted testers or OpenAI does it with select customers.

replies(1): >>45033410 #
9. hodgehog11 No.45033347
I noticed this with the OpenAI presentation for GPT-5 too; they just dove straight into some of the less ethical applications (writing a eulogy, medical advice, etc.). But while the OpenAI presentation felt more like kids playing with a loaded gun, this feels more like inevitability: "we're already heading down this path anyway, so it may as well be us that does it right".
10. zaphirplane No.45033410
I don’t get the argument. Why is the loaded footgun in the hands of “select” customers better than in the hands of a self-selecting group of beta testers?
replies(1): >>45033684 #
11. whatevertrevor No.45033455
I wonder how this will even fare in the review process, or if the big AI players will get a free pass here. My intuition says it's a risk that Google/Chrome absolutely don't want to own; it will be interesting to see how "agentic" AI gets deployed in browsers from a liability-fallout perspective.
replies(1): >>45033510 #
12. asdff No.45033510{3}
Probably no liability, considering that's how other phishing attempts are viewed.
replies(1): >>45033694 #
13. ankit219 No.45033684{3}
They are still gating it by use case (I presume). But this way they are not limited to the creativity of a small, hand-picked group of testers, and can test security against a more diverse set of use cases. (I am assuming the trusted testers who work on security would be given access anyway.)
14. whatevertrevor No.45033694{4}
But in other phishing attempts the user actually gives out their password (unintentionally) to an unscrupulous actor. In this case there's a middleman (the AI extension) doing that for you, sometimes without even confirming with you what you want.

I think this is more akin to, say, a theoretical browser not implementing HTTPS properly, so people's credentials/sessions can be stolen with MitM attacks. Clearly the bad behavior is in the toolchain and not the user here, and I'm not sure how much you can wave that away by claiming "we told you it's not fully safe." You can't sell tomatoes that have a 10% chance of giving you food poisoning, even if you declare that chance on the label, you know?

15. SchemaLoad No.45033852
There was an interview with the CEO of one of those AI girlfriend apps where they said something along the lines of "Yeah, if this tech continues along the path we are pushing it towards, that's actually pretty bad for society. Also, our new model is out now, try it out!"

I don't know how these people sleep at night knowing they are actively ruining society.

16. eitland No.45037231
The only precedent I can remember right now (and this was before AI) was when Google launched Google Desktop Search: after the usual click-through EULA there was a separate screen that started with something like "read this very carefully, this is not the normal yadda yadda" and then went on to explain that it would index our personal files.
17. fwip No.45041764
And even after their mitigations on known attacks, the attacks were still successful 11% of the time!

To misquote the IRA: "[Scammers] only need to be lucky once; you need to be lucky every time." Even a 1% chance of getting pwned every time you get sent a malicious email is way too high. Plus the scammers aren't gonna rest on their laurels - they'll be iterating too.