Most active commenters
  • strange_quark(3)
  • captainkrtek(3)

←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 50 comments | | HN request time: 0.001s | source | bottom
Show context
rustc ◴[] No.45030857[source]
> Malicious actors can hide instructions in websites, emails, and documents that trick AI into taking harmful actions without your knowledge, including:

> * Accessing your accounts or files

> * Sharing your private information

> * Making purchases on your behalf

> * Taking actions you never intended

This should really be at the top of the page and not one full screen below the "Try" button.

replies(7): >>45030952 #>>45030955 #>>45031179 #>>45031318 #>>45031361 #>>45031563 #>>45032137 #
1. strange_quark ◴[] No.45030955[source]
It's insane how we're throwing out decades of security research because it's slightly annoying to have to write your own emails.
replies(14): >>45030996 #>>45031030 #>>45031080 #>>45031091 #>>45031141 #>>45031161 #>>45031177 #>>45031201 #>>45031273 #>>45031319 #>>45031527 #>>45031531 #>>45031599 #>>45033910 #
2. captainkrtek ◴[] No.45030996[source]
The absolute disregard is astonishing. How big of an incident will it take for any restraint to exist? Folks on HN are at least somewhat informed of the risks and can make choices, but the typical user still expects some modicum of security when installing an app or using a service.
replies(1): >>45031512 #
3. echelon ◴[] No.45031030[source]
When we felt we were getting close to flight, people were jumping off buildings in wing suits.

And then, the Wright Bros. cracked the problem.

Rocketry, Apollo...

Same thing here. And it's bound to have the same consequences, both good and bad. Let's not forget how dangerous the early web was with all of the random downloadables and popups that installed exe files.

Evolution finds a way, but it leaves a mountain of bodies in the wake.

replies(3): >>45031159 #>>45031267 #>>45031383 #
4. rvz ◴[] No.45031080[source]
Then it's a great time to be a LLM security researcher then. Think about all the issues that attackers can do with these LLMs in the browser:

* Mislead agents to paying for goods with the wrong address

* Crypto wallets drained because the agent was told to send it to another wallet but it sent it to the wrong one.

* Account takeover via summarization, because a hidden comment told the agent additional hidden instructions.

* Sending your account details and passwords to another email address and telling the agent that the email was [company name] customer service.

All via prompt injection alone.

replies(2): >>45031379 #>>45031699 #
5. whatever1 ◴[] No.45031091[source]
Also IP and copyright is apparently no biggie. Sorry Aaron.
replies(2): >>45031431 #>>45031654 #
6. chankstein38 ◴[] No.45031141[source]
This comment kind of boils down the entire AI hype bubble into one succinct sentence and I appreciate it! Well said! You could basically put anything instead of "security" and find the same.
7. strange_quark ◴[] No.45031159[source]
> When we felt we were getting close to flight, people were jumping off buildings in wing suits. And then, the Wright Bros. cracked the problem.

Yeah they cracked the problem with a completely different technology. Letting LLMs do things in a browser autonomously is insane.

> Let's not forget how dangerous the early web was with all of the random downloadables and popups that installed exe files.

And now we are unwinding all of those mitigations all in the name of not having to write your own emails.

replies(1): >>45031376 #
8. ACCount37 ◴[] No.45031161[source]
Nothing new. We've allowed humans to use computers for ages.

Security-wise, this is closer to "human substitute" than it is to a "browser substitute". With all the issues of letting a random human have access to critical systems, on top of all the early AI tech jank. We've automated PEBKAC.

replies(1): >>45031290 #
9. jjice ◴[] No.45031177[source]
My theory is that the average user of an LLM is close enough to the average user of a computer and I've found that the general consensus is that security practices are "annoying" and "get in the way". The same kind of user who hates anything MFA and writes their password on a sticky note that they stick to their monitor in the office.
replies(2): >>45031370 #>>45032082 #
10. Jare ◴[] No.45031267[source]
I'm ok with individual pioneers taking high but informed risks in the name of progress. But this sounds like companies putting millions of users in wing suits instead.
replies(1): >>45031328 #
11. guelo ◴[] No.45031273[source]
No, it's because big tech has taken control of our data and locked it all down so we don't have control over it. AI browser automation is going to blow open all these militarized containers that use our own data and networks against us with the fig leaf of supposed security. I'm looking forward to the revival of personal data mashups like the old Yahoo Pipes.
replies(1): >>45031302 #
12. latexr ◴[] No.45031290[source]
I don’t know any human who’ll transfer their money or send their private information to a malicious third party because invisible text on a webpage says so.
replies(2): >>45031419 #>>45031487 #
13. pton_xd ◴[] No.45031302[source]
> AI browser automation is going to blow open all these militarized containers that use our own data against us.

I'm not sure what you mean by this. Do you mean that AI browser automation is going to give us back control over our data? How?

Aren't you starting a remote desktop session with Anthropic everytime you open your browser?

replies(2): >>45031338 #>>45031402 #
14. parhamn ◴[] No.45031319[source]
With regards to llm injection, we sorta need the cat and mouse games to play out a bit, no? I have my concerns but I'm not ready to throw out the baby with the bathwater. You could never release an OS if "no zero days" was a requirement. Every piece of software we use has and will have its vulnerabilities (see Apple's recent RCE), we play the arms race and things look asymptotically fine.

This seems to be the case in llms too. They're getting better and better (with a lot of research) at avoiding doing the bad things. I don't see why its fundamentally intractable to fence system/user/assistant/tool messages to prevent steering from non-trusted inputs, and building new fences for cases we want the steering.

Why is this piece of software particularly different?

replies(3): >>45031350 #>>45031401 #>>45031412 #
15. vunderba ◴[] No.45031328{3}[source]
Was just coming here to say that. Anyone who's familiar with the Mercury, Gemini and Apollo missions wouldn't characterize it as a technological evolution that left mountains of bodies in its wake. Yes, there were casualties (Apollo 1) but they were relatively minimal.
16. rvz ◴[] No.45031338{3}[source]
> Do you mean that AI browser automation is going to give us back control over our data? How?

Narrator: It won't.

17. freeone3000 ◴[] No.45031350[source]
Because the flaws are glaring, obvious, and easily avoidable.
18. woodrowbarlow ◴[] No.45031370[source]
it has been revelatory to me to realize that this is how most people want to interact with computers.

i want a computer to be predictable and repeatable. sometimes, i experience behavior that is surprising. usually this is an indication that my mental model does not match the computer model. in these cases, i investigate and update my mental model to match the computer.

most people are not willing to adjust their mental model. they want the machine to understand what they mean, and they're willing to risk some degree of lossy mis-communication which also corrupts repeatability.

maybe i'm naive but it wasn't until recently that i realized predictable determinism isn't actually something that people universally want from their personal computers.

replies(3): >>45031481 #>>45031645 #>>45031665 #
19. dingnuts ◴[] No.45031376{3}[source]
you also have to be a real asshole to send an email written by AI, at least if you speak the language fluently. If you can't take the time to choose your words what gives you the right to expect me to spend my precious life reading them?

if you send AI generated emails, please punch yourself in the face

replies(1): >>45031571 #
20. ◴[] No.45031379[source]
21. wrs ◴[] No.45031383[source]
The problem is exactly that we seem to have forgotten how dangerous the early web was and are blithely reproducing that history.
22. mynameismon ◴[] No.45031401[source]
At the same time, manufacturers do not release operating systems with extremely obvious flaws that have (atleast so far) no reasonable guardrails and pretend that they are the next messiah.
23. guelo ◴[] No.45031402{3}[source]
There's a million ways. Just off the top of my head: unified calendars, contacts and messaging across Google, Facebook, Microsoft, Apple, etc. The agent figures out which platform to go to and sends the message without you caring about the underlying platform.
24. asgraham ◴[] No.45031412[source]
First of all, you absolutely cannot release an OS with a known zero day. IANAL but that feels a lot like negligence that creates liability.

But even ignoring that, the gulf between zero days and plain-text LLM prompt injection is miles wide.

Zero days require intensive research to find, and expertise to exploit.

LLM prompt injections obviously exist a priori, and exploiting them requires only the ability to write.

replies(2): >>45031976 #>>45037142 #
25. captainkrtek ◴[] No.45031419{3}[source]
Yeah this isn’t a substitute, it’s automation taking action based on inputs the user may not even see, and doing it so fast without the likelihood a user would intervene.

If it’s a substitute its no better than trusting someone with the keys to your house, only for them to be easily instructed to rob your house by a 3rd party.

replies(1): >>45031514 #
26. mdaniel ◴[] No.45031431[source]
You left off the important qualifier: for corporations with monster legal teams. For people, different rules apply
27. mywacaday ◴[] No.45031481{3}[source]
I think most people don't want to interact with computers and people will use anything that reduces the amount of time spent and will be be embraced en-mass regardless of security or privacy issues.
28. ACCount37 ◴[] No.45031487{3}[source]
The only weird thing is the "invisible" part. The rest is consistent with known user behavior.
29. goosejuice ◴[] No.45031512[source]
A typical user also happily gives away all their personal information for free just to scroll through cat videos or see what % irish they are.

Even the HN crowd aimlessly runs curl | sh, npm i -g, and rando browser ext.

I agree, it's ridiculous but this isn't anything new.

30. rustc ◴[] No.45031514{4}[source]
This is like `curl | bash` but you automatically execute the code on every webpage you visit with full access to your browser.
replies(1): >>45031614 #
31. bbarnett ◴[] No.45031527[source]
I can accept a bit of form-letter from help desks, or in certain business cases. And the same for crafting a generic, informative letter being sent to thousands.

But as soon it gets one on one, the use of AI should almost be a crime. It certainly should be a social taboo. It's almost akin to talking to a person, one on one, and discovering they have a hidden earpiece, and are being prompted on how to respond.

And if I send an email to an employee, or conversely even the boss of a company I work for, I won't abide someone pretending to reply, but instead pasting junk from an AI. Ridiculous.

There isn't enough context in the world, to enable an AI to respond with clarity and historical knowledge, to such emails. People's value has to do as much with their institutional knowledge, shared corporate experiences, and personal background, not genericized AI responses.

It's kinda sad to come to a place, where you begin to think the Unibomber was right. (Though of course, his methods were wrong)

edit:

I've been hit by some downvotes. I've noticed that some portion of HN is exceptionally AI pro, but I suspect instead it may have something to do with my Unabomber comment.

For context, at least what I gathered from his manifesto, there was a deep distrust of machines, and how they were interfering with human communication and happiness.

Fast forward to social media, mobile phones, AI, and more... and he seems to have been on to something.

From wikipedia:

"He wrote that technology has had a destabilizing effect on society, has made life unfulfilling, and has caused widespread psychological suffering."

Again, clearly his methods were wrong. Yet I see the degradation of US politics into the most simplistic, team-centric, childish arguments... all best able to spread hate, anger, and rage on social media. I see people, especially youth deeply unhappy from their exposure to social media. I see people spending more time with an electronic box in their hand, than with fellow humans.

We always say that we should approach new technology with open eyes, but we seldom mean this about examining negatives. And as a society we've ignored warnings, and negatives with social media, with phones, and we are absolutely not better off as a result.

So perhaps we should use those lessons, and try to ensure that AI is a plus, not a minus in this new world?

For me, replacing intimate human communication with AI, replacing one-on-one conversations with the humans we work with, play with, are friends with, with AI? That's sad. So very, very, very sad.

Once, many years ago a friend of mine was upset. A conservative politician was going door to door, trying to get elected. This politician was railing against the fact that there was a park down the street, paid for by the city. He was upset that taxes paid for it, and that the city paid to keep it up.

Sure, this was true, but my friend after said to me "We're trying to have a society here!".

And I think that's part of what bugs me about AI. We're trying to have a society here!, and part of that is communicating with each other.

32. herval ◴[] No.45031531[source]
while at the same time talking nonstop about how "AI alignment" and "AI safety" are extremely important
replies(1): >>45031606 #
33. southwindcg ◴[] No.45031571{4}[source]
Agree, completely.

https://marketoonist.com/wp-content/uploads/2023/03/230327.n...

34. falcor84 ◴[] No.45031599[source]
> it's slightly annoying to have to write your own emails.

I find that to be a massive understatement. The amount of time, effort and emotional anguish that people expend on handling emails is astronomical. According to various estimates, email-handling takes somewhere around 25% of the work time of an average knowledge worker, going up to over 50% for some roles, and that most people check and reply to emails on evenings and over weekends at least occasionally.

I'm not sure it's possible, but it is my dream that I'd have a capable AI "secretary" that would process my email and respond in my tone based on my daily agenda, only interrupting for exceptional situations where I actually need to make a choice, or to pen a new idea to further my agenda.

replies(4): >>45031702 #>>45032499 #>>45043552 #>>45047715 #
35. strange_quark ◴[] No.45031606[source]
Anthropic is the worst about this. Every product release they have is like "Here's 10 issues we found with this model, we tried to mitigate, but only got 80% of the way there. We think it's important to still release anyways, and this is definitely not profit motivated." I think it's because Anthropic is run by effective altruism AI doomers and operates as an insular cult.
36. captainkrtek ◴[] No.45031614{5}[source]
Basically undoing years of effort to isolate web properties from affecting other properties.
37. williamscales ◴[] No.45031645{3}[source]
I think most people want computers to be predictable and repeatable _at a level that makes sense to them_. That's going to look different for non-programmers.

Having worked helping "average" users, my perception is that there is often no mental model at any level, let alone anywhere close to what HN folks have. Developing that model is something that most people just don't do in the first place. I think this is mostly because they have never really had the opportunity to and are more interested in getting things done quickly.

When I explain things like MFA in terms of why they are valuable, most folks I've helped see usefulness there and are willing to learn. The user experience is not close to universally seamless however which is a big hangup.

38. renewiltord ◴[] No.45031654[source]
Funny. According to you the only way to immortalize Aaron Schwartz is to entrench strongly the things he fought against. He died for a cause so it would be bad for the cause to win. Haha.
replies(1): >>45032275 #
39. brendoelfrendo ◴[] No.45031665{3}[source]
I think you're right, but I think the mental model of the average computer user does not assume that the computer is predictable and repeatable. Most conventional software will behave in the same way, every time, if you perform the same operations, but I think the average user views computers as black boxes that are fundamentally unpredictable. Complex tasks will have a learning curve, and there may be multiple paths that arrive at the same end result; these paths can also be changed at the will of the person who made the software, which is probably something the average user is used to in our days of auto-updating app stores, OS upgrades, and cloud services. The computer is still deterministic, but it doesn't feel that way when the interface is constantly shifting and all of the "complicated" bits that expose what the software is actually doing are obfuscated or removed (for user convenience, of course).
40. latexr ◴[] No.45031699[source]
> Then it's a great time to be a LLM security researcher then.

This reminded me of Jon Stewart’s Crossfire interview where they asked him “which candidate do you supposed would provide you better material if he won?” because he has “a stake in it that way, not just as citizen but as a professional comic”. Stewart answered he held the citizen part to be much more important.

https://www.youtube.com/watch?v=aFQFB5YpDZE&t=599s

I mean, yes, it’s “probably a great time to be an LLM security researcher” from a business standpoint, but it would be preferable if that didn’t have to be a thing.

41. Loic ◴[] No.45031702[source]
I am French living in Germany, the amount of time Claude saves me every week by reviewing the emails I send to contractors, customers is incredible. It is very hard to write good idiomatic German while ensuring no grammar and spelling mistakes.

I second you, just for that, I would continue paying for a subscription, that I can also use it for coding, toying with ideas, quickly look for information, extract information out of documents, everything out of a simple chat interface is incredible. I am old, but I live in the future now :-)

42. warkdarrior ◴[] No.45031976{3}[source]
> you absolutely cannot release an OS with a known zero day. IANAL but that feels a lot like negligence that creates liability.

You would think Microsoft, Apple, and Linux would have been sued like crazy by now over 0-days.

43. TeMPOraL ◴[] No.45032082[source]
> the general consensus is that security practices are "annoying" and "get in the way".

Because they usually are and they do.

> The same kind of user who hates anything MFA and writes their password on a sticky note that they stick to their monitor in the office.

This kind of user has a better feel for threat landscape than most armchair infosec specialists.

People go around security measures not out of some ill will or stupidity, but because those measures do not recognize the reality of the situation and tasks at hand.

With keeping passwords in the open or sharing them, this is common because most computer systems don't support delegation of authority - in fact, the very idea that I might want someone to do something in my name, is alien to many security people, and generally not supported explicitly, except for few cases around cloud computing. But delegation of authority is very common thing done by everyday people on many occasions. In real life, it's simple and natural to do. In digital world? Giving someone else your password is the only direct way to do this.

44. whatever1 ◴[] No.45032275{3}[source]
I don’t care about his cause. I care about the fact that I don’t see Altman or Dario being prosecuted and threatened with jail time.
replies(1): >>45032410 #
45. renewiltord ◴[] No.45032410{4}[source]
Yeah, things have changed. Turing was chemically castrated. Some do argue that gay people should be so treated today but I disagree.
46. edaemon ◴[] No.45032499[source]
Email is just communication. It seems appropriate that knowledge workers spend a lot of time communicating.
47. SchemaLoad ◴[] No.45033910[source]
What I suspect happens is that Apple ensures that apps can not be interacted with automatically, and anything sensitive like banking moves away from websites and purely app only where the compute environment integrity is verified and bot free.
48. knowannoes ◴[] No.45037142{3}[source]
>First of all, you absolutely cannot release an OS with a known zero day.

There is no such thing as a 'known zero day' vulnerability.

Zero day vulnerability means it is a newly discovered one. Today. The day zero.

49. polynomial ◴[] No.45043552[source]
Do you have any citations for various estimates? This is super interesting to me.
50. xenobeb ◴[] No.45047715[source]
At my job it takes about 50% of my time. I love LLMs but I don't see how they can possible help me with email.

I would have to write a prompt that is almost exactly the same as writing the email. It is not like I am writing a fictional story that the LLM could somehow compress the main ideas. I feel like the LLM would have to be able to read my mind to properly respond to my inbox.