←back to thread

645 points helloplanets | 1 comments | | HN request time: 0s | source
Show context
ec109685 ◴[] No.45005397[source]
It’s obviously fundamentally unsafe when Google, OpenAI and Anthropic haven’t released the same feature and instead use a locked down VM with no cookies to browse the web.

LLM within a browser that can view data across tabs is the ultimate “lethal trifecta”.

Earlier discussion: https://news.ycombinator.com/item?id=44847933

It’s interesting that in Brave’s post describing this exploit, they didn’t reach the fundamental conclusion this is a bad idea: https://brave.com/blog/comet-prompt-injection/

Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough. The only good mitigation they mention is that the agent should drop privileges, but it’s just as easy to hit an attacker controlled image url to leak data as it is to send an email.

replies(7): >>45005444 #>>45005853 #>>45006130 #>>45006210 #>>45006263 #>>45006384 #>>45006571 #
skaul ◴[] No.45006130[source]
(I lead privacy at Brave and am one of the authors)

> Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough.

No, we never claimed or believe that those will be enough. Those are just easy things that browser vendors should be doing, and would have prevented this simple attack. These are necessary, not sufficient.

replies(4): >>45006255 #>>45006329 #>>45006467 #>>45006601 #
cowboylowrez ◴[] No.45006255[source]
what you're saying is that the described step, "model alignment" is necessary even though it will fail a percentage of the time. whenever I see something that is "necessary" but doesn't have like a dozen 9's for reliability against failure or something well lets make that not necessary then. whadya say?
replies(1): >>45006309 #
skaul ◴[] No.45006309[source]
That's not how defense-in-depth works. If a security mitigation catches 90% of the "easy" attacks, that's worth doing, especially when trying to give users an extremely powerful capability. It just shouldn't be the only security measure you're taking.
replies(2): >>45006450 #>>45006710 #
cowboylowrez ◴[] No.45006450[source]
sure sure, except llms. I mean its valid and all bringing up tried and true maxims that we all should know regarding software, but whens the last time the ssl guys were happy with a fix that "has a chance of working, but a chance of not working."

defense in depth is to prevent one layer failure from getting to the next, you know, exploit chains etc. Failure in a layer is a failure, not statistically expected behavior. we fix bugs. what we need to do is treat llms as COMPLETELY UNTRUSTED user input as has been pointed out here and elsewhere time and again.

you reply to me like I need to be lectured, so consider me a dumb student in your security class. what am I missing here?

replies(3): >>45006676 #>>45006878 #>>45008574 #
ModernMech ◴[] No.45006878[source]
> what am I missing here?

I guess what I don't understand is that failure is always expected because nothing is perfect, so why isn't the chance of failure modeled and accounted for? Obviously you fix bugs, but how many more bugs are in there you haven't fixed? To me, "we fix bugs" sounds the same as "we ship systems with unknown vulnerabilities".

What's the difference between a purportedly "secure" feature with unknown, unpatched bugs; and an admittedly insecure feature whose failure modes are accounted for through system design taking that insecurity into account, rather than pretending all is well until there's a problem that surfaces due to unknown exploits?

replies(2): >>45007387 #>>45008668 #
1. wat10000 ◴[] No.45008668[source]
The “secure” system with unknown bugs can fix them once they become known. The system that’s insecure by design and tries to mitigate it can’t be fixed, by design.

There might be a zero-day bug in my browser which allows an attacker to steal my banking info and steal my money. I’m not very worried about this because I know that if such a thing is discovered, Apple is going to fix it quickly. And it’s going to be such a big deal that it’s going to make the news, so I’ll know about it and I can make an informed decision about what to do while I wait for that fix.

Computer security is fundamentally about separating code from data. Security vulnerabilities are almost always bugs that break through that separation. It may be direct, like with a buffer overflow into executable memory or a SQL injection, or it may be indirect with ROP and such. But one way or another, it comes down to getting the target to run code it’s not supposed to.

LLMs are fundamentally designed such that there is no barrier between the two. There’s no code over here and data over there. The instructions are inherently part of the data.