←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 6 comments | | HN request time: 0.207s | source | bottom
1. medhir ◴[] No.45031022[source]
Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.

I’m sure there’s exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc just seems foolish.

Anyone having success with local LLMs and browser use?

replies(3): >>45031462 #>>45031772 #>>45033430 #
2. alienbaby ◴[] No.45031462[source]
I'm not sure how running inference locally will make any difference whatsoever? or do you also mean hosting the MCP tools it has access to?
3. rossant ◴[] No.45031772[source]
I imagine local LLMs are almost as dangerous as remote ones as they're prone to the same type of attacks.
4. onesociety2022 ◴[] No.45033430[source]
The primary risk with these browser agents is prompt injection attacks. Running it locally doesn't help you in that regard.
replies(2): >>45034703 #>>45034985 #
5. innagadadavida ◴[] No.45034703[source]
If each LLM sessions is linked to the domain and restricted just like how we restrict cross domain communication, this problem can be solved? We can have a completely isolated LLM context per each domain.
6. medhir ◴[] No.45034985[source]
True, I wasn’t thinking very deeply when I wrote this comment… local models indeed are prone to the same exploits.

Regardless, giving a remote API access to a browser seems insane. Having had a chance to reflect, I’d be very wary of providing any LLM access to take actions with my personal computer. Sandbox the hell out of these things.