←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 3 comments | | HN request time: 0.239s | source
Show context
medhir ◴[] No.45031022[source]
Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.

I’m sure there’s exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc just seems foolish.

Anyone having success with local LLMs and browser use?

replies(3): >>45031462 #>>45031772 #>>45033430 #
1. onesociety2022 ◴[] No.45033430[source]
The primary risk with these browser agents is prompt injection attacks. Running it locally doesn't help you in that regard.
replies(2): >>45034703 #>>45034985 #
2. innagadadavida ◴[] No.45034703[source]
If each LLM sessions is linked to the domain and restricted just like how we restrict cross domain communication, this problem can be solved? We can have a completely isolated LLM context per each domain.
3. medhir ◴[] No.45034985[source]
True, I wasn’t thinking very deeply when I wrote this comment… local models indeed are prone to the same exploits.

Regardless, giving a remote API access to a browser seems insane. Having had a chance to reflect, I’d be very wary of providing any LLM access to take actions with my personal computer. Sandbox the hell out of these things.