←back to thread

Claude for Chrome

(www.anthropic.com)
795 points davidbarker | 1 comments | | HN request time: 0.228s | source
Show context
medhir ◴[] No.45031022[source]
Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.

I’m sure there’s exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc just seems foolish.

Anyone having success with local LLMs and browser use?

replies(3): >>45031462 #>>45031772 #>>45033430 #
1. rossant ◴[] No.45031772[source]
I imagine local LLMs are almost as dangerous as remote ones as they're prone to the same type of attacks.