Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.
I’m sure there are exploits that could be embedded in a model that make running locally risky as well, but handing browser access to a remote model from Anthropic, OpenAI, etc. just seems foolish.
Anyone having success with local LLMs and browser use?
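For context, this is roughly the setup I have in mind: a local model behind Ollama's /api/chat endpoint suggests the next navigation step, and Playwright does the actual browsing. A minimal sketch, assuming Ollama is running on its default port; the model name, prompt, and "treat the reply as a URL" step are placeholders, not a real agent loop.

```python
# Sketch: local model (via Ollama's /api/chat) picks the next page, Playwright fetches it.
# Assumptions: Ollama on localhost:11434, "llama3.1" pulled locally, Playwright installed.
import requests
from playwright.sync_api import sync_playwright

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint
MODEL = "llama3.1"  # placeholder local model name

def ask_model(page_text: str) -> str:
    """Ask the local model for a single next action given the current page text."""
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": "Given this page text, reply with ONE url to visit next, "
                       "or DONE if finished:\n\n" + page_text[:4000],
        }],
        "stream": False,
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"].strip()

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    for _ in range(5):  # hard cap on steps so the model can't wander forever
        action = ask_model(page.inner_text("body"))
        if action.upper().startswith("DONE"):
            break
        page.goto(action)  # naive: trusts the model's reply to be a plain URL
    browser.close()
```

Nothing leaves the machine except the page fetches themselves, which is the whole point for me.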
replies(3):