1. Why not ask a model if inputs (e.g. stuff coming from the browser) contains a prompt injection attack? Maybe comparing input to the agent's planned actions and seeing if they match? (if so, that seems suspicious)
2. It seems browser use agents try to read the DOM or use images, which eats a lot of context. What's the reason not to use accessibility features instead first (other than websites that do not have good accessibility design)? Seems a screen reader and an LLM have a lot in common, needing to pull relevant information and actions on a webpage via text