
Claude for Chrome

(www.anthropic.com)
795 points by davidbarker
aliljet No.45030980
Having played a LOT with Browser Use, Playwright, and Puppeteer (all via MCP integrations and Pythonic test cases), I've found it incredibly clear how quickly Claude (in particular) loses the thread once it starts interacting with the browser. A TON of visual and contextual information simply vanishes as you begin to do anything particularly complex. In my experience, repeatedly forcing new context windows between screenshots has dramatically improved Claude's ability to perform complex interactions in the browser, but it's all been pretty weak. A rough sketch of that pattern follows.
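
For context, the pattern can be sketched roughly like this, assuming Playwright's Python API and the Anthropic Python SDK; the model id, prompt wording, and URL below are placeholders, not anything from a real setup:

    import base64

    import anthropic
    from playwright.sync_api import sync_playwright

    MODEL = "claude-3-5-sonnet-latest"  # placeholder model id
    client = anthropic.Anthropic()      # reads ANTHROPIC_API_KEY from the env

    def next_action(goal: str, screenshot_png: bytes) -> str:
        # A brand-new single-turn thread per step: earlier screenshots
        # never accumulate in the context window.
        image_b64 = base64.b64encode(screenshot_png).decode()
        response = client.messages.create(
            model=MODEL,
            max_tokens=512,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image",
                     "source": {"type": "base64",
                                "media_type": "image/png",
                                "data": image_b64}},
                    {"type": "text",
                     "text": f"Goal: {goal}\nDescribe the single next UI action."},
                ],
            }],
        )
        return response.content[0].text

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://example.com/signup")  # hypothetical target page
        # A real agent loops: screenshot -> fresh one-shot call -> act.
        print(next_action("Complete the signup form", page.screenshot()))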

When Claude can operate in the browser and effectively understand 5 radio buttons in a row, I think we'll have made real progress. So far, I've not seen that eval.
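
For what it's worth, the eval itself is easy to sketch; in this hypothetical harness, run_agent stands in for whatever actually drives Claude (MCP, computer use, etc.):

    from playwright.sync_api import sync_playwright

    # Five radio buttons in a row, the scenario described above.
    FIVE_RADIOS = "<form>" + "".join(
        f'<label><input type="radio" name="choice" value="opt{i}"> Option {i}</label>'
        for i in range(1, 6)
    ) + "</form>"

    def run_eval(run_agent, target: int) -> bool:
        """Return True if the agent checked the requested radio button."""
        with sync_playwright() as p:
            page = p.chromium.launch().new_page()
            page.set_content(FIVE_RADIOS)
            run_agent(page, f"Select Option {target}")  # hypothetical hook
            return page.locator(f'input[value="opt{target}"]').is_checked()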

replies(7): >>45031153 >>45031164 >>45031750 >>45032251 >>45033961 >>45034552 >>45036980
jascha_eng No.45032251
I have built a custom "deep research" tool internally that uses Puppeteer to find business information, tech stack details, and other facts about a company for our sales team.

My experience was that giving the LLM a very limited set of tools and no screenshots worked pretty damn well. To be fair, for my use case I don't need more interactivity than navigate_to_url and click_link, with each tool returning a text version of the page and the clickable options as an array.
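
As a rough sketch of that two-tool design (my version uses Puppeteer; this one uses Playwright's Python API, and everything beyond the two tool names, such as the truncation limits and link indexing, is illustrative):

    from playwright.sync_api import Page, sync_playwright

    def _snapshot(page: Page) -> dict:
        # Text version of the page plus clickable options as an indexed array.
        links = [
            {"index": i, "text": a.inner_text().strip()[:80]}
            for i, a in enumerate(page.locator("a").all())
        ]
        return {"text": page.inner_text("body")[:8000],  # cap token usage
                "links": links}

    def navigate_to_url(page: Page, url: str) -> dict:
        page.goto(url)
        return _snapshot(page)

    def click_link(page: Page, index: int) -> dict:
        page.locator("a").nth(index).click()
        page.wait_for_load_state()
        return _snapshot(page)

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        result = navigate_to_url(page, "https://example.com")
        print(result["links"][:5])  # the LLM picks an index, then click_link()

The indexed array means the model only has to answer with a number, which keeps its outputs trivial to validate.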

It is very capable of answering our basic questions, although it is powered by GPT-5 rather than Claude now.

replies(3): >>45032764 >>45033355 >>45033832
felarof No.45033832
This is super cool!

If a "deep research"-style agent were available directly in your browser, would that be useful?

We are building this at BrowserOS!