We know there's a lot of noise about different browser agents. If you've tried any of them, you know they're slow, expensive, and inconsistent. That's why we built an agent specifically for running test cases and optimized it just for that:
- Pure vision instead of an error-prone "set-of-marks" system (the colorful boxes you see in browser-use, for example); see the sketch after this list
- A tiny VLM (Moondream) instead of OpenAI/Anthropic computer use, for dramatically faster and cheaper execution
- Two agents: one that plans and adapts test cases, and one that executes them quickly and consistently
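To make the "pure vision" point concrete, here's a rough sketch of what vision-only grounding looks like: hand the model a raw screenshot and a natural-language description, get back a coordinate to click. No DOM access, no annotated overlay. The endpoint path and response shape below are assumptions for illustration, not our exact API:

```typescript
// Rough sketch: ask a locally served Moondream to locate an element in a raw
// screenshot. The endpoint path and response shape are assumptions here.
async function locate(screenshotBase64: string, description: string) {
  const res = await fetch("http://localhost:2020/v1/point", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      image_url: `data:image/png;base64,${screenshotBase64}`,
      object: description, // e.g. "the blue 'Submit' button"
    }),
  });
  // Point-style endpoints return normalized coordinates in [0, 1];
  // multiply by the viewport size to get a pixel position to click.
  const { points } = (await res.json()) as { points: { x: number; y: number }[] };
  return points[0];
}
```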
The idea is that the planner builds up a general plan, which the executor runs (sketched below). We can save this plan and re-run it with only the executor for quick, cheap, and consistent runs. When something goes wrong, execution kicks back out to the planner agent, which re-adjusts the test.
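A minimal sketch of that plan / execute / re-plan loop. The names (Planner, Executor, the Plan shape) are illustrative, not the actual API of the library:

```typescript
// Minimal sketch of the plan -> execute -> re-plan loop described above.
interface Step {
  action: "click" | "type" | "scroll";
  target: string;   // natural-language description of the element
  value?: string;   // text to type, if any
}
interface Plan { steps: Step[] }

interface Planner {
  buildPlan(testCase: string): Promise<Plan>;          // big LLM: turns a test case into steps
  repair(plan: Plan, failedAt: number): Promise<Plan>; // re-plans around the failed step
}
interface Executor {
  run(step: Step): Promise<{ ok: boolean }>;           // tiny VLM: grounds and performs one step
}

// Runs a (possibly cached) plan with only the executor; falls back to the
// planner when a step fails, then returns the repaired plan for next time.
async function runTest(
  testCase: string,
  planner: Planner,
  executor: Executor,
  cached?: Plan,
): Promise<Plan> {
  let plan = cached ?? (await planner.buildPlan(testCase));
  let repairs = 0;
  for (let i = 0; i < plan.steps.length; i++) {
    const { ok } = await executor.run(plan.steps[i]);
    if (!ok) {
      if (++repairs > 3) throw new Error(`step ${i} failed after repeated re-planning`);
      plan = await planner.repair(plan, i);
      i = -1; // restart the repaired plan from the top (a real runner might resume instead)
    }
  }
  return plan; // persist this for future executor-only runs
}
```

The cached-plan path is what keeps repeat runs fast and cheap: the expensive planner model is only invoked on the first run or when the UI drifts.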
It’s completely open source. Would love to have more people try it out and tell us how we can make it great.
Of course this would be even more valuable for testing your MCP or A2A services, but it could be useful for UI as well. Or it could be useless. It would be interesting to see whether the same UI changes affect human and AI success rates in the same way.
And if not, could an AI be trained to correlate more closely with human behavior? If possible, that could be a good selling point.
But what determines that the UI has changed for a specific URL? Is that your software, independent of the planner LLM, or do you require the visual LLM to make a determination of change?
You should also stop saying 100% open source when test plan generation and execution depend on non-open-source AI components. It just doesn't make sense.
We say 100% open source because all of our code (test runner and AI agents) is completely open source. It's also entirely possible to run the whole stack on OSS: you can configure an open-source planner LLM, and Moondream is open source. You could even run it all locally if you have solid hardware.
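For anyone curious, a fully local setup would look roughly like this. The key names are illustrative, not our exact config schema; the ports shown are common defaults (Ollama's OpenAI-compatible endpoint at 11434, Moondream Station at 2020), but check your own setup:

```typescript
// Hypothetical config for a fully open-source stack; key names are
// illustrative, not the project's exact schema.
const config = {
  planner: {
    // Any OpenAI-compatible local server can act as the planner LLM.
    baseUrl: "http://localhost:11434/v1", // e.g. Ollama
    model: "llama3.1:70b",                // any capable open-weights model
  },
  executor: {
    // Moondream served locally for vision grounding.
    baseUrl: "http://localhost:2020/v1",
    model: "moondream",
  },
};
```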