
144 points by anerli | 1 comment

Hey HN, Anders and Tom here. We had a post about our AI test automation framework 2 months ago that got a decent amount of traction (https://news.ycombinator.com/item?id=43796003).

We got some great feedback from the community, with the most positive response being about our vision-first approach used in our browser agent. However, many wanted to use the underlying agent outside the testing domain. So today, we're releasing our fully featured AI browser automation framework.

You can use it to automate tasks on the web, integrate between apps without APIs, extract data, test your web apps, or as a building block for your own browser agents.

Traditionally, browser automation could only be done via the DOM, even though that’s not how humans use browsers. Most browser agents are still stuck in this paradigm. With a vision-first approach, we avoid relying on flaky DOM navigation and perform better on complex interactions found in a broad variety of sites, for example:

- Drag and drop interactions

- Data visualizations, charts, and tables

- Legacy apps with nested iframes

- Canvas- and WebGL-heavy sites (like design tools or photo editing)

- Remote desktops streamed into the browser

To interact accurately with the browser, we use visually grounded models to execute precise actions based on pixel coordinates. The model used by Magnitude must be smart enough to plan out actions but also able to execute them. Not many models are both smart *and* visually grounded. We highly recommend Claude Sonnet 4 for the best performance, but if you prefer open source, we also support Qwen-2.5-VL 72B.
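
For a rough idea of what "visually grounded" means in practice, here is an illustrative sketch (not Magnitude's internals): the model looks at a screenshot, returns pixel coordinates for the target, and those coordinates are executed directly against the page, e.g. via Playwright's mouse API. `locateOnScreen` here is a hypothetical helper standing in for the vision model call.

```ts
import { chromium } from 'playwright';

// Hypothetical helper: sends a screenshot + natural-language target to a
// visually grounded model and gets back pixel coordinates. Illustrative only.
declare function locateOnScreen(
  screenshot: Buffer,
  target: string
): Promise<{ x: number; y: number }>;

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://example.com');

// The model "points" at the element; we act on raw coordinates,
// so no selectors or DOM structure are involved.
const screenshot = await page.screenshot();
const { x, y } = await locateOnScreen(screenshot, 'the "Create issue" button');
await page.mouse.click(x, y);

await browser.close();
```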

Most browser agents never make it to production. This is because of (1) the flaky DOM navigation mentioned above, and (2) the lack of control most browser agents offer. The dominant paradigm is that you give the agent a high-level task + tools and hope for the best. This quickly falls apart for production automations that need to be reliable and specific. With Magnitude, you have fine-grained control over the agent with our `act()` and `extract()` syntax, and can mix it with your own code as needed. You also have full control of the prompts at both the action and agent level.

```ts
import { z } from 'zod';

// `agent` is a Magnitude browser agent instance (see the repo for setup)

// Magnitude can handle high-level tasks
await agent.act('Create an issue', {
  // Optionally pass data that the agent will use where appropriate
  data: {
    title: 'Use Magnitude',
    description: 'Run "npx create-magnitude-app" and follow the instructions',
  },
});

// It can also handle low-level actions
await agent.act('Drag "Use Magnitude" to the top of the in progress column');

// Intelligently extract data based on the DOM content matching a provided zod schema
const tasks = await agent.extract(
  'List in progress issues',
  z.array(z.object({
    title: z.string(),
    description: z.string(),
    // Agent can extract existing data or new insights
    difficulty: z.number().describe('Rate the difficulty between 1-5'),
  })),
);
```
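
One nice side effect (assuming the snippet above): because `extract()` takes a zod schema, the extracted data can be handled downstream as ordinary typed objects.

```ts
// Plain TypeScript over the extracted objects
for (const task of tasks) {
  if (task.difficulty >= 4) {
    console.log(`Tricky: ${task.title} - ${task.description}`);
  }
}
```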

We have a setup script that makes it trivial to get started with an example; just run "npx create-magnitude-app". We’d love to hear what you think!

Repo: https://github.com/magnitudedev/magnitude

mertunsall:
In browser-use, we combine vision + browser extraction and we find that this gives the most reliable agent: https://github.com/browser-use/browser-use :)

We recently gave the model access to a file system so that it never forgets what it's supposed to do, and we already have a ton of users who are very happy with the recent reliability updates!

We also have workflow-use in beta, which is basically the "cache a workflow" idea mentioned in the comments here: https://github.com/browser-use/workflow-use
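
For anyone unfamiliar with the idea, here is a rough conceptual sketch (illustrative only, not workflow-use's actual API) of what "caching" a workflow means: replay recorded deterministic steps, and hand a step back to the agent only when it breaks.

```ts
// Conceptual sketch only: replay cached deterministic steps, fall back to an
// agent (hypothetical `runWithAgent`) when a step no longer works.
type CachedStep = { description: string; replay: () => Promise<void> };

async function runCachedWorkflow(
  steps: CachedStep[],
  runWithAgent: (task: string) => Promise<void>
): Promise<void> {
  for (const step of steps) {
    try {
      await step.replay();                   // fast, deterministic path
    } catch {
      await runWithAgent(step.description);  // agent repairs the broken step
    }
  }
}
```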

Let us know what you guys think - we are shipping hard and fast!

anerli:
So there’s a very big difference between the sort of vision approach browser-use takes and what we do.

browser-use is still strongly coupled to the DOM for interaction because of the set-of-marks approach it uses (for context: those little rainbow boxes you see around the elements). This means it’s very difficult to get it to reliably perform interactions beyond straightforward clicking and typing, such as drag and drop or interacting with canvas.

Since we interact based purely on what we see on the screen, using pixel coordinates, those sorts of interactions are much more natural for us and much more reliable. If you don't believe me, I encourage you to try to get both Magnitude and browser-use to drag and drop cards on a Kanban board :)
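
To make that concrete, a coordinate-based drag boils down to something like the following rough sketch in plain Playwright terms (not either library's actual code); a set-of-marks approach would instead need resolvable DOM elements for both the card and the target column.

```ts
// Sketch only: `page` is a Playwright Page, and the coordinates would come
// from a vision model rather than from DOM selectors.
const card = { x: 412, y: 318 };   // hypothetical pixel location of the card
const column = { x: 760, y: 240 }; // hypothetical pixel location of the target column

await page.mouse.move(card.x, card.y);
await page.mouse.down();
await page.mouse.move(column.x, column.y, { steps: 20 }); // intermediate moves so drag handlers fire
await page.mouse.up();
```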

Regardless, best of luck!

nikisweeting:
In our experience, DOM-based interaction is more repeatable and performant than vision/x,y-based interaction, but each has its tradeoffs; as you said, click-and-drag is harder when the source and target aren't classic DOM elements (e.g. canvas). We'll likely add x,y-based interaction as a fallback method at some point.
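
A minimal sketch of what that fallback could look like (assuming Playwright and a hypothetical vision-based locator; not browser-use's actual code):

```ts
import type { Page } from 'playwright';

// Try the cheap, repeatable DOM path first; fall back to vision-derived
// pixel coordinates when no usable element resolves (e.g. canvas targets).
async function clickWithFallback(
  page: Page,
  selector: string,
  visionLocate: () => Promise<{ x: number; y: number }> // hypothetical vision fallback
): Promise<void> {
  const el = page.locator(selector);
  if (await el.count() > 0) {
    await el.first().click();
  } else {
    const { x, y } = await visionLocate();
    await page.mouse.click(x, y);
  }
}
```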