144 points by anerli | 5 comments

Hey HN, Anders and Tom here. We had a post about our AI test automation framework 2 months ago that got a decent amount of traction (https://news.ycombinator.com/item?id=43796003).

We got some great feedback from the community; the most positive response was to the vision-first approach used in our browser agent. However, many wanted to use the underlying agent outside the testing domain. So today, we're releasing our fully featured AI browser automation framework.

You can use it to automate tasks on the web, integrate between apps without APIs, extract data, test your web apps, or as a building block for your own browser agents.

Traditionally, browser automation could only be done via the DOM, even though that’s not how humans use browsers. Most browser agents are still stuck in this paradigm. With a vision-first approach, we avoid relying on flaky DOM navigation and perform better on complex interactions found in a broad variety of sites, for example:

- Drag and drop interactions

- Data visualizations, charts, and tables

- Legacy apps with nested iframes

- Canvas- and WebGL-heavy sites (like design tools or photo editing)

- Remote desktops streamed into the browser

To interact accurately with the browser, we use visually grounded models to execute precise actions based on pixel coordinates. The model used by Magnitude must be smart enough to plan out actions but also able to execute them. Not many models are both smart *and* visually grounded. We highly recommend Claude Sonnet 4 for the best performance, but if you prefer open source, we also support Qwen-2.5-VL 72B.
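
To make the pixel-coordinate idea concrete, here's a rough sketch of what a visually grounded click boils down to. This is illustrative only, not Magnitude's internals: `groundTarget` is a hypothetical stand-in for a call to a grounded vision model.

```ts
import { chromium } from 'playwright';

// What a grounding model returns for a target: pixel coordinates
// on the current screenshot, not a DOM selector.
interface GroundedTarget {
  x: number;
  y: number;
}

// Stand-in for a call to a visually grounded model (e.g. Claude Sonnet 4
// or Qwen-2.5-VL) that maps a screenshot + an instruction like
// 'the "New Issue" button' to coordinates. Declared, not implemented.
declare function groundTarget(
  screenshot: Buffer,
  instruction: string,
): Promise<GroundedTarget>;

async function clickByVision(instruction: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const screenshot = await page.screenshot();
  const { x, y } = await groundTarget(screenshot, instruction);

  // No selectors involved: the click lands at the model's coordinates,
  // which is why canvases, nested iframes, and streamed desktops
  // aren't special cases.
  await page.mouse.click(x, y);

  await browser.close();
}
```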

Most browser agents never make it to production. This is because of (1) the flaky DOM navigation mentioned above, and (2) the lack of control most browser agents offer. The dominant paradigm is: give the agent a high-level task + tools and hope for the best. This quickly falls apart for production automations that need to be reliable and specific. With Magnitude, you have fine-grained control over the agent with our `act()` and `extract()` syntax, and can mix it with your own code as needed. You also have full control of the prompts at both the action and agent level.

```ts
import { z } from 'zod';

// Magnitude can handle high-level tasks
await agent.act('Create an issue', {
  // Optionally pass data that the agent will use where appropriate
  data: {
    title: 'Use Magnitude',
    description: 'Run "npx create-magnitude-app" and follow the instructions',
  },
});

// It can also handle low-level actions
await agent.act('Drag "Use Magnitude" to the top of the in progress column');

// Intelligently extract data based on the DOM content matching a provided zod schema
const tasks = await agent.extract(
  'List in progress issues',
  z.array(z.object({
    title: z.string(),
    description: z.string(),
    // Agent can extract existing data or new insights
    difficulty: z.number().describe('Rate the difficulty between 1-5'),
  })),
);
```
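
One thing the snippet implies but doesn't show: because `extract()` takes a zod schema, the result comes back fully typed, so it composes with ordinary TypeScript. A hypothetical continuation:

```ts
// `tasks` is typed as { title: string; description: string; difficulty: number }[]
// via the zod schema, so plain TypeScript takes over from here.
const hardTasks = tasks.filter((t) => t.difficulty >= 4);

for (const task of hardTasks) {
  // Hypothetical follow-up action, mixing agent steps with your own control flow
  await agent.act(`Add a "needs review" label to the issue "${task.title}"`);
}
```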

We have a setup script that makes it trivial to get started with an example: just run "npx create-magnitude-app". We’d love to hear what you think!

Repo: https://github.com/magnitudedev/magnitude

axlee No.44391985
Using this for testing instead of regular Playwright must 10000x the cost (and the runtime), doesn't it? At what point do the benefits outweigh the costs?
replies(1): >>44392063 #
anerli No.44392063
I think it depends a lot on how much you value your own time, since it's quite time-consuming to write and update Playwright scripts. It's gonna save you developer hours to write automations in natural language rather than messing around with and fixing selectors. It's also able to handle tasks that Playwright wouldn't be able to do at all - like extracting structured data from a messy/ambiguous DOM and adapting automatically to changing situations.

You can also use cheaper models depending on your needs; for example, Qwen 2.5 VL 72B is quite affordable and works well for most situations.

replies(2): >>44392213 #>>44394339 #
plufz No.44392213
But we can use an LLM to write that script, and give that agent access to a browser to find DOM selectors etc. And then we have a stable script where we can, if needed, manually fix any LLM bugs just once…? I’m sure there are use cases with messy selectors as you say, but to me it feels like most cases are better covered by generating scripts.
replies(1): >>44392400 #
1. anerli No.44392400
Yeah, we've thought about this approach a lot - but the problem is that if your final program is a brittle script, you're gonna need a way to fix it often - and then you're still depending on recurrently using LLMs/agents. So we think it's better to have the program itself be resilient to change, instead of you/your LLM assistant constantly having to ensure the program is working.
replies(2): >>44393132 #>>44394068 #
2. adenta No.44393132
I wonder if a nice middle ground would be:

- recording the Playwright behind the scenes and storing it

- trying that as a “happy path” first attempt to see if it passes

- if it doesn’t pass, rebuilding it with the AI and vision models

Best of both worlds. The Playwright script is more of a cache than a test.
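
Sketching that idea (all names here are hypothetical, not part of Magnitude): run the recorded Playwright path inside a try/catch and only hand off to the vision agent when it throws:

```ts
import type { Page } from 'playwright';

// Hypothetical cached "happy path": a plain recorded Playwright script.
async function createIssueFast(page: Page, title: string): Promise<void> {
  await page.click('button#new-issue');
  await page.fill('input[name="title"]', title);
  await page.click('button[type="submit"]');
}

async function createIssue(page: Page, agent: any, title: string): Promise<void> {
  try {
    // Cheap, fast, deterministic - no LLM calls on the happy path.
    await createIssueFast(page, title);
  } catch {
    // A selector broke (the page changed): fall back to the vision agent,
    // then ideally re-record the script for next time.
    await agent.act('Create an issue', { data: { title } });
  }
}
```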

replies(1): >>44393186 #
3. anerli No.44393186
I think the difficulty with this approach is: (1) you need a good "lookup" mechanism - given a task, how do you know which cache entry should be loaded? You can do a simple string lookup based on the task content, but when the task might include parameters or data, or be part of a bigger workflow, it gets trickier. (2) You need a good way to detect when to adapt / fall back to the LLM. When the cache is only a Playwright script, it can be difficult to know when it falls out of the existing trajectory. You can check for selector timeouts and such, but you might be missing a lot of false negatives.
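
To make problem (1) concrete: a naive lookup might key the cache on the task string plus the shape of its data, so the same task with different values hits the same recording. A hypothetical sketch:

```ts
// Naive cache key: the task string plus the *shape* of the data, not its
// values, so 'Create an issue' with different titles shares one recording.
function cacheKey(task: string, data?: Record<string, unknown>): string {
  const shape = data ? Object.keys(data).sort().join(',') : '';
  return `${task}::${shape}`;
}

cacheKey('Create an issue', { title: 'Use Magnitude' });
// => 'Create an issue::title'
// ...but this breaks as soon as the data's values change which UI path is
// taken, or the task is one step inside a larger workflow.
```
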
4. lyime No.44394068
Are you sure? Couldn't you just go back to the LLM if the script breaks? Pages change, but not that often in general.

It seems like a hybrid approach would scale better and be significantly cheaper.

replies(1): >>44394319 #
5. anerli No.44394319
We do believe in a hybrid approach where a fast/deterministic representation is saved - but we think there's a more seamless way, where the framework itself is high-level and manages these details by caching the underlying actions so they can be re-run.
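
A rough sketch of what caching the underlying actions could look like (hypothetical shapes, not Magnitude's actual mechanism): store the grounded steps the agent executed, replay them directly, and drop back to the model when replay fails:

```ts
import type { Page } from 'playwright';

// Hypothetical record of the low-level steps the agent actually executed.
type CachedStep =
  | { kind: 'click'; x: number; y: number }
  | { kind: 'type'; text: string };

async function replay(page: Page, steps: CachedStep[]): Promise<void> {
  for (const step of steps) {
    if (step.kind === 'click') await page.mouse.click(step.x, step.y);
    else await page.keyboard.type(step.text);
  }
}

async function run(
  page: Page,
  agent: any, // the agent's real type would come from the framework
  task: string,
  cache: Map<string, CachedStep[]>,
): Promise<void> {
  const cached = cache.get(task);
  if (cached) {
    try {
      // Fast, deterministic, no LLM calls. Detecting that a replay has
      // silently drifted off-trajectory is the hard part noted upthread.
      return await replay(page, cached);
    } catch {
      cache.delete(task); // stale trajectory: fall through to the agent
    }
  }
  await agent.act(task); // the framework would re-record steps here
}
```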