
186 points by syntax-sherlock | 1 comment

I got tired of playwright-mcp eating through Claude's 200K token limit, so I built this using the new Claude Skills system. Built it with Claude Code itself.

Instead of sending accessibility tree snapshots on every action, Claude just writes Playwright code and runs it. You get back screenshots and console output. That's it.
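
To make that concrete, here's roughly the kind of script Claude might write and run under this approach (a sketch; the URL, selectors, and output path are placeholders, not from the skill itself):

    // Minimal sketch of a script Claude might generate and execute
    // (placeholder URL, selectors, and output path -- not from the skill itself).
    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      // Relay browser console output so it comes back alongside the screenshot.
      page.on('console', (msg) => console.log(`[console:${msg.type()}] ${msg.text()}`));

      await page.goto('https://example.com/login');
      await page.fill('#email', 'user@example.com');
      await page.fill('#password', 'secret');
      await page.click('button[type="submit"]');
      await page.waitForLoadState('networkidle');

      // A screenshot and the console log are the only artifacts returned --
      // no accessibility-tree snapshot per action.
      await page.screenshot({ path: 'after-login.png', fullPage: true });

      await browser.close();
    })();

The token savings come from returning a screenshot plus whatever the script prints, rather than a serialized page tree on every action.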

314 lines of instructions vs a persistent MCP server. Full API docs only load if Claude needs them.

Same browser automation, way less overhead. Works as a Claude Code plugin or manual install.

Token limit issue: https://github.com/microsoft/playwright-mcp/issues/889

Claude Skills docs: https://docs.claude.com/en/docs/claude-code/skills

Rooster61 | No.45644552
I have a few questions about test frameworks that use AI services like this.

1) The examples always seem very generic: "Test Login Functionality, check if search works, etc." Do these actually work well at all once you step outside of the basic smoketest use cases?

2) How do you prevent proprietary data from being read when you are just foisting snapshots over to the AI provider? There's no way I'd be able to use this in any kind of real application where data privacy is a constraint.

replies(3): >>45644611 >>45644980 >>45645114
1. siva7 | No.45645114
> Do these actually work well at all once you step outside of the basic smoketest use cases?

Excellent question... no, beyond basic kindergarten stuff Playwright (with AI) falls apart quickly. Have some OAuth? Good luck configuring Playwright for your exact setup. Need to synthesize all the information available from logs and visuals to debug something? Good luck...
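
For anyone curious what that configuration work looks like, the usual Playwright pattern is a global setup that logs in once and saves storage state for reuse (a sketch with placeholder URLs, selectors, and env vars; real OAuth redirects, MFA, and short-lived tokens are exactly where this gets painful):

    // playwright.config.ts (sketch): reuse a saved session so tests skip the login flow.
    import { defineConfig } from '@playwright/test';

    export default defineConfig({
      globalSetup: './global-setup.ts',
      use: { storageState: 'auth.json' },
    });

    // global-setup.ts (sketch): log in once, persist cookies/localStorage to auth.json.
    import { chromium, type FullConfig } from '@playwright/test';

    export default async function globalSetup(config: FullConfig) {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto('https://app.example.com/login');        // placeholder URL
      await page.fill('#username', process.env.TEST_USER ?? '');
      await page.fill('#password', process.env.TEST_PASS ?? '');
      await page.click('button[type="submit"]');
      await page.waitForURL('**/dashboard');                   // placeholder post-login route
      await page.context().storageState({ path: 'auth.json' });
      await browser.close();
    }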