
186 points by syntax-sherlock | 5 comments

I got tired of playwright-mcp eating through Claude's 200K token limit, so I built this using the new Claude Skills system. Built it with Claude Code itself.

Instead of sending accessibility tree snapshots on every action, Claude just writes Playwright code and runs it. You get back screenshots and console output. That's it.
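
For illustration, each action roughly boils down to a tiny throwaway script like this (a hypothetical sketch, not the skill's actual output; the URL and file names are made up):

    // Hypothetical example of the kind of script Claude writes and runs under this skill.
    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      // Relay console output back to the agent instead of an accessibility-tree snapshot.
      page.on('console', (msg) => console.log(`[console] ${msg.type()}: ${msg.text()}`));

      await page.goto('https://example.com');
      await page.screenshot({ path: 'result.png', fullPage: true });

      await browser.close();
    })();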

314 lines of instructions vs a persistent MCP server. Full API docs only load if Claude needs them.

Same browser automation, way less overhead. Works as a Claude Code plugin or manual install.

Token limit issue: https://github.com/microsoft/playwright-mcp/issues/889

Claude Skills docs: https://docs.claude.com/en/docs/claude-code/skills

1. rapatel0 ◴[] No.45644391[source]
I think this is actually the biggest threat to the current "AI bubble": model efficiency and the diffusion of models to open source. It's probably time to start hedging bets on Nvidia.
replies(1): >>45644892 #
2. philipallstar ◴[] No.45644892[source]
Why would OSS models threaten Nvidia?
replies(1): >>45645119 #
3. ISV_Damocles ◴[] No.45645119[source]
Most of the big OSS AI codebases (LLM and diffusion, at least) now have code that works on any GPU, not just nVidia's. There's a slight performance benefit to sticking with nVidia, but once you need to split work across multiple GPUs, you can do a cost-benefit analysis and decide that, say, 12 AMD GPUs are faster than 8 nVidia GPUs, and cheaper as well.

Then nVidia's moat begins to shrink because they need to offer their GPUs at a somewhat reduced price to try to keep their majority share.
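
To make that cost-benefit comparison concrete, here's a toy calculation with entirely invented prices and throughput numbers (real figures vary by card, workload, and interconnect):

    // Toy cost-benefit sketch; every number below is hypothetical, for illustration only.
    interface GpuConfig {
      name: string;
      count: number;
      unitPriceUsd: number;       // hypothetical price per card
      relativeThroughput: number; // hypothetical per-card throughput
    }

    const configs: GpuConfig[] = [
      { name: '8x nVidia', count: 8,  unitPriceUsd: 30_000, relativeThroughput: 1.0 },
      { name: '12x AMD',   count: 12, unitPriceUsd: 15_000, relativeThroughput: 0.7 },
    ];

    for (const c of configs) {
      const cost = c.count * c.unitPriceUsd;
      const throughput = c.count * c.relativeThroughput;
      console.log(`${c.name}: $${cost}, throughput ${throughput.toFixed(1)}, ` +
                  `throughput per $10k: ${((throughput / cost) * 10_000).toFixed(2)}`);
    }

With these made-up numbers the 12-GPU AMD box comes out both cheaper ($180k vs $240k) and slightly faster in aggregate, which is the shape of trade-off described above.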

replies(2): >>45645464 #>>45649377 #
4. lmeyerov ◴[] No.45645464{3}[source]
Share can go up or down and it matters less if consumption keeps growing this crazily. We now spend more per dev on their personal-use inference providers than on their home devices, so inference chips are effectively their new personal computers...
5. epolanski ◴[] No.45649377{3}[source]
> There's a slight performance benefit to sticking with nVidia

In training, not in inference and not in perf/$.