Why start over? We’ve worked a lot with Chrome headless at our previous company, scraping millions of web pages per day. While it’s powerful, it’s also heavy on CPU and memory. For scraping at scale, building AI agents, or automating websites, that overhead is significant. So we asked ourselves: what if we built a browser that only did what’s absolutely necessary for headless automation?
Our browser is made of the following main components:
- an HTTP loader
- an HTML parser and DOM tree (based on NetSurf libs)
- a JavaScript runtime (V8)
- partial web APIs support (currently DOM and XHR/Fetch)
- and a CDP (Chrome DevTools Protocol) server to allow plug-and-play connection with existing scripts (Puppeteer, Playwright, etc.); see the sketch after this list.
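For example, an existing Puppeteer script only needs to be pointed at the CDP endpoint. Here is a minimal sketch, assuming the browser exposes a CDP WebSocket endpoint at `ws://127.0.0.1:9222` (the address is an assumption; use whatever the CDP server actually prints when it starts):

```ts
// Minimal sketch: drive a CDP-speaking browser from an existing Puppeteer script.
// puppeteer-core ships without a bundled Chromium, so it only connects to an external browser.
import puppeteer from 'puppeteer-core';

async function main() {
  const browser = await puppeteer.connect({
    // Assumed address: replace with the WebSocket endpoint your CDP server reports on startup.
    browserWSEndpoint: 'ws://127.0.0.1:9222',
  });

  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());

  await page.close();
  await browser.disconnect();
}

main().catch(console.error);
```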
The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).
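Concretely, the work we target is pure DOM reads and writes, e.g. pulling structured data out of a page, with no layout or paint step involved. A small sketch of that kind of task, using a hypothetical `collectLinks` helper and a Puppeteer `Page` obtained as in the snippet above:

```ts
import type { Page } from 'puppeteer-core';

// Sketch of a render-free task: collect every link on a page as plain data.
// Everything here goes through DOM access only; nothing needs to be rendered.
async function collectLinks(page: Page): Promise<{ text: string; href: string }[]> {
  return page.$$eval('a[href]', (anchors) =>
    anchors.map((a) => ({
      text: (a.textContent ?? '').trim(),
      href: (a as HTMLAnchorElement).href,
    }))
  );
}
```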
In our current test case, Lightpanda is roughly 10x faster than Chrome headless while using 10x less memory.
It's a work in progress: there are hundreds of Web APIs, and for now we support only some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.
We chose Zig for its seamless integration with C libs and for its comptime feature, which allows us to generate bidirectional native-to-JS APIs (see our zig-js-runtime lib https://github.com/lightpanda-io/zig-js-runtime). And of course for its performance :)
As a company, our business model is a Managed Cloud offering: browser as a service. Currently this is primarily powered by Chrome, but as we integrate more Web APIs it will gradually transition to Lightpanda.
We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?
That's a problem of misaligned economic incentives. If there is a blockchain that enables micro-transactions of 0.000001 cents per request, at a throughput on the order of a million or a billion TPS, then servers have no reason not to accept money in exchange for information instead of using ads to extract eyeball attention.
There is no reason I cannot invoke a command line program, `$fetch_social_media_posts -n 1000`, and get the last thousand posts right there in the console, as long as I provide some valid transactions to the server.
Websites and ads are the wrong solution to the problem of getting paid for serving information, and headless browsers and scraping are the wrong solution to that first wrong solution and the problems it creates.
If internet payments have a minimum payment of 1 cent, then we need payments of 0.1 cents. Once that's achieved, we need a minimum transaction of 0.01 cents. The "micro" in the transaction always needs to get smaller (and faster).
Free competition (or perfect competition) over a well-defined landscape, namely internet protocols, has consistently delivered better-quality goods at lower prices. Money derived from governments is far, far from free competition, let alone well-defined internet protocols, and there is a point at which existing payment methods get stuck and cannot deliver smaller transactions.
I don't personally know where and when that point is, but if I had to guess, existing payment methods reached that minimum point at least a decade ago. In other words, their transaction minimums have to be high enough for them to make a profit. Yes, they could implement microtransactions, but those would not be profitable.