
313 points by felarof | 2 comments

Hi HN - we're Nithin and Nikhil, twin brothers and founders of nxtscape.ai (YC S24). We're building Nxtscape ("next-scape") - an open-source, agentic browser for the AI era.

-- Why bother building a new browser?

For the first time since Netscape was released in 1994, it feels like we can reimagine browsers from scratch for the age of AI agents. The web browser of tomorrow might not look like what we have today.

We saw how tools like Cursor gave developers a 10x productivity boost, yet the browser—where everyone else spends their entire workday—hasn't fundamentally changed.

And honestly, we feel like we're constantly fighting the browser we use every day. It's not one big thing, but a series of small, constant frustrations. I'll have 70+ tabs open from three different projects and completely lose my train of thought. And simple stuff like reordering Tide Pods from Amazon or filling out forms shouldn't need our full attention anymore. AI can handle all of this, and that's exactly what we're building.

Here's a demo of our early version: https://dub.sh/nxtscape-demo

-- What makes us different

We know others are exploring this space (Perplexity, Dia), but we want to build something open-source and community-driven. We're not a search or ads company, so we can focus on being privacy-first: Ollama integration, BYOK (Bring Your Own Keys), and a built-in ad blocker.
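To make "privacy-first" concrete, here's a rough sketch of what a BYOK provider config could look like. This is purely illustrative (hypothetical names, not our actual code); the point is that keys stay on your machine, and a local Ollama endpoint needs no key at all.

```typescript
// Illustrative BYOK provider config -- hypothetical shape, not Nxtscape's code.
// Keys are read locally and sent only to the provider you chose; Ollama's
// default local endpoint needs no key at all.
interface ModelProvider {
  name: string;
  baseUrl: string;  // endpoint that receives chat requests
  apiKey?: string;  // BYOK: stored on-device, never proxied through our servers
}

const providers: ModelProvider[] = [
  { name: "ollama", baseUrl: "http://localhost:11434/v1" }, // fully local
  {
    name: "openai",
    baseUrl: "https://api.openai.com/v1",
    apiKey: process.env.OPENAI_API_KEY, // supplied by the user, kept locally
  },
];
```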

Btw, we love what Brave started and stood for, but they've since spread themselves too thin across crypto, search, etc. We are laser-focused on one thing: making the browser work for YOU with AI. And unlike Arc (which we also loved, but which was abandoned), we're 100% open source. Fork us if you don't like our direction.

-- Our journey hacking a new browser

To build this, we had to fork Chromium. Honestly, it feels like the only viable path today; we've seen others like Brave (which started on Electron) and Microsoft Edge learn this the hard way.

We also started by asking: why not just build an extension? But we realized we needed more control, for much the same reason Cursor forked VSCode. For example, Chrome has something called the Accessibility Tree: a cleaner, semantic version of the DOM that screen readers use. It's perfect for AI agents trying to understand pages, but you can't get at it through the extension APIs.
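To give a feel for what that tree looks like, here's a rough sketch that pulls it over the Chrome DevTools Protocol, which exposes roughly the same data a fork can read natively. The filtering is illustrative, not how our agent actually consumes it:

```typescript
// Sketch: dump Chromium's accessibility tree via the DevTools Protocol.
// Assumes Chrome was started with --remote-debugging-port=9222.
import CDP from "chrome-remote-interface";

async function dumpSemanticTree(url: string): Promise<void> {
  const client = await CDP();
  const { Page, Accessibility } = client;
  try {
    await Page.enable();
    await Accessibility.enable();
    await Page.navigate({ url });
    await Page.loadEventFired();

    // The AX tree is the DOM distilled to roles, names, and states --
    // far closer to what an agent needs than raw HTML.
    const { nodes } = await Accessibility.getFullAXTree();
    for (const node of nodes) {
      if (node.ignored) continue; // hidden from assistive tech
      const role = node.role?.value;
      const name = node.name?.value;
      if (role && name) console.log(`${role}: ${name}`);
    }
  } finally {
    await client.close();
  }
}

dumpSemanticTree("https://news.ycombinator.com").catch(console.error);
```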

That said, working with the 15M-line C++ Chromium codebase has been an adventure. We've both worked on infra at Google and Meta, but Chromium is a different beast. Tools like Cursor's indexing completely break at this scale, so we've had to get really good with grep and vim. And the build times are brutal: even with our maxed-out M4 Max MacBook, a full build takes about 3 hours.

Full disclosure: we are still very early, but we have a working prototype on GitHub. It includes an early version of a "local Manus" style agent that can automate simple web tasks, plus an AI sidebar for questions, and other productivity features (grouping tabs, saving/resuming sessions, etc.).
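For intuition, the agent boils down to a perceive-decide-act loop. The sketch below is illustrative only; readAccessibilityTree, callModel, and executeAction are hypothetical stand-ins for the real plumbing:

```typescript
// Illustrative agent loop, not the shipped implementation.
type Action =
  | { kind: "click"; nodeId: number }
  | { kind: "type"; nodeId: number; text: string }
  | { kind: "done"; summary: string };

// Hypothetical helpers standing in for the real browser/LLM plumbing.
declare function readAccessibilityTree(): Promise<string>;
declare function callModel(goal: string, pageState: string): Promise<Action>;
declare function executeAction(action: Action): Promise<void>;

async function runTask(goal: string, maxSteps = 20): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const pageState = await readAccessibilityTree(); // perceive: semantic snapshot
    const action = await callModel(goal, pageState); // decide: model picks next action
    if (action.kind === "done") return action.summary;
    await executeAction(action);                     // act: click/type in the page
  }
  throw new Error(`Gave up on "${goal}" after ${maxSteps} steps`);
}
```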

Looking forward to any and all comments!

You can download the browser from our github page: https://github.com/nxtscape/nxtscape

xena No.44329772
Do you respect robots.txt?
replies(2): >>44329854 #>>44332349 #
felarof No.44329854
No, not today.

But I wonder whether it matters if the agent is mostly used for "human" use cases rather than scraping?

replies(6): >>44329974 #>>44330004 #>>44330103 #>>44331512 #>>44332369 #>>44332715 #
mattigames No.44330103
What do you mean? This AI cannot scrape multiple links automatically? Like "make a summary of all the recipes linked on this page" kind of stuff? If it can, it definitely meets the definition of scraping.
replies(1): >>44330560 #
grepexdev No.44330560
I think what he means is that it's not doing general crawling and scraping; it uses a more targeted approach, equivalent to a user going to each of those sites, just more efficiently.
replies(2): >>44331499 #>>44331986 #
vharish No.44331986
I'm guessing that would ideally mean reading only the content the user would otherwise have gone through. I wonder whether that's actually the case, and whether it's guaranteed.

Maybe a new standard, plus user-configurable per-site permissions, would make this better?

I'm curious to see how this turns out.

replies(1): >>44332395 #
lolinder No.44332395
> only reading the content the user would otherwise have gone through.

Why? My user agent is configured to make things easier for me and allow me to access content that I wouldn't otherwise choose to access. Dark mode allows me to read late at night. Reader mode allows me to read content that would otherwise be unbearably cluttered. I can zoom in on small text to better see it.

Should my reader mode or dark mode or zoom feature have to respect robots.txt because otherwise they'd allow me to access content that I would otherwise have chosen to leave alone?

replies(1): >>44332626 #
mattigames No.44332626
Yeah, no. None of that helps you bypass the ads on their website*, but scraping and summarizing does, so it's wildly different for monetization purposes, and in most cases monetization determines whether a given website survives at all.

I know it's not completely true: reader mode can help you bypass the ads _after_ you've already had a peek at the cluttered version, but if you want to go to the next page you have to re-enable reader mode, and so on, so it's a very granular form of ad blocking, while many AI use cases are about bypassing human viewing entirely. The other thing is that reader mode isn't very popular, so it's not a significant threat.

*or other links on their website, informative banners, etc.

replies(3): >>44332684 #>>44332694 #>>44332806 #
debazel No.44332806
robots.txt is not there to protect your ad-based business model. It's meant for automated scrapers that recursively retrieve all pages on your website, which this browser is not doing at all. What a user does with a page after it has entered their browser is their own prerogative.
replies(1): >>44332884 #
mattigames No.44332884
>It's meant for automated scrapers that recursively retrieve all pages on your website, _which this browser is not doing at all_

AFAIK this is false: this browser can do things like "summarize all the cooking recipes linked on this page" and therefore act exactly like a scraper (even if at a smaller scale than most scrapers).

If tomorrow, magically, all phones and all computers had an ad-blocking browser installed and set as the default, a big chunk of the economy would collapse. So while I can see the philosophical value of "what a user does with a page after it has entered their browser is their own prerogative," the pragmatist in me knows that if all users embraced and enforced that, it would have grave repercussions for the livelihoods of many.

replies(1): >>44332910 #
lolinder No.44332910
https://www.robotstxt.org/faq/what.html

> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

There's nothing recursive about "summarize all the cooking recipes linked on this page". That's a single-level iterative loop.

I will grant that I should alter my original statement: if OP wanted to respect robots.txt when it receives a request that should be interpreted as an instruction to recursively fetch pages, then I'd think that's an appropriate use of robots.txt, because it's not materially different from implementing a web crawler by hand in code.

But that represents a tiny subset of the queries that will go through a tool like this, and respecting robots.txt for non-recursive requests would lead to silly outcomes like the browser refusing to load reddit.com [0].

[0] https://www.reddit.com/robots.txt
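A minimal sketch of that middle ground: consult robots.txt only when a request fans out across harvested links, and fetch the page the user asked for directly without any check. The parsing below is deliberately naive (no wildcards, no Allow lines), so treat it as illustrative:

```typescript
// Naive robots.txt check, applied only to crawl-like fan-out fetches.
async function isDisallowed(url: string, userAgent = "*"): Promise<boolean> {
  const target = new URL(url);
  const res = await fetch(new URL("/robots.txt", target.origin));
  if (!res.ok) return false; // no robots.txt means nothing to respect

  let applies = false;
  for (const raw of (await res.text()).split("\n")) {
    const line = raw.split("#")[0].trim(); // strip comments
    const colon = line.indexOf(":");
    if (colon < 0) continue;
    const field = line.slice(0, colon).trim().toLowerCase();
    const value = line.slice(colon + 1).trim();
    if (field === "user-agent") applies = value === "*" || value === userAgent;
    else if (field === "disallow" && applies && value !== "" &&
             target.pathname.startsWith(value)) return true;
  }
  return false;
}

// "Summarize every recipe linked here" is the crawl-like case: check each
// harvested link, but never gate the page the user opened themselves.
async function fetchLinkedPages(links: string[]): Promise<string[]> {
  const pages: string[] = [];
  for (const link of links) {
    if (await isDisallowed(link)) continue; // honor robots.txt on fan-out
    pages.push(await (await fetch(link)).text());
  }
  return pages;
}
```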

replies(1): >>44332996 #
mattigames No.44332996
The concept of robots.txt comes from a different time, when nobody envisioned that users would one day interact with websites through commands written in plain English (including commands that touch multiple pages). So the debate about whether AI browsers should or shouldn't respect it is senseless. Instead, if this kind of usage takes off, it would probably make more sense to create a new standard for such use cases, something like an AI-browsers.txt, to make the intent of blocking (or allowing) AI browsing capabilities explicit.
replies(1): >>44333100 #
lolinder No.44333100
Alright, I think we can agree on that. I'll see you over in that new standardization discussion fighting fiercely for protections to make sure companies don't abuse it to compromise the open web.