
313 points by felarof | 3 comments

Hi HN - we're Nithin and Nikhil, twin brothers and founders of nxtscape.ai (YC S24). We're building Nxtscape ("next-scape") - an open-source, agentic browser for the AI era.

-- Why bother building a new browser? For the first time since Netscape was released in 1994, it feels like we can reimagine browsers from scratch for the age of AI agents. The web browser of tomorrow might not look like what we have today.

We saw how tools like Cursor gave developers a 10x productivity boost, yet the browser—where everyone else spends their entire workday—hasn't fundamentally changed.

And honestly, we feel like we're constantly fighting the browser we use every day. It's not one big thing, but a series of small, constant frustrations. I'll have 70+ tabs open from three different projects and completely lose my train of thought. And simple stuff like reordering Tide Pods from Amazon or filling out forms shouldn't need our full attention anymore. AI can handle all of this, and that's exactly what we're building.

Here's a demo of our early version: https://dub.sh/nxtscape-demo

-- What makes us different: We know others are exploring this space (Perplexity, Dia), but we want to build something open-source and community-driven. We're not a search or ads company, so we can focus on being privacy-first: Ollama integration, BYOK (Bring Your Own Keys), and a built-in ad blocker.

Btw we love what Brave started and stood for, but they've now spread themselves too thin across crypto, search, etc. We are laser-focused on one thing: making browsers work for YOU with AI. And unlike Arc (which we also loved, but which was abandoned), we're 100% open source. Fork us if you don't like our direction.

-- Our journey hacking a new browser: To build this, we had to fork Chromium. Honestly, it feels like the only viable path today; we've seen others like Brave (which started on Electron) and Microsoft Edge learn this the hard way.

We also started by asking: why not just build an extension? But we realized we needed more control, similar to why Cursor forked VSCode. For example, Chrome has this thing called the Accessibility Tree: a cleaner, semantic version of the DOM that screen readers use. It's perfect for AI agents to understand pages, but you can't use it through extension APIs.
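To make that concrete, here's a rough sketch (ours, not Nxtscape's actual code) of why the accessibility tree is attractive for agents: every node already carries a semantic role and a human-readable name, so flattening the tree yields a compact page description for an LLM. The node shape below loosely follows what Chromium exposes through the DevTools Protocol's Accessibility.getFullAXTree; the field names are simplified.

    // TypeScript sketch. AXNode loosely mirrors CDP's Accessibility domain;
    // this is an illustration, not Nxtscape's implementation.
    interface AXNode {
      nodeId: string;
      role?: { value: string };  // e.g. "button", "textbox", "link"
      name?: { value: string };  // accessible label, e.g. "Add to cart"
      childIds?: string[];
    }

    function flattenAXTree(nodes: AXNode[], rootId: string): string[] {
      const byId = new Map(nodes.map((n) => [n.nodeId, n] as const));
      const lines: string[] = [];
      const walk = (id: string, depth: number) => {
        const node = byId.get(id);
        if (!node) return;
        const role = node.role?.value ?? "generic";
        const name = node.name?.value ?? "";
        // Skip unnamed generic wrappers to keep the agent's context small.
        if (name || role !== "generic") {
          lines.push(`${"  ".repeat(depth)}[${role}] ${name}`);
        }
        for (const child of node.childIds ?? []) walk(child, depth + 1);
      };
      walk(rootId, 0);
      return lines;  // e.g. ["[link] Home", "  [button] Add to cart", ...]
    }

Compared to raw HTML, the flattened output is far smaller and reads the way a person would describe the page, which is what makes it such a good substrate for an agent.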

That said, working with the 15M-line C++ Chromium codebase has been an adventure. We've both worked on infra at Google and Meta, but Chromium is a different beast. Tools like Cursor's indexing completely break at this scale, so we've had to get really good with grep and vim. And the build times are brutal: even with our maxed-out M4 Max MacBook, a full build takes about 3 hours.

Full disclosure: we are still very early, but we have a working prototype on GitHub. It includes an early version of a "local Manus" style agent that can automate simple web tasks, plus an AI sidebar for questions, and other productivity features (grouping tabs, saving/resuming sessions, etc.).

Looking forward to any and all comments!

You can download the browser from our github page: https://github.com/nxtscape/nxtscape

xena No.44329772
Do you respect robots.txt?
replies(2): >>44329854 >>44332349
felarof No.44329854
No, not today.

But I wonder if it matters when the agent is mostly being used for "human" use cases and not scraping?

replies(6): >>44329974 >>44330004 >>44330103 >>44331512 >>44332369 >>44332715
qualeed No.44331512
There's no reason not to respect it.

If your browser behaves, it's not going to be excluded in robots.txt.

If your browser doesn't behave, you should at least respect robots.txt.

If your browser doesn't behave, and you continue to ignore robots.txt, that's just... shitty.

replies(1): >>44332789
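(For what it's worth, honoring robots.txt before an agent-initiated fetch is not much code. Below is a minimal TypeScript sketch: the parsing is deliberately naive (no wildcard, Allow, or group-merging support), so a real browser would want a tested parser plus caching, and the "NxtscapeAgent" token in the usage line is made up.)

    // Naive robots.txt check, illustration only.
    async function isAllowed(url: string, agentToken: string): Promise<boolean> {
      const target = new URL(url);
      const res = await fetch(new URL("/robots.txt", target.origin));
      if (!res.ok) return true;  // no robots.txt: treat as allowed

      let applies = false;
      const disallowed: string[] = [];
      for (const raw of (await res.text()).split("\n")) {
        const line = raw.split("#")[0].trim();  // strip comments
        const i = line.indexOf(":");
        if (i < 0) continue;
        const field = line.slice(0, i).trim().toLowerCase();
        const value = line.slice(i + 1).trim();
        if (field === "user-agent") {
          // Does this group apply to us? "*" or a substring match.
          applies = value === "*" ||
            agentToken.toLowerCase().includes(value.toLowerCase());
        } else if (applies && field === "disallow" && value) {
          disallowed.push(value);
        }
      }
      return !disallowed.some((prefix) => target.pathname.startsWith(prefix));
    }

    // e.g. if (await isAllowed(url, "NxtscapeAgent")) { /* fetch the page */ }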
lolinder No.44332789
> If your browser behaves, it's not going to be excluded in robots.txt.

No, it's common practice to allow Googlebot and deny all other crawlers by default [0].

This is within their rights when it comes to true scrapers, but it's part of why I'm very uncomfortable with the idea of applying robots.txt to what are clearly user agents. It sets a precedent where it's not inconceivable that we have websites curating allowlists of user agents like they already do for scrapers, which would be very bad for the web.

[0] As just one example: https://www.404media.co/google-is-the-only-search-engine-tha...

replies(1): >>44332992
qualeed No.44332992
> clearly user agents

I am not sure I agree that an AI-aided browser, one that will scrape sites and aggregate that information, is "clearly" a user agent.

If this browser were to gain traction and ended up being abusive to the web, that would be bad too.

Where do you draw the line of crawler vs. automated "user agent"? Is it a certain number of web requests per minute? How are you defining "true scraper"?

replies(1): >>44333070
lolinder No.44333070
I draw the line where robotstxt.org (the semi-official home of robots.txt) draws the line [0]:

> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

To me "recursive" is key—it transforms the traffic pattern from one that strongly resembles that of a human to one that touches every page on the site, breaks caching by visiting pages humans wouldn't typically, and produces not just a little bit more but orders of magnitude more traffic.

I was persuaded in another subthread that Nxtscape should respect robots.txt if a user issues a recursive request. I don't think it should if the request is "open these 5 subreddits and summarize the most popular links posted since yesterday", because the resulting traffic pattern is nearly identical to what I'd have done by hand (especially if the browser implements proper rate limiting, which I believe it should; see the sketch below).

[0] https://www.robotstxt.org/faq/what.html
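To illustrate the rate limiting mentioned above (our sketch, not anything Nxtscape ships): serializing requests per host with a minimum gap keeps an "open these 5 subreddits" task close to human browsing rates. The 2-second gap is an arbitrary illustrative value.

    // Per-host limiter: each origin sees at most one request per minGapMs.
    class HostRateLimiter {
      private nextFree = new Map<string, number>();  // host -> earliest slot (ms)
      constructor(private minGapMs = 2000) {}

      async fetch(url: string, init?: RequestInit): Promise<Response> {
        const host = new URL(url).host;
        const now = Date.now();
        const at = Math.max(now, this.nextFree.get(host) ?? 0);
        this.nextFree.set(host, at + this.minGapMs);  // reserve the next slot
        if (at > now) await new Promise((r) => setTimeout(r, at - now));
        return fetch(url, init);
      }
    }

    // Five reddit.com fetches go out one every two seconds, not all at once.
    const limiter = new HostRateLimiter();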