
314 points | felarof | 1 comment

Hi HN - we're Nithin and Nikhil, twin brothers and founders of nxtscape.ai (YC S24). We're building Nxtscape ("next-scape") - an open-source, agentic browser for the AI era.

-- Why bother building a new browser?

For the first time since Netscape was released in 1994, it feels like we can reimagine browsers from scratch for the age of AI agents. The web browser of tomorrow might not look like what we have today.

We saw how tools like Cursor gave developers a 10x productivity boost, yet the browser—where everyone else spends their entire workday—hasn't fundamentally changed.

And honestly, we feel like we're constantly fighting the browser we use every day. It's not one big thing, but a series of small, constant frustrations. I'll have 70+ tabs open from three different projects and completely lose my train of thought. And simple stuff like reordering Tide Pods from Amazon or filling out forms shouldn't need our full attention anymore. AI can handle all of this, and that's exactly what we're building.

Here’s a demo of our early version: https://dub.sh/nxtscape-demo

-- What makes us different

We know others are exploring this space (Perplexity, Dia), but we want to build something open-source and community-driven. We're not a search or ads company, so we can focus on being privacy-first: Ollama integration, BYOK (Bring Your Own Keys), and a built-in ad blocker.

Btw, we love what Brave started and stood for, but they've now spread themselves too thin across crypto, search, etc. We are laser-focused on one thing: making the browser work for YOU with AI. And unlike Arc (which we loved too, but which got abandoned), we're 100% open source. Fork us if you don't like our direction.

-- Our journey hacking a new browser

To build this, we had to fork Chromium. Honestly, it feels like the only viable path today; we've seen others like Brave (which started on Electron) and Microsoft Edge (which started on its own EdgeHTML engine) learn this the hard way.

We also started by asking: why not just build an extension? But we realized we needed more control, for the same reason Cursor forked VSCode. For example, Chrome has something called the accessibility tree - basically a cleaner, semantic version of the DOM that screen readers use. It's perfect for AI agents trying to understand pages, but you can't get at it through the extension APIs.
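To give a feel for what the accessibility tree buys an agent, here's a rough sketch that dumps it using Puppeteer's snapshot API - an approximation of the same Chromium data surfaced over DevTools, not the C++ path we actually use, and `dumpAXTree` is just an illustrative name:

```typescript
// Sketch: print the accessibility tree for a page. Puppeteer's snapshot()
// returns the "interesting" subset of the tree - roles and names, with
// layout noise already stripped - which is roughly the view an agent
// would reason over instead of the raw DOM.
import puppeteer from "puppeteer";

async function dumpAXTree(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle2" });

  const tree = await page.accessibility.snapshot();

  // Walk the tree and print indented role/name pairs.
  const walk = (node: any, depth = 0): void => {
    console.log(`${"  ".repeat(depth)}${node.role}: ${node.name ?? ""}`);
    for (const child of node.children ?? []) walk(child, depth + 1);
  };
  if (tree) walk(tree);

  await browser.close();
}

dumpAXTree("https://example.com");
```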

That said, working with the 15M-line C++ Chromium codebase has been an adventure. We've both worked on infra at Google and Meta, but Chromium is a different beast. Tools like Cursor's indexing completely break at this scale, so we've had to get really good with grep and vim. And the build times are brutal: even on our maxed-out M4 Max MacBook, a full build takes about 3 hours.

Full disclosure: we are still very early, but we have a working prototype on GitHub. It includes an early version of a "local Manus"-style agent that can automate simple web tasks, plus an AI sidebar for questions, and other productivity features (tab grouping, saving/resuming sessions, etc.).

Looking forward to any and all comments!

You can download the browser from our GitHub page: https://github.com/nxtscape/nxtscape

kevinsync:
IMO the comments so far seem to be not seeing the forest for the trees -- I can imagine incredible value for myself in a browser that:

- hooks into a local LLM and writes everything it sees to a local timestamped database (an oversimplification; sketched below)
- parses and summarizes everything you interact with (again, an oversimplification -- this would be tunable and scriptable)
- exposes Puppeteer-like functionality that is both scriptable via code and prompt-to-generate-code
- helps you map shit out, remember stuff, and find forgotten things that are "on the tip of your [digital] tongue"
- learns what you're interested in (again, locally)
- helps proactively filter ads, spam, phishing, and bullshit you don't want to see
- can be wound up and let go to tackle internet tasks autonomously for (and WITH) you (oversimplification), on and on and on
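To sketch what I mean by the logging half - totally hypothetical, assuming a local Ollama on its default port, with a made-up schema and names like `visits` and `logVisit`:

```typescript
// Hypothetical sketch of "local timestamped database + local LLM":
// log every page the browser sees to SQLite, summarize with a local model.
// Nothing here leaves the machine.
import Database from "better-sqlite3";

const db = new Database("browsing-log.db");
db.exec(`CREATE TABLE IF NOT EXISTS visits (
  id INTEGER PRIMARY KEY,
  ts INTEGER NOT NULL,        -- unix epoch millis
  url TEXT NOT NULL,
  title TEXT,
  body TEXT,                  -- visible page text
  summary TEXT                -- filled in by the local model
)`);

async function logVisit(url: string, title: string, body: string) {
  // Summarize with a local Ollama model (assumes default port 11434).
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      prompt: `Summarize this page in two sentences:\n\n${body.slice(0, 4000)}`,
      stream: false,
    }),
  });
  const { response: summary } = await res.json();

  db.prepare(
    "INSERT INTO visits (ts, url, title, body, summary) VALUES (?, ?, ?, ?, ?)"
  ).run(Date.now(), url, title, body, summary);
}
```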

Bookmarks don't cut it anymore when you've got 25 years of them saved.

Falling down deep rabbit holes - because you landed on an attention-desperate website to check one single thing and immediately got distracted - could be reduced by running a bodyguard bot that filters the junk out. Those sites create deafening noise, and you could squash it by telling the bot to only let you know when somebody replies to your comment with something of substance that you might actually want to read.

If it truly works, I can imagine the digital equivalent of a personal assistant + tour manager + doorman + bodyguard + housekeeper + mechanic + etc, that could all be turned off and on with a switch.

Given that the browser is our main portal to the chaos that is the internet in 2025, this is not a bad idea! It really depends on the execution, but yeah.. I'm very curious to see how this project (and projects like it) go.

alisonatwork:
This is basically what Microsoft wants to do with Recall, and they got slammed for it. Which drives me nuts, because it's the only feature from the recent AI hype wave that excites me - the only thing so far that sounds like it would actually make my life better. But then I thought about it a bit more and realized that what I really want is not AI; I just want my computer to have a detailed local history and search functionality.

My computer should remember everything I did on it, period. It should remember every website I visited, exactly how far down I scrolled on each page, every thought I typed and subsequently deleted before posting... And it should have total recall! I should be able to rewind back to any point in time and track exactly what happened, because it's a computer. I already have a lossy memory of stuff that happened yesterday and that's inside my head. The whole point of having my computer remember stuff for me is that it's supposed to do it better than me.

And I want the search to be deterministic. I want to be able to input precise timestamps and include boolean operators. Yes, it would be helpful to have fuzzy matches, recommendations, and a natural language processing layer too, but Lucene et al. already did that acceptably well for local datasets 20+ years ago. It's great we have a common corpus, but I don't care about getting tokenized prose from the corpus, I care about the stuff I did on my own computer!
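And the deterministic part is basically a solved problem locally. A minimal sketch with SQLite's FTS5 - table and column names are illustrative, assuming the kind of local visit log described upthread:

```typescript
// Sketch of deterministic local search: SQLite FTS5 gives exact,
// repeatable results with boolean operators - no model in the loop.
import Database from "better-sqlite3";

const db = new Database("browsing-log.db");
db.exec(`CREATE VIRTUAL TABLE IF NOT EXISTS visits_fts
         USING fts5(url, title, body)`);

// FTS5 MATCH supports AND / OR / NOT and quoted phrases out of the box,
// and the same query always returns the same rows.
const rows = db
  .prepare(
    `SELECT url, title FROM visits_fts
     WHERE visits_fts MATCH ?
     ORDER BY rank`
  )
  .all('"tide pods" AND amazon');
console.log(rows);
```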

From my perspective LLMs don't bring much value on the personalized search front. The way I understand it, the nature of their encoding makes it impossible to get back the data you were actually looking for unless that data was also stored and indexed the traditional way, in which case you could have just skipped the layer of indirection and queried the source data in the first place.

I am also curious to see how all of this develops. I get a sense that the current trend of injecting LLMs everywhere is a temporary stop-gap measure used to give people the illusion of a computer that knows everything because researchers haven't yet figured out how to actually index "everything" in a performant way. But for the use case of personalized search, the computer doesn't actually need to know "everything", it only needs to know about text that was visible on-screen, plus a bit of metadata (time period, cursor position, clipboard, URL etc). If we currently still need an LLM to index that because snapshotting the actual text and throwing it into a traditional index requires too much disk space, okay, but then what's next? Because just being able to have a vague conversation about a thing I kindasorta maybe was doing yesterday is not it. Total recall is it.

immibis:
There's the whole privacy issue. We know every software company exfiltrates as much data as they can get away with, and we know the US government has access to all that data. If Recall is good, then ICE conc-camping individuals based on their search history is good, because that's what Recall will do.