
257 points by danenania | 4 comments

Hey HN! I’m Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here’s more of a tutorial-style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads-down work and is, in effect, a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models.

I believe it is now one of the best tools available for working on large tasks in real-world codebases with AI. It has an effective context window of 2M tokens, and it can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can find relevant context in massive, million-line projects like SQLite, Redis, and Git.
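
To give a rough idea of what a tree-sitter project map involves (this is a simplified sketch of the concept, not Plandex's actual implementation, and the smacker/go-tree-sitter binding used here is just one way to do it): each file is parsed into a syntax tree and reduced to its top-level symbols, so even a huge file contributes only a handful of lines to the map. Something like:

    // Illustrative sketch only, not Plandex's actual code or map format.
    // Parse a Go file with tree-sitter and print its top-level declarations,
    // the kind of compact "map" entry that lets an agent reason about a big
    // codebase without loading whole files into context.
    package main

    import (
        "context"
        "fmt"
        "os"
        "strings"

        sitter "github.com/smacker/go-tree-sitter"
        "github.com/smacker/go-tree-sitter/golang"
    )

    func main() {
        src, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }

        parser := sitter.NewParser()
        parser.SetLanguage(golang.GetLanguage())

        tree, err := parser.ParseCtx(context.Background(), nil, src)
        if err != nil {
            panic(err)
        }

        root := tree.RootNode()
        for i := 0; i < int(root.NamedChildCount()); i++ {
            node := root.NamedChild(i)
            switch node.Type() {
            case "function_declaration", "method_declaration", "type_declaration":
                // Keep only the first line of each declaration as the map entry.
                entry := node.Content(src)
                if idx := strings.IndexByte(entry, '\n'); idx >= 0 {
                    entry = entry[:idx]
                }
                fmt.Println(entry)
            }
        }
    }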

A bit more on some of Plandex’s key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version-controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated into the default model pack soon. A rough sketch of what a model pack bundles follows this list.

- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.
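
To make the model pack idea above a bit more concrete, here's a rough sketch of the kind of thing a pack bundles. The types and field names below are purely illustrative, not Plandex's actual configuration schema:

    // Purely illustrative, not Plandex's real configuration types.
    package config

    // ModelRole picks a model (and sampling settings) for one part of the
    // agent workflow, e.g. planning vs. implementation vs. summarization.
    type ModelRole struct {
        Provider    string  // e.g. "anthropic", "openai", "google"
        Model       string  // provider-specific model name
        Temperature float64 // sampling temperature for this role
        TopP        float64 // nucleus sampling cutoff
        MaxTokens   int     // output limit for this role
    }

    // ModelPack maps workflow roles to models, so different steps can use
    // different providers. This is how a single task can combine models
    // from Anthropic, OpenAI, and Google.
    type ModelPack struct {
        Name       string
        Planner    ModelRole
        Coder      ModelRole
        Summarizer ModelRole
    }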

I’d love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!

jtwaleson:
Nice! I tried it out when you launched last year but found it pretty expensive to use. I believe I spent $5 for half an hour of coding or so. Can you share what the typical costs are now, since the model prices have changed significantly?
danenania:
It's a bit hard to give "typical" costs because it's so dependent on how you use it. The project map size (which scales with overall project size) and the number/size of relevant files are the main drivers of cost, so working in large existing codebases will be a lot more expensive than generating a new app from scratch.

Taking Plandex's codebase as an example: it's certainly not huge, but it's getting to be decent-sized. I just ran a count and it's at about 200k lines (mostly Go), which translates to a project map of ~43k tokens. I have a task in progress right now to add a JSON config file for model settings and other project settings. To get to a pretty good initial version of this feature, I first did a fair amount of back-and-forth in 'chat mode' to pin down the details (maybe 10 or so prompts), and then an implementation phase where ~15 files were updated. The cost so far is a little under $10.
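
As a very rough back-of-the-envelope (assuming Sonnet-class pricing of about $3 per million input tokens): the ~43k-token project map alone works out to roughly 43,000 × $3 / 1,000,000 ≈ $0.13 each time it's sent in full. The rest of the cost comes from the relevant file contents, conversation history, and output tokens that accumulate across a multi-step task like this one.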

jtwaleson:
Thanks! Quite a bit more money than Cursor (probably better quality, as Cursor's context is limited) but still peanuts compared to hiring someone :)
dr_kiszonka:
Hi. Nice product!

Let's say I have a repo for an NLP project. One directory contains a few thousand text files. Can I tell Plandex to never ever index and access them? For my use case, I wish projects in this space always asked me before accessing anything. Claude recently decided to install seven Python packages and grabbed full terminal output following installation, which turned out pretty expensive (and useless).

danenania:
Hi, thanks! Yes, you could either:

- Add that directory to .gitignore (in a git repo) or to a .plandexignore file, which uses gitignore syntax (small example below).

- Switch to a mode where context is not loaded automatically and you choose the files yourself instead (more on this here: https://docs.plandex.ai/core-concepts/autonomy).
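
For the first option (the directory name here is just a placeholder for your corpus directory), a .plandexignore file at the project root with a line like this should keep everything under that directory from being indexed or loaded:

    # gitignore syntax: keep the NLP corpus out of Plandex's map and context
    text_corpus/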