
257 points | danenania | 2 comments

Hey HN! I’m Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here’s more of a tutorial-style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads-down work and is, in effect, a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models.

I believe it is now one of the best tools available for working on large tasks in real world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

A bit more on some of Plandex’s key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.
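As a rough sketch of that sandbox workflow (a hypothetical session for illustration; aside from \rewind, which comes up later in this thread, the command names here are my assumptions, not confirmed by the post):

```shell
# Hypothetical sandbox workflow; command names are assumptions.
plandex new                      # start a new plan
plandex tell "add input validation to the signup form"
plandex diff                     # review accumulated changes in the sandbox
plandex rewind                   # roll the sandbox back to an earlier step
plandex checkout alt-approach    # try an alternative approach on a branch
plandex apply                    # only now do changes land in the project
```

The key point is the last line: nothing touches the working tree until an explicit apply.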

- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.
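For example, model settings might be inspected and changed from the REPL along these lines (command names are assumptions for illustration, not taken from the post):

```shell
# Hypothetical REPL commands for model configuration; names are assumptions.
\models        # show the current models and settings for each role
\set-model     # interactively choose a model or a model pack
```

Since model changes are version controlled, a branch per model pack would let you compare the same task across configurations.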

- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.
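A minimal self-host sketch (the repository URL is from the post; the directory layout and startup script name are assumptions):

```shell
# Hypothetical 'local mode' setup; script name is an assumption.
git clone https://github.com/plandex-ai/plandex.git
cd plandex/app
./start_local.sh    # brings up the server and its database via Docker
```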

- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.

I’d love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!

killerstorm | No.43719255
I like the idea, but it did not quite work out of the box.

There was some issue with sign-in: it seems the PIN requested via the web doesn't work in the console (so the web page's suggestion to use the --pin option is misleading).

I tried the BYO plan since I already have an OpenRouter API key. But it seems the default model pack splits its API use between OpenRouter and OpenAI, and I ended up stuck with "o3-mini does not exist".

And my whole motivation was basically trying Gemini 2.5 Pro, but it seems like that requires some trial-and-error configuration. (The gemini-exp pack doesn't quite work right now.)

The difference between the FOSS version and the BYO plan is not clear: it seems the installation process is different, but is the benefit of the paid plan that it stores my stuff on a server? I'd really rather it didn't, TBH, so that has negative value for me.

danenania | No.43719819
Thanks for trying it!

Could you explain in a bit more detail what went wrong for you with sign-in and the pin? Did you get an error message?

On OpenRouter vs. OpenAI, see my other comment in this thread (https://news.ycombinator.com/item?id=43719681). I'll try to make this smoother.

On Gemini 2.5 Pro: the new paid 2.5 pro preview will be added soon, which will address this. The free OpenRouter 2.5 pro experimental model is hit or miss because it uses OpenRouter's quota with Google. So if it's getting used heavily by other OpenRouter users, it can end up being exhausted for all users.

On the cloud BYO plan, I'd say the main benefits are:

- Truly zero dependency (no need for Docker, docker-compose, or git).

- Easy to access your plans on multiple devices.

- File edits are significantly faster and cheaper, and a bit more reliable, thanks to a custom fast apply model.

- There are some foundations in place for organizations/teams, in case you might want to collaborate on a plan or share plans with others, but that's more of a 'coming soon' for now.

If you use the 'Integrated Models' option (rather than BYO), there are also some useful billing and spend management features.

But if you don't find any of those things valuable, then the FOSS could be the best choice for you.

killerstorm | No.43720525
When I used the `--pin` argument, I got an error message along the lines of "not found in the table".

I got it working by switching to the oss model pack and specifying Gemini 2.5 Pro on top. It also works with the anthropic pack.

But I'm quite disappointed with the UX: there are a lot of configuration options, but robustness is severely lacking.

Oddly, in the default mode out of the box, it does not want to discuss the plan with me but just jumps to implementation.

And when it's done writing code, it aggressively wants me to decide whether to apply: there's no option to discuss the changes, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...

danenania | No.43723039
Thanks, I appreciate the feedback.

> Oddly, in the default mode out of box it does not want to discuss the plan with me but just jumps to implementation.

It should be starting you out in "chat mode". Do you mean that you're prompted to begin implementation at the end of the chat response? You can just choose the 'no' option if that's the case and keep chatting.

Once you're in 'tell mode', you can always switch back to chat mode with the '\chat' command if you don't want anything to be implemented.

> And when it's done writing code it aggressively wants me to decide whether to apply -- there's no option to discuss changes, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...

This is just a menu to surface the commands you're most likely to need after a set of changes is finished. If you press 'enter', you'll return to the REPL prompt, where you can discuss the changes (switch back to chat mode with \chat if you only want to discuss rather than iterate) or use commands like \rewind as needed.

killerstorm | No.43724329
Here's what happened:

  1. It started formulating the plan.
  2. Got an error from the provider (it seems the model set sometimes randomly resets to default?!?)
  3. After I switched to a different provider, I wanted it to continue planning, so I used the \continue command.
  4. But when it got the \continue command, it started writing code without asking anything!
  5. In the end it was still in chat mode. I never switched to tell mode; I just wanted it to keep planning.
Here's an excerpt: https://gist.github.com/killerstorm/ad8afa19b2f55588eb317138...

It went from entry 3 "Made Plan" to entry 4 and so on without any input from my end.

I could not reproduce the second issue this time: I didn't get the same menu and it was more chill.

danenania | No.43729025
I see, it sounds like \continue is the issue: this command is designed to continue with implementation rather than with the chat, so it switches you into 'tell mode'. I'll try to make that clearer, or make it handle chat mode better. I can definitely see how it would be confusing.

The model pack shouldn't be resetting, but a potential gotcha is that model settings are version controlled: if you rewind to a point in the plan before the model settings were changed, you'll undo those changes. Any chance that's what happened? It's a bit of a tradeoff, since having those settings version controlled can also be useful in various ways.

This feedback is very valuable, so thanks again!