
257 points by danenania | 4 comments

Hey HN! I’m Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here’s a more tutorial-style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads-down work and is, in effect, a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models.

I believe it is now one of the best tools available for working on large tasks in real world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

A bit more on some of Plandex’s key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.

- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.
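For reference, the self-hosted ‘local mode’ mentioned above might look roughly like this. This is a hedged sketch, not the project’s documented steps: the directory layout and the `start_local.sh` script name are assumptions (check the repo’s README for the real commands); only the repo URL and the `plandex login` command are confirmed elsewhere in this thread.

```shell
# Hypothetical local-mode setup; directory and script names are assumptions.
git clone https://github.com/plandex-ai/plandex.git
cd plandex/app
./start_local.sh    # assumed wrapper around `docker compose up`
plandex login       # choose the localhost option when prompted
```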

I’d love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!

killerstorm | No.43719255
I like the idea, but it did not quite work out of the box.

There was some issue with sign-in: it seems the PIN requested via the web does not work in the console (so the web page suggesting the --pin option is misleading).

I tried the BYO plan as I already have an OpenRouter API key. But it seems the default model pack splits its API use between OpenRouter and OpenAI, and I ended up stuck with "o3-mini does not exist".

And my whole motivation was basically trying Gemini 2.5 Pro, but it seems like that requires some trial-and-error configuration. (The gemini-exp pack doesn't quite work right now.)

The difference between the FOSS and BYO plans is not clear: it seems the installation process is different, but is the benefit of the paid plan that it would store my stuff on a server? I'd really rather it not, TBH, so that has negative value for me.

replies(3): >>43719459, >>43719519, >>43719819
throwup238 | No.43719519
The installation process for the FOSS version includes both the CLI (which is also used for the cloud version) and a docker-compose file for the server components. Last time I tried it (v1) it was quite clunky, but yesterday with v2 it was quite a bit easier, with an explicit localhost option when running plandex login.
replies(1): >>43719701
danenania | No.43719701
I'm glad to hear it went smoothly for you. It was definitely clunky in v1.
replies(1): >>43720251
throwup238 | No.43720251
I would get rid of the email validation code for localhost, though. That remains the biggest annoyance when running it locally as a single user. I would also add "$@" to the docker-compose call in the bash start script so users can start it in detached mode.
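The suggestion above is standard shell argument forwarding: a quoted "$@" expands to the caller's arguments, one word each, so extra flags like -d pass straight through to docker compose. A minimal sketch (the start script's actual contents are an assumption; the demo uses a stand-in function since the real target would be the compose call):

```shell
#!/bin/sh
# Hypothetical fix to the start script: the compose call becomes
#   docker compose up --build "$@"
# so `./start.sh -d` runs the stack detached and `./start.sh` is unchanged.
#
# Quoted "$@" preserves each caller argument as its own word; a quick demo:
forward() { printf '<%s>' "$@"; }
wrapper() { forward up --build "$@"; }   # stand-in for the compose call
wrapper -d --remove-orphans              # prints <up><--build><-d><--remove-orphans>
```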
replies(1): >>43720387
danenania | No.43720387
It should already be skipping the email validation step in local mode. Is it showing up for you?

I’ll look into the detached mode, thanks!

replies(1): >>43720520
throwup238 | No.43720520
Yes, it showed up for me; luckily I had the logs open and remembered that was the solution in v1 (it wasn’t documented back then, IIRC). I did a git pull in the same directory I ran v1 in, so maybe there’s some sort of leftover config or something?
replies(1): >>43723058
danenania | No.43723058
Email pins are disabled based on a LOCAL_MODE environment variable, which is set in the docker-compose config. I'll take a look.
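For readers following along, the wiring described above would look something like this in a compose file. This is an illustrative fragment only: the service name and value are assumptions; only the LOCAL_MODE variable name comes from the comment above.

```yaml
# Illustrative docker-compose fragment; service name and value are assumptions.
services:
  plandex-server:
    environment:
      - LOCAL_MODE=1   # server skips the email PIN step when this is set
```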