
170 points | anandchowdhary | 2 comments

Continuous Claude is a CLI wrapper I made that runs Claude Code in an iterative loop with persistent context, automatically driving a PR-based workflow. Each iteration creates a branch, applies a focused code change, generates a commit, opens a PR via GitHub's CLI, waits for required checks and reviews, merges if green, and records state into a shared notes file.

This avoids the typical stateless one-shot pattern of current coding agents and enables multi-step changes without losing intermediate reasoning, test failures, or partial progress.
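Not the tool's actual implementation, but a minimal Python sketch of what one iteration of such a loop could look like, assuming Claude Code's non-interactive -p (print) mode and the standard gh subcommands (pr create, pr checks, pr merge); the branch naming, prompt format, and notes-file path are illustrative:

```python
import datetime
import pathlib
import subprocess

NOTES = pathlib.Path("continuous-notes.md")  # shared state carried between iterations


def run(*cmd: str) -> str:
    """Run a command, fail loudly, and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


def iteration(task: str, i: int) -> None:
    branch = f"continuous/{i:04d}"
    run("git", "checkout", "main")
    run("git", "pull")
    run("git", "checkout", "-b", branch)

    # One focused change per iteration: the task plus accumulated notes
    # from earlier iterations are passed to Claude Code in print mode.
    notes = NOTES.read_text() if NOTES.exists() else ""
    run("claude", "-p", f"{task}\n\nNotes from previous iterations:\n{notes}")

    run("git", "add", "-A")
    run("git", "commit", "-m", f"Iteration {i}: {task}")
    run("git", "push", "-u", "origin", branch)

    # Open a PR, wait for required checks, and merge only if they pass
    # (gh pr checks --watch exits non-zero on failure, aborting the merge).
    run("gh", "pr", "create", "--fill")
    run("gh", "pr", "checks", "--watch")
    run("gh", "pr", "merge", "--squash", "--delete-branch")

    # Record what happened so the next iteration starts with that context.
    with NOTES.open("a") as f:
        f.write(f"- {datetime.date.today()}: merged {branch} ({task})\n")
```

Per the description above, the actual tool also carries test failures and partial progress forward in the notes file rather than stopping on a red check, which this sketch omits.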

The tool is useful for tasks that require many small, serial modifications: increasing test coverage, large refactors, dependency upgrades guided by release notes, or framework migrations.

Blog post about this: https://anandchowdhary.com/blog/2025/running-claude-code-in-...

apapalns:
> codebase with hundreds of thousands of lines of code and go from 0% to 80%+ coverage in the next few weeks

I had a coworker do this with Windsurf + manual driving a while back and it was an absolute mess. Awful tests that were unmaintainable and next to useless (too much mocking, testing that the code “works the way it was written”, etc.). Writing a useful test suite is one of the most important parts of a codebase and requires careful, deliberate thought. Without a deep understanding of the business logic (which takes time and is often lost after the initial devs move on) you’re not gonna get great tests.

To be fair to AI, we hired a “consultant” who also got us this same level of testing, so it’s not like there is a high bar out there. It’s just not the kind of problem you can solve in 2 weeks.

LASR:
There is no free lunch. The amount of prompt writing it takes to give the LLM enough context about your codebase is comparable to the effort of writing the tests yourself.

Code assistance tools might speed up your workflow by maybe 50% or even 100%, but that's not the geometric scaling commonly touted as the benefit of autonomous agentic AI.

And this is not a model-capability issue that goes away with newer generations; it's a human-input problem.

anandchowdhary:
I don't know if this is true.

For example, you can spend a few hours writing a really good set of initial tests that cover 10% of your codebase, and another few hours with an AGENTS.md that gives the LLM enough context about the rest of the codebase. But after that, there's a free* lunch because the agent can write all the other tests for you using that initial set and the context.

This also works with "here's how I created the Slack API integration, please create the Teams integration now" because it has enough to learn from, so that's free* too. This kind of pattern recognition means that prompting is O(1) but the model can do O(n) from that (I know, terrible analogy).

*Also literally becomes free as the cost of tokens approaches zero
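A rough sketch of that "seed once, then iterate" claim (not the commenter's actual setup): a driver loop that re-prompts the agent until coverage reaches a target, assuming coverage.py's JSON report format and Claude Code's non-interactive -p flag; the target, prompt wording, and tests/ path are placeholders:

```python
import json
import subprocess

TARGET = 80.0  # desired total line coverage, in percent


def coverage_percent() -> float:
    """Run the suite under coverage.py and return the total percent covered."""
    # check=False: failing tests shouldn't abort the loop, just lower coverage
    subprocess.run(["coverage", "run", "-m", "pytest", "-q"], check=False)
    subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)
    with open("coverage.json") as f:
        return json.load(f)["totals"]["percent_covered"]


while (pct := coverage_percent()) < TARGET:
    # The prompt is the fixed "O(1)" investment; AGENTS.md and the seed tests
    # supply the codebase context the model reuses on every pass.
    prompt = (
        f"Coverage is {pct:.1f}%. Following the conventions in AGENTS.md and the "
        "existing tests in tests/ as examples, add tests for one uncovered module."
    )
    subprocess.run(["claude", "-p", prompt], check=True)
```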

jaredsohn:
A neat part of this is that it mimics how people get onboarded onto codebases. People usually aren't figuring out how to write tests from scratch; they look at the current best practices for similar functionality in the codebase and start there. And then, as they continue to work there, they try to influence new best practices.