170 points by anandchowdhary | 1 comment

Continuous Claude is a CLI wrapper I made that runs Claude Code in an iterative loop with persistent context, automatically driving a PR-based workflow. Each iteration creates a branch, applies a focused code change, generates a commit, opens a PR via GitHub's CLI, waits for required checks and reviews, merges if green, and records state into a shared notes file.
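
Roughly, one iteration of the loop looks something like the sketch below. This is a simplified illustration, not the actual implementation: the branch naming, the prompt shape, and the NOTES.md handling are stand-ins.

```python
import os
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def iteration(i: int, task: str, notes_file: str = "NOTES.md") -> None:
    branch = f"continuous-claude/iter-{i}"  # illustrative branch naming
    run("git", "checkout", "-b", branch)

    # One focused change per iteration, with accumulated notes fed back in.
    notes = open(notes_file).read() if os.path.exists(notes_file) else ""
    run("claude", "-p", f"{task}\n\nNotes from previous iterations:\n{notes}")

    run("git", "add", "-A")
    run("git", "commit", "-m", f"iteration {i}: {task}")
    run("git", "push", "-u", "origin", branch)

    # Open a PR, wait for required checks, and merge only if they pass
    # (each gh command raises via check=True if it fails).
    run("gh", "pr", "create", "--fill")
    run("gh", "pr", "checks", "--watch")
    run("gh", "pr", "merge", "--squash", "--delete-branch")
```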

This avoids the typical stateless one-shot pattern of current coding agents and enables multi-step changes without losing intermediate reasoning, test failures, or partial progress.
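
The shared notes file is what carries that state between runs. Conceptually it's just an append-only log that the next iteration reads back in; the format here is invented for illustration:

```python
from datetime import datetime, timezone

def record_outcome(summary: str, notes_file: str = "NOTES.md") -> None:
    # Append a timestamped entry that the next iteration's prompt can include.
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with open(notes_file, "a") as f:
        f.write(f"\n## {stamp}\n{summary}\n")

# e.g. record_outcome("Merged the retry-logic change; two flaky scheduler tests remain")
```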

The tool is useful for tasks that require many small, serial modifications: increasing test coverage, large refactors, dependency upgrades guided by release notes, or framework migrations.

Blog post about this: https://anandchowdhary.com/blog/2025/running-claude-code-in-...

apapalns ◴[] No.45957654[source]
> codebase with hundreds of thousands of lines of code and go from 0% to 80%+ coverage in the next few weeks

I had a coworker do this with Windsurf plus manual driving a while back, and it was an absolute mess. Awful tests that were unmaintainable and next to useless (too much mocking, testing that the code “works the way it was written”, etc.). Writing a useful test suite is one of the most important parts of a codebase and requires careful, deliberate thought. Without a deep understanding of the business logic (which takes time and is often lost after the initial devs move on), you’re not gonna get great tests.

To be fair to AI, we hired a “consultant” who also got us this same level of testing, so it’s not like there is a high bar out there. It’s just not the kind of problem you can solve in 2 weeks.

replies(8): >>45957997 #>>45958225 #>>45958365 #>>45958599 #>>45958634 #>>45959634 #>>45968154 #>>45969561 #
1. PunchyHamster ◴[] No.45958634[source]
Clean-room design, where the prompt is "this is the function's interface, it does this and that, write tests for that function to pass", can generally get you pretty decent results.

But "throw vague prompt at AI direction" does about as well as doing same thing with an intern.