
170 points | anandchowdhary | 1 comment

Continuous Claude is a CLI wrapper I made that runs Claude Code in an iterative loop with persistent context, automatically driving a PR-based workflow. Each iteration creates a branch, applies a focused code change, generates a commit, opens a PR via GitHub's CLI, waits for required checks and reviews, merges if green, and records state into a shared notes file.
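The iteration described above can be sketched roughly as follows. The `claude -p` (non-interactive print mode) and `gh pr` subcommands are real CLI invocations, but the branch naming, notes-file path, and prompt wording here are illustrative assumptions, not the tool's actual implementation:

```python
import subprocess
from pathlib import Path

NOTES_FILE = Path("notes.md")  # shared state carried between iterations


def run(*args: str) -> None:
    """Run a command, raising if it fails (so a red check stops the loop)."""
    subprocess.run(list(args), check=True)


def run_iteration(n: int) -> None:
    """One Continuous Claude-style iteration: branch, change, PR, merge."""
    branch = f"cc/iteration-{n}"
    run("git", "checkout", "-b", branch)

    # Feed the prior notes back in so intermediate reasoning isn't lost.
    notes = NOTES_FILE.read_text() if NOTES_FILE.exists() else ""
    run("claude", "-p", f"Continue the task. Notes so far:\n{notes}")

    run("git", "add", "-A")
    run("git", "commit", "-m", f"cc: iteration {n}")
    run("git", "push", "-u", "origin", branch)

    run("gh", "pr", "create", "--fill")   # open a PR for this change
    run("gh", "pr", "checks", "--watch")  # wait for required checks
    run("gh", "pr", "merge", "--squash", "--delete-branch")  # merge if green
```

The `check=True` is what makes the workflow self-correcting: a failed check surfaces as an exception, and the failure plus any partial progress can be written into the notes file for the next iteration to pick up.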

This avoids the typical stateless one-shot pattern of current coding agents and enables multi-step changes without losing intermediate reasoning, test failures, or partial progress.

The tool is useful for tasks that require many small, serial modifications: increasing test coverage, large refactors, dependency upgrades guided by release notes, or framework migrations.

Blog post about this: https://anandchowdhary.com/blog/2025/running-claude-code-in-...

apapalns | No.45957654
> codebase with hundreds of thousands of lines of code and go from 0% to 80%+ coverage in the next few weeks

I had a coworker do this with Windsurf + manual driving a while back and it was an absolute mess. Awful tests that were unmaintainable and next to useless (too much mocking, testing that the code “works the way it was written”, etc.). Writing a useful test suite is one of the most important parts of a codebase and requires careful, deliberate thought. Without deep understanding of the business logic (which takes time and is often lost after the initial devs move on) you’re not gonna get great tests.
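The over-mocking failure mode can be made concrete. A hypothetical sketch (function and test names invented for illustration): the test stubs the only collaborator, then asserts the function called the stub exactly the way the implementation happens to be written, so it re-states the code rather than checking business behaviour:

```python
from unittest import mock


# Hypothetical code under test.
def fetch_total(client, user_id):
    orders = client.get_orders(user_id)
    return sum(o["amount"] for o in orders)


# An over-mocked test: the assertion on the mock mirrors the
# implementation line for line. Any refactor (batching, caching,
# a different client method) breaks it, while any logic bug hidden
# behind the mocked boundary slips through.
def test_fetch_total_overmocked():
    client = mock.Mock()
    client.get_orders.return_value = [{"amount": 5}, {"amount": 7}]
    assert fetch_total(client, 1) == 12
    client.get_orders.assert_called_once_with(1)
```

A test against a real (or in-memory) client exercising an actual business rule would survive refactors; this one only certifies that the code is the code.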

To be fair to AI, we hired a “consultant” that also got us this same level of testing so it’s not like there is a high bar out there. It’s just not the kind of problem you can solve in 2 weeks.

replies(8): >>45957997 #>>45958225 #>>45958365 #>>45958599 #>>45958634 #>>45959634 #>>45968154 #>>45969561 #
LASR | No.45958599
There is no free lunch. The amount of prompt writing needed to give the LLM enough context about your codebase is comparable to the effort of writing the tests yourself.

Code assistance tools might speed up your workflow by maybe 50% or even 100%, but it's not the geometric scaling that is commonly touted as the benefits of autonomous agentic AI.

And this is not a model-capability issue that goes away with newer generations; it's a human-input problem.

replies(2): >>45958912 #>>45959635 #
nl | No.45959635
It depends on the problem domain.

I recently had a bunch of Claude credits, so I got it to write a language implementation for me. It probably took 4 hours of my time, but judging by other implementations online, I'd say the average implementation time is hundreds of hours.

The fact that the model already knew the language, and that there were existing tests I could reuse, made a radical difference.