
75 points by throwaway-ai-qs | 1 comment

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months alone, I've watched at least 8 companies embrace AI generation for coding, testing, and code reviews. Honestly, the best suggestions I've seen came from linters in CI and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is it like this everywhere? I think I'm finally ready to get off the ride.

1. cadamsdotcom
Make your agent do TDD.

Claude struggles with writing a test that’s meant to fail, but it can be coaxed into doing it on the second or third attempt. Luckily, it doesn’t struggle when I insist that the failure be for the right reason (as opposed to failing because of a setup issue or a problem elsewhere in the code).

When doing TDD with Claude Code, I lean heavily on asking the agent two things: “can we watch it fail?” and “does it fail for the right reason?” These questions are generic enough to sleepwalk through building most features and fixing all bugs. Yes, I said all bugs.
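To make that concrete, here is a minimal pytest sketch of what a "right reason" failure looks like (the slugify function and module path are made up for illustration, not taken from the parent comment). The stub exists from the first run, so the test fails on the assertion itself rather than on an ImportError, which would be a setup problem:

    # myapp/text.py -- deliberate stub; returns wrong output on purpose (red phase)
    def slugify(title: str) -> str:
        return ""  # not implemented yet

    # test_text.py
    from myapp.text import slugify

    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

Running pytest here is the "watch it fail" step: the output should show an AssertionError comparing "" to "hello-world". If you instead see an ImportError or a fixture error, the test is failing for the wrong reason and proves nothing about the behaviour you're about to implement.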

Reviewing the code is very pleasant, because you get both the tests and the production code, and you can rely on the symmetry between them to understand the code’s intent and confirm that it does what it says.

In my experience over multiple months of greenfield and brownfield work, Claude doing TDD produces code with 100% of the quality and clarity I’d have achieved had I built the thing myself, and it does so 100% of the time. A big part of that is that TDD compartmentalizes each task, making it easy to avoid any single task carrying too much complexity.