
97 points jay-baleine | 3 comments
1. mehdibl ◴[] No.45149306[source]
The most important things you always need to do:

1. Plan, review the plan.

2. Review the code while changes are in progress, before the agent even finishes, and fix drift as soon as you see it.

3. Then review again.

4. Add tests & use all your quality tools; don't rely 100% on the LLM (see the gate sketch at the end of this comment).

5. Don't trust an LLM's review of its own code, as it's very biased.

These are basic steps that you can adapt as you like.

Avoid a FULLY AUTOMATED AGENT pipeline where you review the code only at the end, unless it's a very small task.
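
To make step 4 concrete, here is a minimal sketch of a quality gate you run yourself instead of trusting the LLM's own verdict. The specific tools (pytest, ruff, mypy) are my assumptions; substitute whatever your project already uses:

    # quality_gate.py -- run the project's own test and lint tooling on every
    # LLM-produced change; the tool list below is an assumption, not a standard.
    import subprocess
    import sys

    CHECKS = [
        ["pytest", "-q"],        # test suite
        ["ruff", "check", "."],  # linter
        ["mypy", "."],           # type checker
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("$", " ".join(cmd))
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print("FAILED:", " ".join(cmd), "-- fix before accepting the change")
                return result.returncode
        print("All checks passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())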

replies(2): >>45149757 #>>45158720 #
2. CuriouslyC ◴[] No.45149757[source]
LLMs can review their own code, but you must give them a fresh context (so they don't know they wrote it) and instruct them to be very strict. Also, some models are better at code review than others: Gemini/GPT5 are very good at it as long as you give them sufficient codebase context; Claude is not so great here.
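
A minimal sketch of what a fresh-context review can look like, assuming the OpenAI Python SDK; the model name, prompt wording, and git invocation are illustrative choices, not part of the parent's claim:

    # fresh_review.py -- send the diff to a brand-new conversation, so the
    # model has no memory of having written the code, and instruct it to be
    # strict. Assumes `pip install openai` and OPENAI_API_KEY in the env.
    import subprocess
    from openai import OpenAI

    STRICT_REVIEWER = (
        "You are a strict code reviewer. You did not write this code. "
        "Hunt for bugs, security issues, and leftover/dead code. "
        "Do not be polite; list every concrete problem you find."
    )

    def review_diff(diff: str) -> str:
        client = OpenAI()  # fresh client, fresh conversation
        response = client.chat.completions.create(
            model="gpt-5",  # placeholder; use whichever model reviews best for you
            messages=[
                {"role": "system", "content": STRICT_REVIEWER},
                {"role": "user", "content": "Review this diff:\n\n" + diff},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        diff = subprocess.run(["git", "diff", "HEAD~1"],
                              capture_output=True, text=True).stdout
        print(review_diff(diff))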
3. lukaslalinsky ◴[] No.45158720[source]
Even before you plan, you need to feed it enough relevant context to make sure the plan is not based on hallucinated assumptions about the system. The best approach I've found:

1. Make it read the relevant pieces of code and explain them to me (see the sketch after this list)

2. Explain my problem and ask it to come up with a plan; iterate if needed

3. Allow it to execute the plan, watch it as it works, and interrupt and correct when needed

4. Have it do a code review using a sub-agent, focusing on correctness, leftover code, etc.

5. Then I review it myself
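
A minimal sketch of step 1 under the same assumptions as above (OpenAI Python SDK; the file paths and prompt are hypothetical): make the model read the real code and explain it back before any planning starts, so the plan is grounded in the actual system:

    # explain_first.py -- step 1: feed the model the relevant source files and
    # have it explain them before planning, so later steps aren't built on
    # hallucinated assumptions about the codebase.
    from pathlib import Path
    from openai import OpenAI

    def explain_code(paths: list[str]) -> str:
        sources = "\n\n".join(
            "### " + p + "\n" + Path(p).read_text() for p in paths
        )
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-5",  # placeholder model name
            messages=[{
                "role": "user",
                "content": "Read the following code and explain what it does, "
                           "including any invariants or gotchas a change would "
                           "need to respect:\n\n" + sources,
            }],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Hypothetical file list; point this at the code your change touches.
        print(explain_code(["app/models.py", "app/services/billing.py"]))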