
192 points by imasl42 | 1 comment
brap No.45312062
My process is basically

1. Give it requirements

2. Tell it to ask me clarifying questions

3. When no more questions, ask it to explain the requirements back to me in a formal PRD

4. I criticize it

5. Tell it to come up with 2 alternative high level designs

6. I pick one and criticize it

7. Tell it to come up with 2 alternative detailed TODO lists

8. I pick one and criticize it

9. Tell it to come up with 2 alternative implementations of one of the TODOs

10. I pick one and criticize it

11. Back to 9
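
Steps 1–4 lend themselves to scripting. Here's a minimal sketch of that question-then-PRD loop in Python; the chat() helper is a placeholder for whatever LLM call you use, not any particular vendor's API:

    # Sketch of steps 1-4: feed requirements, loop on clarifying
    # questions, then ask for them back as a formal PRD.
    def chat(messages: list[dict]) -> str:
        raise NotImplementedError("wire this to your LLM provider")

    def requirements_to_prd(requirements: str) -> str:
        msgs = [{"role": "user", "content": requirements +
                 "\n\nBefore doing anything else, ask me clarifying questions."}]
        while True:
            reply = chat(msgs)
            msgs.append({"role": "assistant", "content": reply})
            answer = input(reply + "\n(empty line = no more questions) > ")
            if not answer.strip():
                break
            msgs.append({"role": "user", "content": answer})
        msgs.append({"role": "user", "content":
                     "No more questions. Explain the requirements back "
                     "to me as a formal PRD."})
        return chat(msgs)

Steps 5–11 are the same pattern repeated: ask for two alternatives, append your criticism as a user message, go again.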

I usually “snapshot” outputs along the way and return to them to reduce useless context.
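
The comment doesn't say how the snapshotting works; one simple reading is to record the message-history length at a known-good point and truncate back to it instead of dragging dead ends along. A sketch (all names here are illustrative):

    class Session:
        """Chat history with named checkpoints."""
        def __init__(self) -> None:
            self.messages: list[dict] = []
            self.snapshots: dict[str, int] = {}

        def snapshot(self, name: str) -> None:
            # Remember how long the history was at this point.
            self.snapshots[name] = len(self.messages)

        def rewind(self, name: str) -> None:
            # Drop everything said since the snapshot.
            self.messages = self.messages[: self.snapshots[name]]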

This is what produces the best results for me. They aren't spectacular, but at the very least they serve as a baseline for my own implementation.

It’s very time-consuming, and 80% of the time I end up wondering whether it would’ve been quicker to just do it all myself right from the start.

jwrallie No.45312425
I think I’m working at a lower level, but my usual flow is:

- I start by building or refactoring the code structure myself, creating the basic interfaces, or I skip to the next step when they already exist. I use LLMs only as autocomplete here.

- I write down the requirements and tell the agent which files are the entry points for the changes.

- I do not tell the agent my final objective, only the next step that gets me closer to it, one step at a time.

- I watch carefully and interrupt the agent as soon as I see something going wrong. At that point I either start over, if my requirement assumptions were wrong, or correct the agent’s course of action if the agent was wrong.
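
Mechanically, that flow amounts to keeping the plan on your side and only ever handing the agent the current step. A sketch, where run_agent() stands in for whatever agent invocation you use and the step texts and file names are purely illustrative:

    # The full plan lives here; no single prompt reveals the final objective.
    steps = [
        "Extract the parsing logic in src/config.py into a Parser class.",
        "Add unit tests covering Parser's existing behavior.",
        # ...
    ]

    def run_agent(instruction: str, entry_files: list[str]) -> None:
        raise NotImplementedError("wire this to your coding agent")

    for step in steps:
        run_agent(step, entry_files=["src/config.py"])
        # Inspect the result; stop here to correct course or start over.
        if input(f"'{step}' looks right? [y/N] ").strip().lower() != "y":
            break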

Most of the issues I’ve had in the past came from writing down a broad objective that requires too many steps right at the beginning. Agents cannot correctly judge when they have finished something.
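
One way to work around that is to never take the agent’s word for “done” and gate each step on an external check instead. A minimal sketch, assuming the project has a pytest test suite:

    import subprocess

    def step_is_done() -> bool:
        # The test suite, not the agent, decides whether the step is finished.
        return subprocess.run(["pytest", "-q"]).returncode == 0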