
548 points kmelve | 4 comments
swframe2 No.45108930
Preventing garbage just requires that you take into account the cognitive limits of the agent. For example ...

1) Don't ask for a large / complex change. Ask for a plan, then have the model implement the plan in small steps, testing each step before starting the next.

2) For really complex steps, ask the model to write code to visualize the problem and solution.

3) If the model fails on a given step, ask it to add logging to the code, save the logs, run the tests, and then review the logs to determine what went wrong. Repeat until the step works well.

4) Ask the model to look at your existing code and determine how it was designed before implementing a task. Sometimes the model will put all of the changes in one file even though your code has a cleaner design the model isn't taking into account.
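Steps 1 and 3 above can be sketched as one driver loop: ask for a plan, implement one step at a time, and on failure feed the logs back to the model. This is a minimal sketch, not a real implementation; `ask_model()` and `run_tests_with_logs()` are made-up stubs standing in for whatever agent API and test runner you actually use.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stub for an agent call; replace with your LLM client."""
    return f"response to: {prompt}"

def run_tests_with_logs() -> tuple[bool, str]:
    """Hypothetical stub: run the step's tests, return (passed, captured logs)."""
    return True, "all assertions passed"

def implement_plan(task: str, steps: list[str], max_fix_attempts: int = 3) -> int:
    """Implement a plan step by step; each step must pass tests before the next."""
    for i, step in enumerate(steps):
        ask_model(f"Implement only this step of '{task}': {step}")
        for _ in range(max_fix_attempts):
            passed, logs = run_tests_with_logs()
            if passed:
                break
            # Step 3: hand the saved logs back to the model and ask for a fix.
            ask_model(f"Tests failed on '{step}'. Logs:\n{logs}\nFix and retry.")
        else:
            return i  # give up: report how many steps completed
    return len(steps)

done = implement_plan("add a signup form",
                      ["add the data model", "wire up the endpoint", "add validation"])
print(done)  # number of steps that passed their tests
```

The point of the structure is that the model never sees the whole task at once, and a failing step blocks progress instead of compounding.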

I've seen other people blog about their tips and tricks. I do still see garbage results, but nowhere near a 95% rate.

1. adastra22 No.45109631
This is why the jobs market for new grads and early career folks has dried up. A seasoned developer knows that this is how you manage work in general, and just treats the AI like they would a junior developer—and gets good results.
2. CuriouslyC No.45109725
Why bother handing stuff to a junior when an agent will do it faster while asking fewer questions? Even if the first draft of the code isn't amazing, you can quality-gate with an LLM reviewer that has been instructed to be brutal, then do a manual pass once the code gets past the LLM reviewer.
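That quality gate might look something like the sketch below. `review()` is a hypothetical stub for an LLM call, and `REVIEWER_PROMPT` is an invented example of the "be brutal" instruction; only the gating logic is the point.

```python
REVIEWER_PROMPT = (
    "Review this diff. Be brutal: reject anything with unclear naming, "
    "missing tests, or unhandled errors. Reply 'APPROVE' or 'REJECT: <reason>'."
)

def review(prompt: str, diff: str) -> str:
    """Hypothetical stub for an LLM review call; a real one sends prompt + diff."""
    return "APPROVE" if "tests" in diff else "REJECT: missing tests or coverage"

def quality_gate(diff: str) -> bool:
    """Only diffs the LLM reviewer approves move on to human review."""
    verdict = review(REVIEWER_PROMPT, diff)
    return verdict.startswith("APPROVE")

print(quality_gate("adds feature with tests"))    # approved -> manual pass next
print(quality_gate("adds feature, no coverage"))  # rejected -> back to the agent
```

Parsing a constrained verdict string ("APPROVE"/"REJECT: ...") keeps the gate mechanical instead of trying to interpret free-form review prose.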
3. LtWorf No.45110061
Because juniors learn while LLMs don't; you must explain the same thing over and over, forever.
4. adastra22 No.45110240
If you are explaining things more than once, you are doing it wrong. Which is not on you, as the tools currently suck big time. But it is quite possible to have LLM agents "learn" by intelligently matching context (including historical lessons learned) to the conversation.
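A minimal sketch of that kind of context matching, assuming nothing beyond a keyword lookup: record each lesson once, then retrieve the relevant ones into every new prompt. `LESSONS` and `build_prompt` are invented names, and real setups would use embeddings or an agent memory tool rather than substring matching.

```python
# Lessons recorded once, e.g. after correcting the agent the first time.
LESSONS = [
    ("database", "Use the repository layer; never query the ORM from views."),
    ("frontend", "Shared components live in src/components, one per file."),
]

def build_prompt(task: str) -> str:
    """Prepend previously recorded lessons that match the task."""
    relevant = [note for key, note in LESSONS if key in task.lower()]
    context = "\n".join(f"Lesson: {n}" for n in relevant)
    return f"{context}\nTask: {task}".strip()

print(build_prompt("Add a database migration"))
```

The effect is that the correction is made once, to the lesson store, instead of once per conversation.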