
548 points kmelve | 1 comment
swframe2 ◴[] No.45108930[source]
Preventing garbage just requires that you take into account the cognitive limits of the agent. For example ...

1) Don't ask for a large or complex change. Ask for a plan, then have the model implement the plan in small steps, testing each step before starting the next.

2) For really complex steps, ask the model to write code that visualizes the problem and the solution.

3) If the model fails on a given step, ask it to add logging to the code, save the logs, run the tests, and then review the logs to determine what went wrong. Repeat this until the step works well.

4) Ask the model to look at your existing code and determine how it was designed before implementing a task. Sometimes the model will put all of the changes in one file even though your code has a cleaner design that it isn't taking into account.
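The loop in points 1 and 3 can be sketched as a small driver. This is a hypothetical illustration, not anyone's actual tooling: `ask_model` and `run_tests` are stand-ins for whatever LLM API and test runner you use, stubbed here so the control flow itself is runnable.

```python
def ask_model(prompt: str) -> str:
    # Stub: a real version would call your model of choice.
    if prompt.startswith("PLAN"):
        return "step 1: parse input\nstep 2: transform\nstep 3: write output"
    return "ok"

def run_tests() -> bool:
    # Stub: a real version would run the project's test suite.
    return True

def implement_in_small_steps(task: str, max_retries: int = 3) -> list[str]:
    """Ask for a plan, then implement and test one small step at a time."""
    plan = ask_model(f"PLAN: break '{task}' into small steps")
    completed = []
    for step in plan.splitlines():
        for _ in range(max_retries):
            ask_model(f"IMPLEMENT: {step}")
            if run_tests():
                completed.append(step)
                break
            # Point 3: on failure, have the model add logging,
            # rerun the tests, and review the logs before retrying.
            ask_model(f"DEBUG: add logging for '{step}', rerun tests, review logs")
        else:
            raise RuntimeError(f"step never passed tests: {step}")
    return completed

steps = implement_in_small_steps("refactor config loader")
print(len(steps))  # → 3 (one entry per completed plan step)
```

The point of the structure is that the model never holds more than one small step in context at a time, and a failing step loops on log review instead of piling more changes on top.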

I've seen other people blog about their tips and tricks. I do still see garbage results, but nowhere near 95% of the time.

replies(20): >>45109085 #>>45109229 #>>45109255 #>>45109297 #>>45109350 #>>45109631 #>>45109684 #>>45109710 #>>45109743 #>>45109822 #>>45109969 #>>45110014 #>>45110639 #>>45110707 #>>45110868 #>>45111654 #>>45112029 #>>45112178 #>>45112219 #>>45112752 #
jason_zig ◴[] No.45109085[source]
I've seen people post this same advice, and I agree with you that it works, but you would think they would absorb this common strategy and integrate it into the underlying product by now...
replies(3): >>45109237 #>>45109244 #>>45109861 #
noosphr ◴[] No.45109244[source]
The people who build the models don't understand how to use the models. It's like asking people who design CPUs to build data-centers.

I've interviewed with three tier-one AI labs, and _no-one_ I talked to had any idea where the business value of their models comes from.

Meanwhile, Chinese labs are releasing open source models that do what you need. At this point I've built local agentic tools that are better than anything Claude and OAI have as paid offerings, including the $2,000 tier.

Of course, they cost anywhere from a few dollars to a few hundred dollars per query, so until hardware gets better they will stay happily behind corporate moats, used by the people blessed to burn money like paper.

replies(2): >>45109720 #>>45110581 #
criemen ◴[] No.45110581[source]
> The people who build the models don't understand how to use the models. It's like asking people who design CPUs to build data-centers.

This doesn't match the sentiment on Hacker News and elsewhere that Claude Code is the superior agentic coding tool, given that it's developed by one of the AI labs rather than a developer-tool company.

replies(1): >>45110787 #
noosphr ◴[] No.45110787[source]
Claude Code is baby's first agentic tool.

You don't see better ones from code-tooling companies because the economics don't work out. No one is going to pay $1,000 for a two-line change on a 500,000-line code base after waiting four hours.

LLMs today are the equivalent of a 4-bit ALU without memory being sold as a fully functional personal computer. And just as with ALUs, you will need _thousands_ of LLMs to get anything useful done; and as with ALUs in 1950, we're a long way off from a personal computer being possible.

replies(1): >>45121311 #
fragmede ◴[] No.45121311[source]
That's $500k/yr, and I guarantee there's a non-zero number of humans out there doing exactly that and getting paid that much. Of course we know lines of code is a dumbass metric; the problem with large, mature codebases is that, precisely because they're so large and mature, making changes is very difficult, especially when trying to fix hairy customer bugs in code that has a lot of interactions.