
469 points | ghuntley | 1 comment
faangguyindia No.45001426
Anyone can build a coding agent that works given (a) a fresh codebase and (b) an unlimited token budget.

Now build it for an old codebase; let's see how precisely it edits or removes features without breaking the whole thing.

Let's see how many tokens it consumes per bug fix or feature addition.

replies(4): >>45001529 #>>45001567 #>>45001784 #>>45001830 #
pcwelder No.45001567
Agree. To reduce costs:

1. Precompute frequently used knowledge and surface it early: for example, repository structure, OS information, and system time.

2. Anticipate the next tool call. If a match is not found while editing, instead of simply failing, return the closest matching snippet. If the read-file tool gets a directory, return the directory contents. (See the sketch after this list.)

3. Parallel tool calls. Claude needs either a batch tool or special scaffolding to encourage parallel tool calls; a single tool call per turn is very expensive.
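
A minimal sketch of 2 and 3 in Python; the tool names, return shapes, and the difflib cutoff are illustrative assumptions, not anything Claude or a particular agent framework requires:

    import difflib
    import os

    def read_file(path):
        # Fallback for 2: if the "file" is actually a directory, return its
        # listing instead of an error, saving a follow-up list-directory call.
        if os.path.isdir(path):
            return {"type": "directory", "entries": sorted(os.listdir(path))}
        with open(path, "r", encoding="utf-8") as f:
            return {"type": "file", "content": f.read()}

    def edit_file(path, old_snippet, new_snippet):
        # Fallback for 2: if the exact snippet isn't found, return the closest
        # match so the model can retry in one extra turn instead of re-reading.
        with open(path, "r", encoding="utf-8") as f:
            text = f.read()
        if old_snippet in text:
            with open(path, "w", encoding="utf-8") as f:
                f.write(text.replace(old_snippet, new_snippet, 1))
            return {"ok": True}
        closest = difflib.get_close_matches(
            old_snippet, text.splitlines(), n=1, cutoff=0.3
        )
        return {"ok": False, "closest_match": closest[0] if closest else None}

    TOOLS = {"read_file": read_file, "edit_file": edit_file}

    def batch(calls):
        # 3: a single "batch" tool whose argument is a list of sub-calls, so the
        # model can request several results in one turn instead of one per turn.
        return [TOOLS[c["name"]](**c["args"]) for c in calls]

The point of the fallbacks is that a failed call still moves the conversation forward, so you pay for one corrective turn instead of a read-then-retry loop.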

Are there any other such general ideas?

replies(1): >>45001667 #
faangguyindia No.45001667
That info can just be included in the prompt prefix, which the LLM provider caches, reducing cost by 70-80% on average. System time varies, so it's not a good idea to put it in the prompt; better to expose it as a function/tool to avoid cache invalidation.
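
A rough, provider-agnostic sketch of that split (the function names are illustrative; whether the prefix actually gets cached depends on the provider's prompt-caching rules):

    import platform
    import subprocess
    import time

    def build_static_prefix(repo_root):
        # Rarely-changing context goes first and stays byte-identical across
        # turns, so the provider's prompt cache can reuse it.
        tree = subprocess.run(
            ["git", "-C", repo_root, "ls-files"],
            capture_output=True, text=True,
        ).stdout
        return (
            "You are a coding agent.\n"
            f"OS: {platform.system()} {platform.release()}\n"
            f"Repository files:\n{tree}"
        )

    def get_system_time():
        # Volatile info is exposed as a tool instead of being baked into the
        # prompt, so asking for it never invalidates the cached prefix.
        return time.strftime("%Y-%m-%d %H:%M:%S")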

I am still looking for a good "memory" solution; so far I'm running without one. Haven't looked too deeply into it.

Not sure how the next tool call can be predicted.

I am still using serial tool calls since I don't have any subagents; I just use fast inference models to call tools directly. It works so fast that I doubt I'd benefit from parallelizing anything.