xianshou:
One trend I've noticed, framed as a logical deduction:

1. Coding assistants based on o1 and Sonnet are pretty great at coding with <50k tokens of context, but degrade rapidly beyond that.

2. Coding agents do massively better when they have a test-driven reward signal (a sketch of what I mean follows the list).

3. If a problem can be framed so that a coding agent can solve it, development speeds up at least 10x over the base case of human + assistant.

4. From (1)-(3), if you can get all the necessary context into 50k tokens and measure progress via tests, you can speed up development by at least 10x.

5. Therefore all new development should be microservices written from scratch and interacting via cleanly defined APIs (see the toy example at the end of this comment).
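
A sketch of what I mean by (2), in Python: the reward is just the test suite's verdict, and the agent iterates against it. Here agent.propose_patch is a hypothetical stand-in for whatever edit API the harness exposes, not a real library call.

    import subprocess

    def tests_pass(repo_dir: str) -> bool:
        # Reward signal: run the project's test suite; exit code 0 means all green.
        return subprocess.run(["pytest", "-q"], cwd=repo_dir).returncode == 0

    def agent_loop(agent, repo_dir: str, max_iters: int = 10) -> bool:
        # Let the agent iterate against the reward until the tests pass or we give up.
        for _ in range(max_iters):
            if tests_pass(repo_dir):
                return True
            agent.propose_patch(repo_dir)  # hypothetical: apply the model's next edit
        return tests_pass(repo_dir)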

Sure enough, I see HN projects evolving in that direction.
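
To make (5) concrete, here is the shape I mean, as a toy: a service whose entire contract plus tests fits comfortably under 50k tokens. All names here (UserStore, create_user) are illustrative, not from any real project.

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        email: str

    class UserStore:
        # The entire service contract: a few methods, each trivially testable.
        def __init__(self) -> None:
            self._users: dict[int, User] = {}
            self._next_id = 1

        def create_user(self, email: str) -> User:
            user = User(self._next_id, email)
            self._users[user.id] = user
            self._next_id += 1
            return user

        def get_user(self, user_id: int) -> User | None:
            return self._users.get(user_id)

    def test_roundtrip() -> None:
        store = UserStore()
        u = store.create_user("a@example.com")
        assert store.get_user(u.id) == u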

phaedrus:
50K context is an interesting number, because I think there's a lot to explore in software within an order of magnitude of that size. With apologies to Richard Feynman, I call it "There's plenty of room in the middle." My idea is that the rapid expansion of computing power during the reign of Moore's law left the design space of medium-sized programs under-explored: programs in the range of hundreds of kilobytes to a few megabytes.
50K context is an interesting number because I think there's a lot to explore with software within an order of magnitude that size. With apologies to Richard Feynman, I call it, "There's plenty of room in the middle." My idea there is the rapid expansion of computing power during the reign of Moore's law left the design space of "medium sized" programs under-explored. These would be programs in the range of 100's of kilobytes to low megabytes.