
219 points by crazylogger | 1 comment
xianshou ◴[] No.42728570[source]
One trend I've noticed, framed as a logical deduction:

1. Coding assistants based on o1 and Sonnet are pretty great at coding with <50k context, but degrade rapidly beyond that.

2. Coding agents do massively better when they have a test-driven reward signal.

3. If a problem can be framed in a way that a coding agent can solve, that speeds up development at least 10x from the base case of human + assistant.

4. From (1)-(3), if you can get all the necessary context into 50k tokens and measure progress via tests, you can speed up development by 10x.

5. Therefore all new development should be microservices written from scratch and interacting via cleanly defined APIs.

Sure enough, I see HN projects evolving in that direction.
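Point (2) above can be made concrete: the "test-driven reward signal" is just scoring each candidate the agent produces by the fraction of a fixed test suite it passes. A minimal sketch (toy task and function names are hypothetical, not from the thread):

```python
def run_tests(impl, cases):
    """Score a candidate implementation by the fraction of test cases it passes."""
    passed = sum(1 for args, expected in cases if impl(*args) == expected)
    return passed / len(cases)

# Hypothetical task: the agent must produce a slugify function.
# The cases double as both spec and reward signal.
cases = [
    (("Hello World",), "hello-world"),
    (("A  B",), "a-b"),
]

def candidate(text):
    """One agent-proposed implementation to be scored."""
    return "-".join(text.lower().split())

score = run_tests(candidate, cases)  # 1.0 only when every case passes
```

The agent loop then keeps the highest-scoring candidate and retries on failures, which is exactly the feedback a human reviewer would otherwise have to supply by hand.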

replies(12): >>42729039 #>>42729413 #>>42729713 #>>42729788 #>>42730016 #>>42730842 #>>42731468 #>>42733881 #>>42735489 #>>42736464 #>>42740025 #>>42747244 #
Arcuru ◴[] No.42729039[source]
> 5. Therefore all new development should be microservices written from scratch and interacting via cleanly defined APIs.

Not necessarily. You can get the same benefits described in (1)-(3) by using clearly defined modules in your codebase; they don't need to be separate microservices.
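One way to read "clearly defined modules" is an in-process boundary that is as explicit as a service API: callers see only a small interface, so the interface (not the implementation) is all that has to fit in the agent's context window. A sketch of that idea (the `InvoiceStore` example is hypothetical, not from the thread):

```python
from typing import Protocol


class InvoiceStore(Protocol):
    """The module's public contract -- callers depend only on this."""

    def save(self, invoice_id: str, amount: float) -> None: ...
    def total(self) -> float: ...


class InMemoryInvoiceStore:
    """One implementation behind the boundary; swappable without touching callers."""

    def __init__(self) -> None:
        self._rows: dict[str, float] = {}

    def save(self, invoice_id: str, amount: float) -> None:
        self._rows[invoice_id] = amount

    def total(self) -> float:
        return sum(self._rows.values())


def monthly_report(store: InvoiceStore) -> str:
    # Caller code sees only the Protocol, so an agent editing this
    # function needs the contract in context, not the implementation.
    return f"total: {store.total():.2f}"
```

The contract here is a few hundred tokens, the same context economy a microservice API gives you, without the deployment overhead.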

replies(4): >>42729689 #>>42737545 #>>42738887 #>>42744790 #
ben_w ◴[] No.42744790[source]
Indeed; I think there's a strong possibility that there are certain architectural choices where LLMs can do very well, and others where they would struggle.

The same is true of humans, but it's inconsistent: personally I really dislike VIPER, yet I've never felt the pain others insist comes with putting too much in a ViewController.