
10 points kayba | 1 comment

We implemented Stanford's Agentic Context Engineering paper which shows agents can improve their performance just by evolving their own context.

How it works: Agents execute tasks, reflect on what worked/failed, and curate a "playbook" of strategies. All from execution feedback - no training data needed.
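The execute → reflect → curate loop described above can be sketched in a few lines. This is a minimal illustration of the mechanics only, not the paper's implementation: `run_task` and `reflect` are hypothetical stand-ins for LLM calls, written here as plain functions so the loop is self-contained.

```python
# Sketch of the execute -> reflect -> curate loop. In a real system,
# run_task and reflect would be LLM calls; here they are stubs.

def run_task(task, playbook):
    """Attempt a task with the current playbook; return (success, trace)."""
    # Hypothetical heuristic: succeed if any playbook strategy shares a
    # word with the task, mimicking "a relevant strategy exists".
    success = any(word in task for s in playbook for word in s.split())
    return success, f"tried {task!r} with {len(playbook)} strategies"

def reflect(task, success, trace):
    """Turn execution feedback into a candidate lesson (no training data)."""
    verdict = "worked" if success else "failed"
    return f"{verdict}: {task}"

def curate(playbook, lesson, max_size=50):
    """Append the lesson, dedupe, and cap the playbook's size."""
    if lesson not in playbook:
        playbook.append(lesson)
    return playbook[-max_size:]

def ace_loop(tasks):
    """Run tasks in sequence; the playbook evolves purely from feedback."""
    playbook = []
    for task in tasks:
        success, trace = run_task(task, playbook)
        playbook = curate(playbook, reflect(task, success, trace))
    return playbook

playbook = ace_loop(["parse logs", "parse config", "summarize logs"])
```

The point of the sketch is the shape of the loop: no gradient updates or training data, only an append-and-curate context that later tasks read from.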

Happy to answer questions about the implementation or the research!

jimmySixDOF ◴[] No.45640792[source]
This kind of DSPy/GEPA self-improvement loop keeps popping up and adding a few points, but the cost (API and wall clock) also means you use this where a repeatable task/prompt/context needs optimizing and you can afford to find better templates.
replies(1): >>45656218 #
1. kayba ◴[] No.45656218[source]
You're right that cost and latency are important considerations. However, the research shows this isn't just about finding better templates; it's about enabling agentic systems to learn and improve from their previous attempts and failures.

We believe in-context learning is one of the missing pieces to make agentic systems feasible in production. The key is that systems can adapt without expensive fine-tuning or retraining. The paper shows *86.9% lower adaptation latency* and significant reductions in rollout costs compared to existing methods, making this approach more practical than previous optimization techniques.

The real value is in systems that progressively get better at their tasks through experience, not just one-time prompt optimization.

If you want to continue this conversation just hit me up on Discord: https://discord.com/invite/mqCqH7sTyK