- Better structured codebases - we need hierarchical codebases with minimal depth, maximal orthogonality and reasonable width. Think microservices.
- Better documentation - most code documentations are not built to handle updates. We need a proper graph structure with few sources of truth that get propagated downstream. Again, some optimal sort of hierarchy is crucial here.
At this point, I really don't think that we necessarily need better agents.
Set up your codebase optimally, spin up 5-10 instances of gpt-5-codex-high for each issue/feature/refactor (pick the best according to some criteria), and your life will go smoothly.
We summarize context and remember summarizations of it.
Maybe we need to do this with the LLM. Chain of thought sort of does this, but it's not deliberate. The system prompt needs to mark this as a deliberate task: building summaries and notes of the entire code base, with its gotchas, so that this summarized context can be part of permanent context, the same way ChatGPT remembers aspects of you.
The summaries can even be sectioned off and have different levels of access. So if the LLM wants to drill down to a subfolder, it looks at the general summary and then at another summary for the subfolder. It doesn't need to access the full summary for context.
Imagine a hierarchy of system notes and summaries. The LLM decides where to go and what code to read, while having specific access to notes it left previously when going through the code. Like the code itself, it never reads it all; it just accesses sections of summaries that go along with the code. It's sort of like code comments.
We also need to program it to change the notes every time it changes the program. And when you change the program without consulting the AI, with every commit you make the AI also needs to update the notes based on your changes.
The LLM needs a system prompt that tells it to act like us and remember things like us. We do not memorize and examine full context of anything when we dive into code.
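A minimal sketch of what that drill-down could look like, assuming a hypothetical convention where every directory keeps an agent-maintained NOTES.md (the layout and names here are my own invention, not anything an existing tool does):

    import os

    # Hypothetical layout: every directory keeps a short NOTES.md summary
    # maintained by the agent. To "drill down" into src/billing/, the agent
    # loads only the summaries along that path, never the full code.
    def summaries_along_path(repo_root: str, target_dir: str) -> list[str]:
        chain = []
        current = repo_root
        parts = os.path.relpath(target_dir, repo_root).split(os.sep)
        for part in [""] + parts:
            current = os.path.join(current, part) if part else current
            notes = os.path.join(current, "NOTES.md")
            if os.path.exists(notes):
                with open(notes) as f:
                    chain.append(f.read())
        return chain  # e.g. [repo summary, src/ summary, src/billing/ summary]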
I think this might be a good leap for agents: the ability to not just review a doc in its current state, but to keep the full evolution of a document in context/understanding.
I'll stop ya right there. I've spent the past few weeks fixing bugs in a big multi-tier app (which is what any production software is these days). My output per bug is always one commit, often one line.
Claude is an occasional help, nothing more. Certainly not generating the commit for me!
I gave up building agents as soon as I figured they would never scale beyond the context constraint. The increase in memory and compute costs to grow the context size of these things isn't linear.
Like, how feasible is it for a mid-size corporation to use a technique like LoRA, mentioned by GP, to "teach" (say, for example) Kimi K2 about a large C++ codebase so that individual engineers don't need to learn the black art of "context engineering" and can just ask it questions.
Having more context while remaining unable to focus effectively on the latest task is the real problem.
This is something humans don't actually do. We aren't aware of every change and we don't have updated documentation of every change, so the LLM will be doing better than us in this regard.
A good senior engineer has a ton in their head after 6+ months in a codebase. You can spend a lot of time trying to equip Claude Code with the equivalent in the form of CLAUDE.MD, references to docs, etc., but it's a lot of work, and it's not clear that the agents even use it well (yet).
We do take notes, we summarize our writings, that's a process. But the brain does not follow that primitive process to "scale".
Coding agents choke on our big C++ code-base pretty spectacularly if asked to reference large files.
Microservices should already be a last resort when you’ve either: a) hit technical scale that necessitates it b) hit organizational complexity that necessitates it
Opting to introduce them sooner will almost certainly increase the complexity of your codebase prematurely (already a hallmark of LLM development).
> Better documentation
If this means reasoning as to why decisions are made then yes. If this means explaining the code then no - code is the best documentation. English is nowhere near as good at describing how to interface with computers.
Given how long gpt codex 5 has been out, there’s no way you’ve followed these practices for a reasonable enough time to consider them definitive (2 years at the least, likely much longer).
This was my journey: I vibe-coded an Electron app and ended up with a terrible monolithic architecture and mostly badly written code. Then I took the app's architecture docs and spent a lot of my time shouting "MAKE THIS ARCHITECTURE MORE ORTHOGONAL, SOLID, KISS, DRY" at gpt-5-pro, and ended up with a 1,500+ line monster doc.
I'm now turning this into a Tauri app and following the new architecture to a T. I would say that it has a pretty clean structure with multiple microservices.
Now, new features are gated based on the architecture doc, so I'm always maintaining a single source of truth that serves as the main context for any new discussions/features. Also, each microservice has its own README file(s) which are updated with each code change.
"Its not the x value that's the problem, its the y value".
You're right, it's not "raw intelligence" that's the bottleneck, because there's none of that in there. The truth is no tweak to any parameter is ever going to make the LLM capable of programming. Just like an exponential curve is always going to outgrow a linear one. You can't tweak the parameters out of that fundamental truth.
Did you read the entirety of what I wrote? Please read.
Say the AI left a 5-line summary of a 300-line piece of code. You as a human update that code. What I am saying specifically is this: when you make the change, the AI sees it and updates the summary. So the AI needs to be interacting with every code change, whether or not you used it to vibe code.
The next time the AI needs to know what this function does, it doesn't need to read the entire 300-line function. It reads the 5-line summary, puts it in the context window, and moves on with chain of thought. Understand?
This is what shrinks the context. Humans don’t have unlimited context either. We have vague fuzzy memories of aspects of the code and these “notes” effectively make coding agents do the same thing.
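As a rough sketch of that flow (assuming a hypothetical llm_summarize() call and a per-file *.summary.md convention, neither of which is any real tool's API), a post-commit hook could look like:

    import subprocess

    # Hypothetical post-commit hook: refresh the agent's short summary for
    # every file touched by the latest commit. llm_summarize() stands in for
    # whatever model call you actually use.
    def refresh_summaries(llm_summarize):
        changed = subprocess.run(
            ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        for path in changed:
            if not path.endswith(".py"):
                continue
            with open(path) as f:
                source = f.read()
            summary = llm_summarize(
                f"Summarize this file in at most 5 lines, noting gotchas:\n{source}"
            )
            with open(path + ".summary.md", "w") as f:
                f.write(summary)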
Claude is able to create entire PRs for me that are clean, well written, and maintainable.
Can it fail spectacularly? Yes, and it does sometimes. Can it be given good instructions and produce results that feel like magic? Also yes.
Agreed, but how else are you going to scale mostly AI written code? Relying mostly on AI agents gives you that organizational complexity.
> Given how long gpt codex 5 has been out, there’s no way you’ve followed these practices for a reasonable enough time to consider them definitive
Yeah, fair. Codex has been out for less than 2 weeks at this point. I was relying on gpt-5 in August and opus before that.
People have tried to expand context windows by reducing the O(n^2) attention mechanism to something more sparse and it tends to perform very poorly. It will take a fundamental architectural change.
I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.
But in my experience a microservice architecture is orders of magnitude more complex to build and understand than a monolith.
If you, with the help of an LLM, struggle to keep a monolith organized, I am positive you will find it even harder to build microservices.
Good luck in your journey, I hope you learn a ton!
You have to be willing to accept "close-ish and good enough" to what you'd write yourself. I would say that most of the time I spend with Claude is to get from its initial try to "close-ish and good enough". If I was working on tiny changes of just a few lines, it would definitely be faster just to write them myself. It's the hundreds of lines of boilerplate, logging, error handling, etc. that makes the trade-off close to worth it.
The context pipeline is a major problem in other fields as well, not just programming. In healthcare, the next billion-dollar startup will likely be the one that cracks the personal health pipeline, enabling people to chat with GPT-6 PRO while seamlessly bringing their entire lifetime of health context into every conversation.
Why would you bother with all these summaries if you can just read and remember the code perfectly?
Because you will always need a specialist to drive these tools. You need someone who understands the landscape of software - what's possible, what's not possible, how to select and evaluate the right approach to solve a problem, how to turn messy human needs into unambiguous requirements, how to verify that the produced software actually works.
Provided software developers can grow their field of experience to cover QA and aspects of product management - and learn to effectively use this new breed of coding agents - they'll be just fine.
You load the thing up with relevant context and pray that it guides the generation path to the part of the model that represents the information you want, and that the path of tokens through the model outputs what you want.
That's why they have a tendency to go ahead and do things you tell them not to do.
Also, IDK about you, but I hate how much praying has become part of the state of the art here. I didn't get into this career to be a fucking tech priest for the machine god. I will never like these models until they are predictable, which means I will never like them.
I started writing a solution, but to be honest I probably need the help of someone who's more experienced.
Although to be honest, I'm sure someone with VC money is already working on this.
The vibe coded main invoice generator script then does the calendar calculations to figure out the pay cycle and examines existing invoices in the invoice directory to determine the next invoice number (the invoice number is in the file name, so it doesn't need to open the files). When it is done with the calculations, it uses the template command to generate the final invoice.
This is a very small example, but I do think that clearly defined modules/microservices/libraries are a good way to only put the relevant work context into the limited context window.
It also happens to be more human-friendly, I think?
If you want your agent to be really good at working with dates in a functional way or to know how to deal with the metric system (as examples), then you need to train on those problems, probably using RFT. The other challenge is that even if you have this problem set in a testable fashion, running it at scale is hard. Some benchmarks have 20k+ test cases and can take well over an hour to run. If you ran each test case sequentially it would take over 2 years to complete.
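Quick back-of-the-envelope on that 2-year figure (assuming the "well over an hour" is roughly per test case, which is how I read it):

    # 20k+ test cases at roughly an hour each, run one at a time:
    test_cases = 20_000
    hours_per_case = 1
    years = test_cases * hours_per_case / 24 / 365
    print(round(years, 2))  # ~2.28 years, hence the need to run cases in parallel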
Right now the only company I'm aware of that lets you do that at scale is runloop (disclaimer, I work there).
I know there are classes of problems that LLMs can't natively handle (like doing math, even simple addition, or spatial reasoning; I would assume time's in there too). There are ways they can hack around this, like writing code that performs the math.
But how would you do that for chronological reasoning? Because that would help with compacting context to know what to remember and what not.
In a way that is still helpful, especially if the act of putting the prompt together brought you to the solution organically.
Beyond that, 'clean', 'well written' and 'maintainable' are all relative terms here. In a low quality, mega legacy codebase, the results are gonna be dogshit without an intense amount of steering.
That is, humans usually don't store exactly what was written in a sentence five paragraphs ago, but rather the concept or idea conveyed. If we need details, we go back and reread or similar.
And when we write or talk, we form first an overall thought about what to say, then we break it into pieces and order the pieces somewhat logically, before finally forming words that make up sentences for each piece.
From what I can see there's work on this, like this[1] and this[2] more recent paper. Again not an expert so can't comment on the quality of the references, just some I found.
However, the limitation can be masked using layering techniques, where the output of one agent is fed as input to another, using consensus for verification or other techniques to the nth degree to minimize errors. But this is a bit like the story of the boy with his finger in the dike. Yes, you can spawn as many boys as you like, but there is a cost associated that keeps growing and won't narrow down.
It has nothing to do with contexts or window of focus or any other human centric metric. This is what the architecture is supposed to do and it does so perfectly.
Yeah this is the really big one - kind of buried the lede a little there :)
Understanding product and business requirements traditionally means communicating (either via docs and specs or directly with humans) with a bunch of people. One of the differences between a junior and senior is being able to read between the lines of a github or jira issue and know that more information needs to be teased out from… somewhere (most likely someone).
I’ve noticed that when working with AI lately I often explicitly tell them “if you need more information or context ask me before writing code”, or variations thereof. Because LLMs, like less experienced engineers, tend to think the only task is to start writing code immediately.
It will get solved though, there’s no magic in it, and LLMs are well equipped by design to communicate!
https://github.com/foolsgoldtoshi-star/foolsgoldtoshi-star-p...
_ _ kae3g
In fact I've found LLMs are reasonable at the simple task of refactoring a large file into smaller components, with documentation on what each portion does, even if they can't get the full context immediately. Doing this then helps the LLM later. I'm also of the opinion we should be making codebases LLM-compatible. So when it comes up, I direct the LLM that way for 10 minutes and then get back to the actual task once the codebase is in a more reasonable state.
I could see in C++ it getting smarter about first checking the .h files or just grepping for function documentation, before actually trying to pull out parts of the file.
- - kae3g
Eg. "Refactor this large file into meaningful smaller components where appropriate and add code documentation on what each small component is intended to achieve." The LLM can usually handle this well (with some oversight of course). I also have instructions to document each change and why in code in the LLMs instructions.md
If the LLM does create a regression i also ask the LLM to add code documentation in the code to avoid future regressions, "Important: do not do X here as it will break Y" which again seems to help since the LLM will see that next time right there in the portion of code where it's important.
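For instance, a guard comment of that shape might look like this (an entirely made-up snippet, just to show the idea):

    def load_report_cache(path):
        # Important: do not switch this to lazy loading. The export job reads
        # the cache before the UI initializes, and lazy loading broke it once
        # already. (Hypothetical example of the regression note described above.)
        with open(path) as f:
            return f.read()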
None of this verbosity in the code itself is harmful to human readers either which is nice. The end result is the codebase becomes much easier for LLMs to work with.
I suspect LLM compatibility may be a metric we measure codebases in the future as we learn more and more how to work with them. Right now LLMs themselves often create very poor LLM compatible code but by adding some more documentation in the code itself they can do much better.
The ICPC is a short (5 hours) timed contest with multiple problems, in which contestants are not allowed to use the internet.
The reason most don't get a perfect score isn't because the tasks themselves are unreasonably difficult, but because they're difficult enough that 5 hours isn't a lot of time to solve so many problems. Additionally, they often require a decent amount of math / comp-sci knowledge, so if you don't have the necessary knowledge you probably won't be able to complete them.
So to get a good score you need lots of math & comp-sci knowledge + you need to be a really quick coder.
Basically the contest is perfect for LLMs, because they have a ton of math and comp-sci knowledge, they can spit out code at superhuman speeds, and the problems themselves are fairly small (they take a human maybe 15 minutes to an hour to complete).
Who knows, maybe OP is right and LLMs are smart enough to be superhuman coders if they just had the right context, but I don't think this example proves their point well at all. These are exactly the types of problems you would expect a supercharged auto-complete to excel at.
As these tools make it possible for a single person to do more, it will become increasingly likely that society will be exposed to greater risks than that single person's (or small company's) assets can cover.
These tools already accelerate development enough that those people who direct the tools can no longer state with credibility that they've personally reviewed the code/behavior with reasonable coverage.
It'll take over-extensions of the capability of these tools, of course, before society really notices, but it remains my belief that until the tools themselves can be held liable for the quality of their output, responsibility will become the ultimate bottleneck for their development.
Humans gatekeep, especially in the tech industry, and that is exactly what will limit us improving AI over time. It will only be when we turn its choices over to it that we move beyond all this bullshit.
That is what transformers attention does in the first place, so you would just be stacking two transformers.
You need to have the right things in the context, irrelevant stuff is not just wasteful, it is increasingly likely to cause errors. It has been shown a few times that as the context window grows, performance drops.
Heretical I know, but I find that thinking like a human goes a long way to working with AI.
Let's take the example of large migrations. You're not going to load the whole codebase in your brain and figure out what changes to make and then vomit them out into a huge PR. You're going to do it bit by bit, looking up relevant files, making changes to logically-related bits of code, and putting out a PR for each changelist.
This is exactly what tools should do as well. At $PAST_JOB my team built a tool based on OpenRewrite (LLMs were just coming up) for large-scale multi-repo migrations, and the centerpiece was our internal codesearch tool. Migrations were expressed as a codesearch query + codemod "recipe"; you can imagine how that worked.
That would be the best way to use AI for large-scale changes as well. Find the right snippets of code (and documentation!), load each one into the context of an agent in multiple independent tasks.
Caveat: as I understand it, this was the premise of SourceGraph's earliest forays into AI-assisted coding, but I recall one of their engineers mentioning that this turned out to be much trickier than expected. (This was a year+ back, so eons ago in LLM progress time.)
Just hypothesizing here, but it may have been that the LSIF format does not provide sufficient context. Another company in this space is Moderne (the creators of OpenRewrite) that have a much more comprehensive view of the codebase, and I hear they're having better success with large LLM-based migrations.
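For what it's worth, the "codesearch query + per-snippet agent task" shape could be sketched like this; code_search() and run_agent_task() are placeholders for whatever internal tools you have, not OpenRewrite's or Sourcegraph's actual APIs:

    # Hypothetical shape of "codesearch query + per-snippet agent task".
    MIGRATION_QUERY = 'lang:java "new SimpleDateFormat("'

    def migrate(code_search, run_agent_task):
        hits = code_search(MIGRATION_QUERY)          # [(repo, file, snippet), ...]
        for repo, path, snippet in hits:
            run_agent_task(
                repo=repo,
                files=[path],                         # only the relevant file(s)
                instructions=(
                    "Replace SimpleDateFormat usage in this snippet with "
                    "java.time.format.DateTimeFormatter; keep behavior identical:\n"
                    + snippet
                ),
            )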
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?
https://www.laws-of-software.com/laws/kernighan/
Sure, you eat the elephant one bite at a time, and recursion is a thing but I wonder where the tipping point here is.
Let me tell you, I'm scared of these tools. With Aider I have the most human-in-the-loop possible: each AI action is easy to undo, readable, and manageable.
However, even here, most of the time when I have the AI write the bulk of the code, I regret it later.
Most codebase challenges I have are infrastructural problems, where I need to reduce complexity to be able to safely add new functionality or reduce error likelihood. I’m talking solid well named abstractions.
This in the best case is not a lot of code. In general I would always rather try to have less code than more. Well named abstraction layers with good domain driven design is my goal.
When I think of switching to an AI first editor I get physical anxiety because it feels like it will destroy so many coders by leading to massive frustration.
I think the best way of using AI is still literally just chatting with it about your codebase to make sure you follow good practices.
Of course, subagents are a good solution here, as another poster already pointed out. But it would be nice to have something more lightweight and automated, maybe just turning on a mode where the LLM is asked to throw things out according to its own judgement, if you know you're going to be doing work with a lot of context pollution.
I imagine over time we'll restructure the way we work to take advantage of these opportunities and get a self-reinforcing productivity boost that makes things much simpler, though agents aren't quite capable enough for that breakthrough yet.
You may appreciate this illustration I made (largely with AI, of course): https://imgur.com/a/0QV5mkS
The context (heheheh) is a long-ass article on coding with AI I wrote eons ago that nobody ever read, if anybody is curious: https://news.ycombinator.com/item?id=40443374
Looking back at it, I was off on a few predictions but a number of them are coming true.
We're now using LLMs as mere tools (which is what they were meant to be from the get-go) to help us with different tasks, not to replace us, since it's understood that you need experienced and knowledgeable people who know what they're doing; the models won't learn everything there is to know to manage, improve, and maintain the tech used in our products and services. That sentiment will be the same for doctors, lawyers, etc., and personally, I won't put my life in the hands of any LLM when it comes to finances, health, or personal well-being, for that matter.
If we get AGI, or the more sci-fi one, ASI, then all things will radically change (I'm thinking humanity reaching ASI will be akin to the episode from Love, Death & Robots: "When the Yogurt Took Over"). In the meantime, the hype cycle continues...
I really want to paraphrase kernighan's law as applied to LLMs. "If you use your whole context window to code a solution to a problem, how are you going to debug it?".
The new session throws away whatever behind-the-scenes context was causing problems, but the prepared prompt gets the new session up and running more quickly especially if picking up in the middle of a piece of work that's already in progress.
Language itself is a highly compressed form of context. Like when you read "hoist with one's own petard," you don't just think about a literal petard but the context behind the phrase.
Mind you, I was exactly like that when I started my career and it took quite a while and being on both sides of the conversation to improve. One difference is that it is not so easy to put oneself in the shoes of an LLM. Maybe I will improve with time. So far assuming the LLM is knowledgeable but not very smart has been the most effective strategy for my LLM interactions.
I don't run into this problem. Maybe the type of code we're working on is just very different. In my experience, if a one-line tweak is the answer and I'm spending a lot of time tweaking a prompt, then I might be holding the tool wrong.
Agree on those terms being relative. Maybe a better way of putting it is that I'm very comfortable putting my name on it, deploying to production, and taking responsibility for any bugs.
I put "vibe coded" is in quotes because the code was heavily reviewed after the process, I helped when the agent got stuck (I know pedants will complain but ), and this was definitely not my first rodeo in this domain and I just wanted to see how far an agent could go.
In the end it had a few modifications and went into prod, but to be really fair it was actually fine!
One thing I vibe coded 100% and barely looked at the code until the end was a macOS menu bar app that shows some company stats. I wanted it in Swift but WITHOUT Xcode. It was super helpful in that regard.
I’ve been trying to use shorter variable names. Maybe I should move unit tests into their own file and ignore them? It’s not idiomatic in Rust though and breaks visibility rules for the modules.
What we really need is for the agent to assemble the required context for the problem space. I suspect this is what coding agents will do if they don’t already.
"that's because a next token predictor can't "forget" context. That's just not how it works."
An LSTM is also a next-token predictor and literally has a forget gate, and there are many other context-compressing models too, which remember only what they think is important and forget the less important parts: for example, state-space models or RWKV, which work well as LLMs too. Even the basic GPT model forgets old context, since it gets truncated if it can't fit, but that's not really the learned, smart forgetting the other models do.
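For reference, this is the standard LSTM forget gate: f_t is computed from the previous hidden state and the current input, and it scales down (i.e. forgets) parts of the previous cell state:

    f_t = \sigma(W_f \, [h_{t-1}, x_t] + b_f)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t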
Look carefully at a context window after solving a large problem, and I think in most cases you'll see even the 90th percentile token --- to say nothing of the median --- isn't valuable.
However large we're allowing frontier model context windows to get, we've got integer multiple more semantic space to allocate if we're even just a little bit smart about managing that resource. And again, this is assuming you don't recurse or divide the problem into multiple context windows.
This is how I designed my LLM chat app (https://github.com/gitsense/chat). I think agents have their place, but I really think if you want to solve complex problems without needlessly burning tokens, you will need a human in the loop to curate the context. I will get to it, but I believe in the same way that we developed different flows for working with Git, we will have different 'Chat Flows' for working with LLMs.
I have an interactive demo at https://chat.gitsense.com which shows how you can narrow the focus of the context for the LLM. Click "Start GitSense Chat Demos" then "Context Engineering & Management" to go through the 30 second demo.
Can you share your prompt?
Well, not so much the project organization stuff - it wants to stuff everything into one header and has to be browbeaten into keeping implementations out of headers.
But language semantics? It's pretty great at those. And when it screws up it's also really good at interpreting compiler error messages.
The replies of "well, just change the situation, so context doesn't matter" is irrelevant, and off-topic. The rationalizations even more so.
On actual bottles, without any metaphors, the bottleneck is narrower because human mouths are narrower.
I mean, did you try it for those purposes?
I have personally submitted an appeal to a court for an issue I was having, for which I would otherwise have had to search almost indefinitely for a lawyer who was even interested in it.
I also debugged health opportunities from different angles using the AI and was quite successful at it.
I also experimented with the well-being topic and it gave me pretty convincing and mind opening suggestions.
So, all I can say is that it worked out pretty well in my case. I believe it's already transformative in ways we wouldn't even have been able to envision a couple of years ago.
In theory you could attach metadata (with timestamps) to these turns, or include the timestamp in the text.
It does not affect much, other than giving the possibility for the model to make some inferences (eg. that previous message was on a different date, so its "today" is not the same "today" as in the latest message).
To chronologically fade away the importance of a conversation turn, you would need to either add more metadata (weak), progressively compact old turns (unreliable) or post-train a model to favor more recent areas of the context.
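The "weak metadata" option is trivial to sketch; whether the model actually uses the timestamps is another matter (the message format here is just a generic role/content dict, not any specific provider's API):

    from datetime import datetime, timezone

    # One simple way to expose time to the model: prepend a timestamp to each
    # turn's text. This is the weak-metadata option described above.
    def stamp(role: str, text: str) -> dict:
        ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return {"role": role, "content": f"[{ts}] {text}"}

    messages = [
        stamp("user", "Remind me what we decided yesterday."),
    ]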
I think with appropriate instructions in the system prompt it could probably work on this code-base more like I do (heavy use of Ctrl-, in Visual Studio to jump around and read only relevant portions of the code-base).
I think the important part is to give it (in my case, these days "it" is gpt-5-codex) a target persona, just like giving it a specific problem instead of asking it to be clever or creative. I've never asked it for a summary of a long conversation without the context of why I want the summary and who the intended audience is, but I have to imagine that helps it frame its output.
The main thing is people have already integrated AI into their workflows so the "right" way for the LLM to work is the way people expect it to. For now I expect to start multiple fresh contexts while solving a single problem until I can setup a context that gets the result I want. Changing this behavior might mess me up.
I ended up spending an hour on it and dumping the context twice. I asked it to evaluate its own performance and it gave itself a D-. It came up with the measurements for a decent recipe once, then promptly forgot it when asked to summarize.
Good luck trying to use them as a search engine (or a lawyer), because they fabricate a third of the references on average (for me), unless the question is difficult, then they fabricate all of them. They also give bad, nearly unrelated references, and ignore obvious ones. I had a case when talking about the Mexican-American war where the hallucinations crowded out good references. I assume it liked the sound of the things it made up more than the things that were available.
edit: I find it baffling that GPT-5 and Qwen3 often have identical hallucinations. The convergence makes me think that there's either a hard limit to how good these things can get, which has been reached, or that they're just directly ripping each other off.
I hadn't considered actually rolling my own for day-to-day use, but now maybe I will. Although it's worth noting that Claude Code Hooks do give you the ability to insert your own code into the LLM loop - though not to the point of Eternal Sunshining your context, it's true.
GPT-5 is brilliant when it oneshots the right direction from the beginning, but pretty unmanageable when it goes off the rails.
Tools like Aider create a code map that basically indexes code into a small context. Which I think is similar to what we humans do when we try to understand a large codebase.
I'm not sure if Aider can then load only portions of a huge file on demand, but it seems like that should work pretty well.
You remember a fuzzy aspect of it and that is the equivalent of a summary.
The LLM is in itself a language machine, so its memory will also be language. We can't get away from that. But that doesn't mean the hierarchical structure of how it stores information needs to be different from humans. You can encode information in any way you like and store it in any hierarchy you like.
So essentially we need a hierarchical structure of "notes" that takes on the hierarchical structure of your memory. You don't even access all your memory as a single context. You access parts of it. Your encoding may not be based on a "language," but an LLM is basically a model based on language, so its memory must be summaries in the specified language.
We don’t know every aspect of human memory but we do know the mind doesn’t access all memory at the same time and we do know that it compresses context. It doesn’t remember everything and it memorizes fuzzy aspects of everything. These two aspects can be replicated with the LLM entirely with text.
> If I need more, there is git, tickets, I can ask the person who wrote the code.
What does this have to do with anything? Go ahead and ask the person. The notes the LLM writes aren't for you; they are for the LLM. You do you.
Sure you can say that LLMs have unlimited context, but then what are you doing in this thread? The title on this page is saying that context is a bottleneck.
So in any situation where something can't actually be done my assumption is that it's just going to hallucinate a solution.
Has been good for busywork that I know how to do but want to save time on. When I'm directing it, it works well. When I'm asking it to direct me, it's gonna lead me off a cliff if I let it.
That may be the foundation for an innovation step in model providers. But you can achieve a poor man’s simulation if you can determine, in retrospect, when a context was at peak for taking turns, and when it got too rigid, or too many tokens were spent, and then simply replay the context up until that point.
I don’t know if evaluating when a context is worth duplicating is a thing; it’s not deterministic, and it depends on enforcing a certain workflow.
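A minimal sketch of that replay idea; deciding when to call mark_good() is exactly the non-deterministic, workflow-dependent part:

    # Poor man's checkpointing: keep the turn list, mark the point where the
    # context was still "good", and replay up to it in a fresh session instead
    # of continuing a polluted one.
    class Conversation:
        def __init__(self):
            self.turns = []          # list of {"role": ..., "content": ...}
            self.checkpoint = 0      # index of last known-good turn

        def mark_good(self):
            self.checkpoint = len(self.turns)

        def fork_from_checkpoint(self):
            fresh = Conversation()
            fresh.turns = list(self.turns[: self.checkpoint])
            return fresh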
LoRA doesn't overwrite weights.
When you read a function to know what it does and then move on to another function, do you have the entire 100-line function perfectly memorized? No. You memorize a summary of the intent of the function when reading code. An LLM can be set up to do the same, rather than keeping all 100 lines of code in context.
Do you think that when you ask the other person for more context he's going to spit out what he wrote line by line? Even he likely won't remember everything he wrote.
You think anyone memorized Linux? Do you know how many lines of code are in the Linux source code? Are you trolling?
The brain meaning-memorizes, and it prioritizes survival-relevant patterns and relationships over rote detail.
How does it do it? I'm not a neurobiologist, but my modest understanding is this:
An LLM's summarization is a lossy compression algorithm that picks the entities and parts it deems "important" against its training data. Not only is it lossy, it is wasteful, because it doesn't curate what to keep or purge based on accumulated experience; it does it against some statistical function executed over the big blob of data it ingested during training. You could throw in contextual cues to improve the summarization, but that's as good as it gets.
Human memory is not a workaround for a flaw. It doesn't hard-stop at 128 KB or 1 MB of info, and it doesn't 'summarize'.
It constructs meaning by integrating experiences into a dynamic, living model of the world that is in constant motion. While we can simulate hierarchical memory for an LLM with text summaries, that would be a simulation of a possible future outcome (at best), not a replication of an evolutionarily elaborated strategy: modeling information captured in a time frame, merging it with previously acquired knowledge, and then solving whatever survival-relevant tasks the environment may throw at it. Isn't that what our brain is doing, constantly?
Plus, for all we know, it's possible our brain is capable of memorizing everything that can be experienced in a lifetime but would rather let the irrelevant parts of our boring lives die off to save energy.
Sure, in all cases it's fuzzy and lossy. The difference is that you have doodles on a napkin on one side, and a Vermeer painting on the other.
No, that's not as good as it gets. You can tell the LLM to purge and accumulate experience into its memory. It can curate it for sure.
"ChatGPT summarize the important parts of this text remove things that are unimportant." Then take that summary feed it into a new context window. Boom. At a high level if you can do that kind of thing with chatGPT then you can program LLMs to do the same thing similar to COT. In this case rather then building off a context window, it rewrites it's own context window into summaries.
The same protection works in reverse: if a subagent goes off the rails and either self-aborts or is aborted, that large context is truncated to the abort response, which is "salted" with the fact that it was stopped. Even if the subagent goes sideways and still returns success (say, separate dev, review, and test subagents), the main agent has another opportunity to compare the response and the product against the main context, or to instruct a subagent to do it in an isolated context.
Not perfect at all, but better than a single context.
One other thing: there is some consensus that "don't", "not", and "never" are not always functional in context. And that is a big problem. Anecdotally and experimentally, many (including myself) have seen the agent diligently performing the exact thing following a "never" once it gets far enough back in the context, even when it's a less common action.
They all have a very specific misunderstanding. Go embeds _do_ support relative paths like:

    //go:embed files/hello.txt

But they DO NOT support any paths with ".." in them:

    //go:embed ../files/hello.txt

is not correct.

All confidently claimed that ".." is correct and will work, and tried to make it work multiple different ways until I pointed each to the documentation.
That said, some of the models out there (Gemini 2.5 Pro, for example) support 1M context; it's just going to be expensive and will still probably confuse the model somewhat when it comes to the output.
I can't remember the example, but there was another frequent hallucination where people were submitting bug reports that it wasn't working, so the project looked at it, realized that actually it kind of would make sense and maybe their tool should work like that, and changed the code to work just like the LLM hallucination expected!
Also, in general, remember that human developers hallucinate ALL THE TIME and then realize it or check the documentation. So my point is that hallucinations don't feel particularly important and don't bother me as much as flawed reasoning does.
This isn't a misconception. Context is a limitation. You can effectively have an AI agent build an entire application with a single prompt if it has enough (and the proper) context. The models with 1m context windows do better. Models with small context windows can't even do the task in many cases. I've tested this many, many, many times. It's tedious, but you can find the right model and the right prompts for success.
And if an LLM guesses (hallucinates) a specific method for your API, it really should have it - statistically speaking =)