ninetyninenine ◴[] No.45387536[source]
Context is a bottleneck for humans as well. We don’t have full context when going through code because we can’t hold it all in our heads.

We summarize context and remember summarizations of it.

Maybe we need to do this with the LLM. Chain of thought sort of does this, but it’s not deliberate. The system prompt needs to make this a deliberate task: build summaries and notes covering the entire code base, including its gotchas, and keep that summarized context around permanently, the same way ChatGPT remembers aspects of you.

The summaries can even be sectioned off and given different levels of access. So if the LLM wants to drill down into a subfolder, it looks at the general summary and then at a separate summary for that subfolder. It never needs to pull the full summary into context.

Imagine a hierarchy of system notes and summaries. The LLM decides where to go and what code to read, with targeted access to the notes it left previously while going through the code. As with the code itself, it never reads everything; it just accesses the sections of the summaries that go along with the code it’s looking at. It’s sort of like code comments.
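
A minimal sketch of that drill-down, assuming a hypothetical `.agent-notes/` tree that mirrors the repo layout: the agent loads only the summaries on the path from the repo root down to the folder it’s working in, never the whole tree.

    from pathlib import Path

    NOTES_ROOT = Path(".agent-notes")  # hypothetical: a notes tree mirroring the repo layout

    def load_context(target: Path) -> list[str]:
        """Collect the summaries from the repo root down to `target`.

        For Path("src/parser") this reads .agent-notes/SUMMARY.md,
        .agent-notes/src/SUMMARY.md and .agent-notes/src/parser/SUMMARY.md
        (when they exist) -- a few short notes instead of the whole code base.
        """
        chain = [*reversed(target.parents), target]  # repo root first, target last
        notes = []
        for folder in chain:
            summary = NOTES_ROOT / folder / "SUMMARY.md"
            if summary.exists():
                notes.append(summary.read_text())
        return notes

Drilling into a deeper subfolder just extends the chain by one more note, so the context grows with the depth of the path rather than the size of the repo.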

We also need to program it to update the notes every time it changes the program. And when you change the program without consulting the AI, every commit you make should trigger the AI to update the notes based on your changes.
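
One way to wire that up is an ordinary git post-commit hook, sketched below. The git commands are standard; `update_notes_with_llm` is a placeholder for whatever model call you actually use.

    #!/usr/bin/env python3
    # Sketch of a .git/hooks/post-commit hook: after every commit (including
    # ones made without the AI), gather the diff and ask the model to refresh
    # the notes for the folders that changed. Names here are illustrative.
    import subprocess
    from pathlib import Path

    def changed_dirs() -> set[Path]:
        """Directories touched by the latest commit."""
        out = subprocess.run(
            ["git", "show", "--name-only", "--pretty=format:", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return {Path(f).parent for f in out.splitlines() if f.strip()}

    def head_diff() -> str:
        return subprocess.run(
            ["git", "show", "HEAD"], capture_output=True, text=True, check=True
        ).stdout

    def update_notes_with_llm(folder: Path, diff: str) -> None:
        # Placeholder: prompt the model with the folder's existing SUMMARY.md
        # plus the diff, and write back the revised summary.
        ...

    if __name__ == "__main__":
        diff = head_diff()
        for folder in changed_dirs():
            update_notes_with_llm(folder, diff)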

The LLM needs a system prompt that tells it to act like us and remember things like us. We don’t memorize or examine the full context of anything when we dive into code.

replies(5): >>45387553 #>>45387652 #>>45387653 #>>45387660 #>>45387816 #
wat10000 ◴[] No.45387652[source]
They need a proper memory. Imagine you're a very smart, skilled programmer but your memory resets every hour. You could probably get something done by making extensive notes as you go along, but you'll still be smoked by someone who can actually remember what they were doing in the morning. That's the situation these coding agents are in. The fact that they do as well as they do is remarkable, considering.
replies(2): >>45387725 #>>45388029 #
1. multiplegeorges ◴[] No.45387725[source]
Basically, LLMs are the guy from Memento.