
196 points zmccormick7 | 2 comments
ninetyninenine No.45387536
Context is a bottleneck for humans as well. We don’t have full context when going through code because we can’t hold it all in our heads.

We summarize context and remember those summaries.

Maybe we need to do this with the LLM. Chain of thought sort of does this, but it’s not deliberate. The system prompt needs to mark this as a deliberate task: building summaries and notes for the entire code base, with gotchas and other key aspects, and keeping that summarized context permanent the same way ChatGPT remembers aspects of you.

The summaries can even be sectioned off and given different levels of access. So if the LLM wants to drill down into a subfolder, it looks at the general summary and then at another summary for the subfolder. It doesn’t need to load the full summary into context.

Imagine a hierarchy of system notes and summaries. The LLM decides where to go and what code to read, with access to the specific notes it left the last time it went through that code. Like the code itself, it never reads all of it; it just accesses the sections of the summaries that go along with the code. It’s sort of like code comments.
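
A minimal sketch of what that lookup could look like, in Python. Everything here (the AI_NOTES.md filename, the directory layout) is a hypothetical convention, not an existing tool; the point is just that drilling into a subfolder pulls in only the summaries along that path:

    import os

    NOTES_FILE = "AI_NOTES.md"  # hypothetical per-directory notes file

    def load_notes(repo_root: str, subpath: str = "") -> list[str]:
        """Collect only the summaries on the path from the repo root
        down to `subpath`, never the whole tree of notes."""
        notes = []
        current = repo_root
        for part in [""] + (subpath.split("/") if subpath else []):
            if part:
                current = os.path.join(current, part)
            candidate = os.path.join(current, NOTES_FILE)
            if os.path.isfile(candidate):
                with open(candidate) as f:
                    notes.append(f.read())
        return notes

    # Drilling into src/parser loads three short notes (root, src/,
    # src/parser/) instead of the whole code base.
    context = load_notes("/path/to/repo", "src/parser")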

We also need to program it to update the notes every time it changes the program. And when you change the program without consulting the AI, it needs to update the notes from your changes on every commit.
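
As a sketch of that last part: a git post-commit hook could find the files a commit touched and hand them off for a notes refresh. Here refresh_notes is a placeholder for whatever model call you’d actually make, not a real API:

    import subprocess

    def changed_files() -> list[str]:
        # Files touched by the most recent commit.
        result = subprocess.run(
            ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.splitlines()

    def refresh_notes(path: str) -> None:
        # Placeholder: send the diff plus the existing note for this
        # file's directory to the model and write back an updated summary.
        print(f"notes stale for: {path}")

    # Invoked from .git/hooks/post-commit, so the notes get updated even
    # for commits made without the AI in the loop.
    if __name__ == "__main__":
        for path in changed_files():
            refresh_notes(path)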

The LLM needs a system prompt that tells it to act like us and remember things like us. We don’t memorize and examine the full context of anything when we dive into code.

replies(5): >>45387553 >>45387652 >>45387653 >>45387660 >>45387816
anthonypasq No.45387816
You’re projecting a deficiency of the human brain onto computers. Computers have advantages that our brains don’t (perfect and large memory); there’s no reason to think we should try to recreate how humans do things.

Why would you bother with all these summaries if you can just read and remember the code perfectly?

replies(1): >>45391142
1. ninetyninenine No.45391142
Because the context window of the LLM is limited, similar to humans. That’s the entire point of the article. If the LLM has limitations similar to humans’, then we give it similar workarounds.

Sure, you can say that LLMs have unlimited context, but then what are you doing in this thread? The title on this page says that context is a bottleneck.

replies(1): >>45414961
2. anthonypasq No.45414961
Memory is different from context.