
196 points zmccormick7 | 1 comment
ninetyninenine ◴[] No.45387536[source]
Context is a bottleneck for humans as well. We don’t have full context when going through the code because we can’t hold full context.

We summarize context and remember summarizations of it.

Maybe we need to do this with the LLM. Chain of thought sort of does this, but it’s not deliberate. The system prompt needs to make this a deliberate task: building summaries and notes for the entire code base. That summarized context, gotchas included, can then be part of permanent context the same way ChatGPT remembers aspects of you.

The summaries can even be sectioned off and given different levels of access. So if the LLM wants to drill down into a subfolder, it looks at the general summary and then at another summary for the subfolder. It doesn’t need to load the full summary into context.

Imagine a hierarchy of system notes and summaries. The LLM decides where to go and what code to read while having access to the notes it left previously when going through the code. Like the code itself, it never reads everything; it just accesses the sections of the summaries that go along with the code. It’s sort of like code comments.
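The hierarchy described above could be sketched roughly like this. All names here (`SummaryTree`, `drill_down`, the example paths) are hypothetical illustrations of the idea, not any real tool:

```python
# Minimal sketch of a hierarchical summary store: each path level gets a short
# note, and an agent drilling into a file loads only the ancestor summaries.
from pathlib import PurePosixPath

class SummaryTree:
    """Maps code paths to short summaries at each level of the hierarchy."""

    def __init__(self):
        self.summaries = {}  # path (str) -> summary text

    def set_summary(self, path, text):
        self.summaries[path] = text

    def drill_down(self, path):
        """Return the chain of summaries from the top level down to `path`,
        so the agent reads a few short notes instead of the whole tree."""
        parts = PurePosixPath(path).parts
        chain = []
        for i in range(1, len(parts) + 1):
            key = "/".join(parts[:i])
            if key in self.summaries:
                chain.append((key, self.summaries[key]))
        return chain
```

Drilling into `src/auth/login.py` would then surface the note for `src`, the note for `src/auth`, and whatever note exists for the file itself, in that order.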

We also need to program it to change the notes every time it changes the program. And when you change the program without consulting the AI, on every commit the AI needs to update the notes based on your changes.
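The mechanical half of that per-commit step is easy to sketch: list the files a commit touched, then hand those to the agent. This is only an assumption of how a post-commit hook might gather its input; the summarization step itself is deliberately left out, and `git` is assumed to be on PATH:

```python
# Sketch: find which files the last commit changed, so an agent could refresh
# just those files' summaries. Uses `git diff-tree`, a real porcelain command.
import subprocess

def changed_files(rev="HEAD", repo="."):
    """Return the paths modified by commit `rev` in the repository at `repo`."""
    out = subprocess.run(
        ["git", "diff-tree", "--root", "--no-commit-id", "--name-only", "-r", rev],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]
```

A post-commit hook would call something like this and pass the result to whatever updates the notes.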

The LLM needs a system prompt that tells it to act like us and remember things like us. We do not memorize and examine full context of anything when we dive into code.

replies(5): >>45387553 #>>45387652 #>>45387653 #>>45387660 #>>45387816 #
maerF0x0 ◴[] No.45387553[source]
> remember summarizations

yes, and if you're an engineering manager you retain _out of date_ summarizations, often materially out of date.

replies(1): >>45387626 #
ninetyninenine ◴[] No.45387626[source]
I addressed this. The AI needs to examine every code change going in, whether that change comes from AI or not, and edit the summaries accordingly.

This is something humans don’t actually do. We aren’t aware of every change, and we don’t have updated documentation for every change, so the LLM will be doing better than us in this regard.

replies(1): >>45387674 #
lomase ◴[] No.45387674[source]
I mean... have you ever heard of this small tool called Git that people use to track code changes?
replies(1): >>45387707 #
ninetyninenine ◴[] No.45387707[source]
I’m not talking about git diffs. I’m talking about the summaries of context. On every commit, the AI needs to update the summaries and notes it took about the code.

Did you read the entirety of what I wrote? Please read.

Say the AI left a 5-line summary of a 300-line piece of code. You, as a human, update that code. What I am saying specifically is this: when you make the change, the AI sees it and updates the summary. So the AI needs to be interacting with every code change, whether or not you used it to vibe code.

The next time the AI needs to know what this function does, it doesn’t need to read the entire 300-line function. It reads the 5-line summary, puts it in the context window and moves on with chain of thought. Understand?

This is what shrinks the context. Humans don’t have unlimited context either. We have vague fuzzy memories of aspects of the code and these “notes” effectively make coding agents do the same thing.
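That lookup step is small enough to sketch. Everything here is a hypothetical illustration of the idea: `summaries` and `sources` are assumed stores keyed by function name, and the budget is arbitrary:

```python
# Sketch: prefer a short stored note over the full function body when building
# context, so a 5-line summary stands in for 300 lines of code.
def context_for(name, summaries, sources, budget_lines=10):
    """Return the cheapest usable representation of function `name`."""
    note = summaries.get(name)
    if note is not None and len(note.splitlines()) <= budget_lines:
        return note        # short summary: cheap to put in the context window
    return sources[name]   # no usable note yet: fall back to the full body
```

The effect is exactly the "shrinking" described: the agent only pays full price for code that has no note yet.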

replies(1): >>45388554 #
lomase ◴[] No.45388554{3}[source]
The context is the code I work on because I can read and understand it.

If I need more, there is git, tickets, I can ask the person who wrote the code.

I did read your comment; don’t make snarky comments.

replies(1): >>45391128 #
ninetyninenine ◴[] No.45391128{4}[source]
So you hold all that code context in your head at the same time?

> If I need more, there is git, tickets, I can ask the person who wrote the code.

What does this have to do with anything? Go ahead and ask the person. The notes the LLM writes aren’t for you they are for the LLM. You do you.

replies(2): >>45391149 #>>45391350 #
lomase ◴[] No.45391350{5}[source]
> So you hold all that code context in your head at the same time?

Yes. That is how every single piece of code has been written since the creation of computers.

Why do you seem so surprised?

replies(2): >>45392429 #>>45401453 #
ninetyninenine ◴[] No.45401453{6}[source]
No reply? Probably because you've realized how much of an idiot you are?