196 points | zmccormick7 | 1 comment

aliljet | No.45387614
There's a misunderstanding here, broadly. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus effectively on the latest task is the real problem.

neutronicus | No.45387672
No, I think context itself is still an issue.

Coding agents choke on our big C++ codebase pretty spectacularly if asked to reference large files.

Someone1234 | No.45387769
Yeah, I have the same issue. Even in a file of several thousand lines, they will "forget" earlier parts of the file they're still working in, resulting in mistakes. They don't need full awareness of the whole file, but they do need a summary of it so they can go back and review the relevant sections.
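
For what it's worth, the kind of summary I mean is cheap to produce mechanically. Here's a minimal sketch, assuming Python sources and using only the standard ast module (for C++ you'd want something like tree-sitter instead); the function name is mine, purely illustrative:

    import ast

    def file_outline(path: str) -> str:
        """Summarize a source file as its top-level definitions with line ranges."""
        with open(path) as f:
            tree = ast.parse(f.read())
        out = []
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                out.append(f"{node.name}: lines {node.lineno}-{node.end_lineno}")
        return "\n".join(out)

An outline like that is a few hundred tokens even for a file of several thousand lines, which is exactly the kind of thing the model could keep around while it works.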

I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.

AnotherGoodName | No.45388022
I do take that as a sign to refactor when it happens, though. Even setting aside LLM compatibility, refactoring large files cuts down on merge conflicts.

In fact, I've found LLMs are reasonable at the simpler task of refactoring a large file into smaller components, with documentation on what each portion does, even when they can't take in the full context at once. Doing this then helps the LLM later. I'm also of the opinion that we should be making codebases LLM-compatible. So when it happens, I direct the LLM that way for ten minutes and then get back to the actual task once the codebase is in a more reasonable state.

Someone1234 | No.45388498
I'm trying to use LLMs to save me time and resources; "refactor your entire codebase so the tool can work" is the opposite of that, regardless of how you rationalize it.
thunky | No.45388933
It may be a good idea to refactor anyway: if not for the LLMs' sake, then for the humans'.
Someone1234 | No.45389095
Right, but the discussion we're having here is about context size. I, and others, are saying that current context sizes limit when the tool can actually be useful.

Replies along the lines of "well, just change the situation so context doesn't matter" are irrelevant and off-topic. The rationalizations even more so.

thunky | No.45390931
A huge context is a problem for humans too, which is why I think it's fair to suggest maybe the tool isn't the (only) problem.

Tools like Aider build a repo map that condenses the codebase into a small index of files and the symbols they define, which fits comfortably in context. I think that's similar to what we humans do when we try to understand a large codebase.
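
As a rough illustration of the idea (a sketch, not Aider's actual implementation, and all the names are mine), building that kind of map for Python sources can be as simple as:

    import ast
    import os

    def repo_map(root: str) -> str:
        """Index every Python file under root as 'path: symbol, symbol, ...'."""
        entries = []
        for dirpath, _, filenames in os.walk(root):
            for name in sorted(filenames):
                if not name.endswith(".py"):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path) as f:
                        tree = ast.parse(f.read())
                except (SyntaxError, UnicodeDecodeError):
                    continue  # skip files that don't parse
                symbols = [n.name for n in ast.walk(tree)
                           if isinstance(n, (ast.FunctionDef, ast.ClassDef))]
                if symbols:
                    entries.append(f"{path}: {', '.join(symbols)}")
        return "\n".join(entries)

The whole map is usually tiny compared to the code itself, so it fits in context and tells the model where to look before it asks for anything in full.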

I'm not sure if Aider can then load only portions of a huge file on demand, but it seems like that should work pretty well.
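
The on-demand part is mechanically trivial, at least. Continuing the sketch above (again my own illustrative code, not Aider's): given a line range from the map or an outline, hand the model just that slice:

    def load_slice(path: str, start: int, end: int, margin: int = 5) -> str:
        """Return only lines start..end (1-indexed), plus a little surrounding margin."""
        with open(path) as f:
            lines = f.readlines()
        lo = max(start - 1 - margin, 0)
        hi = min(end + margin, len(lines))
        return "".join(lines[lo:hi])

Fed a map first, the agent can ask for load_slice(path, 120, 180) instead of the whole multi-thousand-line file.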

KronisLV | No.45395565
As someone who's worked with both styles, more fragmented, modular codebases with smaller classes and shorter files versus ones where single files span thousands of lines (sometimes even tens of thousands), I very much prefer the former and hate the latter.

That said, some of the models out there (Gemini 2.5 Pro, for example) support a 1M-token context; it's just going to be expensive, and it will still probably confuse the model somewhat when it comes to the output.