
196 points by zmccormick7 | 1 comment
aliljet ◴[] No.45387614[source]
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as their narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context, but leaving open an inability to effectively focus on the latest task is the real problem.

neutronicus ◴[] No.45387672[source]
No, I think context itself is still an issue.

Coding agents choke on our big C++ code-base pretty spectacularly if asked to reference large files.

Someone1234 ◴[] No.45387769[source]
Yeah, I have the same issue. Even in a file of several thousand lines, they will "forget" earlier parts of the file they're still working in, resulting in mistakes. They don't need full awareness of the whole file, but they do need a summary of it so that they can go back and review relevant sections.

I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.
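One way to approximate that "summary instead of full context" idea is to hand the model a compact outline of the file (line numbers plus signatures) rather than the whole text, so it can ask for specific sections later. A minimal sketch in Python; the regex is a crude heuristic for C-family code, not a real parser, and everything here is an assumption rather than anything a coding agent actually does:

```python
import re

# Rough heuristic: top-level definitions in a C-family language.
# Matches "class Foo" or "ret_type name(args) {", skips declarations
# ending in ";". Not a real parser -- illustration only.
SIG = re.compile(r"^(?:[\w:<>~&*]+\s+)+[\w:~]+\s*\(.*\)\s*\{?\s*$|^class\s+\w+")

def outline(source: str) -> list[tuple[int, str]]:
    """Return (line_number, signature) pairs as a compact file summary."""
    out = []
    for n, line in enumerate(source.splitlines(), start=1):
        if SIG.match(line.strip()):
            out.append((n, line.strip().rstrip("{").strip()))
    return out

sample = """\
class Widget {
public:
  void draw();
};

void Widget::draw() {
  // ...
}
"""

for n, sig in outline(sample):
    print(n, sig)
# -> 1 class Widget
# -> 6 void Widget::draw()
```

The outline stays small even for a several-thousand-line file, so it fits in context alongside whichever full sections are actually being edited.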

hadlock ◴[] No.45390936[source]
I've started getting into the habit of finding seams in files over 1500 lines and splitting them there. Occasionally a file that long is unavoidable, but usually it isn't.
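A quick way to surface those seams is to flag files past a line budget and list the blank lines that sit at zero brace depth as candidate split points. A rough sketch: the 1500-line threshold comes from the comment above, but the brace-counting heuristic (which ignores strings and comments) is an assumption of mine:

```python
LINE_BUDGET = 1500  # threshold suggested in the comment above

def seam_candidates(source: str, budget: int = LINE_BUDGET) -> list[int]:
    """If a file exceeds the budget, return line numbers of blank lines
    at zero brace depth -- crude candidates for splitting the file."""
    lines = source.splitlines()
    if len(lines) <= budget:
        return []
    seams, depth = [], 0
    for n, line in enumerate(lines, start=1):
        depth += line.count("{") - line.count("}")
        if depth == 0 and not line.strip():
            seams.append(n)
    return seams

# Synthetic 1800-line file: 600 tiny functions separated by blank lines.
long_src = "void step() {\n}\n\n" * 600
print(seam_candidates(long_src)[:3])  # -> [3, 6, 9]
```

The point isn't precision; it's getting a shortlist of places where a big file could be cut so each piece fits comfortably in an agent's context.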