
196 points by zmccormick7 | 1 comment
aliljet No.45387614
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus on the latest task is the real problem.
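
To make that "discard stale focus" idea concrete, here is a minimal sketch in Python. The message format and helper name are hypothetical, not any particular agent framework's API: keep the system prompt and the most recent turns, and collapse everything older into a one-line stub.

    def prune_context(messages, keep_recent=6):
        """Keep the system prompt plus the most recent turns.

        messages: list of {"role": ..., "content": ...} dicts,
        oldest first. (Hypothetical format, for illustration.)
        """
        system = [m for m in messages if m["role"] == "system"]
        rest = [m for m in messages if m["role"] != "system"]
        if len(rest) <= keep_recent:
            return system + rest
        dropped = len(rest) - keep_recent
        # Leave a one-line stub so the model knows history exists
        # without re-reading (and re-attending to) all of it.
        stub = {"role": "user",
                "content": "[%d earlier turns omitted]" % dropped}
        return system + [stub] + rest[-keep_recent:]

In practice the stub could instead hold a running summary of the dropped turns, but the point of the sketch is the same: prune aggressively so the latest task dominates the context.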

neutronicus No.45387672
No, I think context itself is still an issue.

Coding agents choke on our big C++ code-base pretty spectacularly if asked to reference large files.

Someone1234 No.45387769
Yeah, I have the same issue. Even in a file of several thousand lines, they will "forget" earlier parts of the file they're still working in, resulting in mistakes. They don't need full awareness of the whole context, but they do need a summary of it so they can go back and review the relevant sections (a sketch of that idea follows below).

I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.
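
One way to make that summary-plus-review idea concrete is the Python sketch below. The helper names are hypothetical, and splitting on top-level def/class lines is a crude stand-in heuristic, not any specific tool's behavior: build a compact section index that fits in the prompt, then fetch a section verbatim only when the model asks to review it.

    def build_file_index(path):
        """Map each top-level def/class to its line range and return
        a compact index string small enough to keep in the prompt."""
        lines = open(path, encoding="utf-8").read().splitlines()
        sections, name, start = {}, "preamble", 0
        for i, line in enumerate(lines):
            if line.startswith(("def ", "class ")):  # crude boundary
                sections[name] = (start, i)
                name, start = line.strip(), i
        sections[name] = (start, len(lines))
        index = "\n".join("lines %d-%d: %s" % (s, e, n)
                          for n, (s, e) in sections.items())
        return index, sections

    def fetch_section(path, sections, name):
        """Return one section verbatim when the model asks for it."""
        start, end = sections[name]
        lines = open(path, encoding="utf-8").read().splitlines()
        return "\n".join(lines[start:end])

The index stays in context the whole time; the full text of any given section is pulled in only on demand, so the model never has to hold the whole several-thousand-line file at once.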

bongodongobob No.45388231
Interestingly, this issue has pushed me to refactor and modularize code that I should have addressed long ago but didn't have the time or stamina to tackle. Because the LLM can't handle the context, it has helped me refactor (something it seems very good at, in my experience), and that has led me to write cleaner, more modular code that the LLMs can better handle.
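
A rough way to turn that context limit into a refactoring signal is sketched below in Python. The 8,000-token threshold and the 4-characters-per-token estimate are assumptions (a common rule of thumb, not an exact count): scan the tree and flag files too large for the model to handle comfortably.

    import pathlib

    def oversized_files(root, max_tokens=8000,
                        exts=(".cpp", ".h", ".hpp", ".py")):
        """Yield (path, estimated_tokens) for files likely too big
        to fit comfortably in a model's working context."""
        for path in pathlib.Path(root).rglob("*"):
            if path.suffix in exts and path.is_file():
                # ~4 characters per token is a rule of thumb,
                # not an exact tokenizer count.
                est = len(path.read_text(errors="ignore")) // 4
                if est > max_tokens:
                    yield path, est

    for path, est in oversized_files("."):
        print("%s: ~%d tokens -> candidate for splitting" % (path, est))

Files the model chokes on are, by this measure, exactly the files most worth splitting up anyway.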
Interestingly, this issue has caused me to refactor and modularize code that I should have addressed a long time ago, but didn't have the time or stamina to tackle. Because the LLM can't handle the context, it has helped me refactor stuff (seems to be very good at this in my experience) and that has led me to write cleaner and more modular code that the LLMs can better handle.