
196 points by zmccormick7 | 2 comments
aliljet:
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus effectively on the latest task is the real problem.

neutronicus:
No, I think context itself is still an issue.

Coding agents choke on our big C++ code-base pretty spectacularly if asked to reference large files.

atonse:
I've seen situations where a file was too big, and the agent instead tried to grep for what might be useful in that file.

For C++, I could see it getting smarter about this: first checking the .h files, or grepping for function documentation, before actually trying to pull parts out of the implementation file. A rough sketch of that strategy follows.
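
A purely illustrative sketch of that "headers first" strategy in Python: the function name, the context-window size, and the idea of wrapping it as an agent tool are my assumptions, not anything a particular agent actually does.

    import re
    from pathlib import Path

    def find_declaration(symbol: str, root: str, context: int = 3) -> list[str]:
        """Return header snippets mentioning `symbol`, with a little context."""
        hits = []
        for header in Path(root).rglob("*.h"):
            lines = header.read_text(errors="ignore").splitlines()
            for i, line in enumerate(lines):
                if re.search(rf"\b{re.escape(symbol)}\b", line):
                    lo, hi = max(0, i - context), i + context + 1
                    hits.append(f"{header}:{i + 1}\n" + "\n".join(lines[lo:hi]))
        return hits

    # A tool wrapper would hand these snippets to the model instead of the
    # whole implementation file, falling back to the .cpp only when needed.
    for snippet in find_declaration("ParseConfig", "src"):  # hypothetical symbol
        print(snippet, "\n---")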

neutronicus:
Yeah, my first instinct has been to expose an LSP server as a tool, so the LLM can avoid reading an entire 40,000-line file just to get the implementation of one function.

I think that, with appropriate instructions in the system prompt, it could probably work on this code-base more like I do: heavy use of Ctrl+, in Visual Studio to jump around and read only the relevant portions. A sketch of the tool-side plumbing is below.
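
For what it's worth, here's a minimal sketch of that plumbing, assuming clangd on PATH and speaking raw LSP JSON-RPC over stdio; the paths are placeholders, and error handling, capability negotiation, and full response parsing are omitted.

    import json
    import subprocess

    proc = subprocess.Popen(["clangd"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)

    def send(msg: dict) -> None:
        # LSP frames each JSON-RPC message with a Content-Length header.
        body = json.dumps(msg).encode()
        proc.stdin.write(b"Content-Length: %d\r\n\r\n" % len(body) + body)
        proc.stdin.flush()

    def recv() -> dict:
        length = 0
        while (line := proc.stdout.readline().strip()):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":")[1])
        return json.loads(proc.stdout.read(length))

    def recv_response(req_id: int) -> dict:
        # Skip server notifications (e.g. diagnostics) until our reply arrives.
        while True:
            msg = recv()
            if msg.get("id") == req_id:
                return msg

    root = "file:///path/to/repo"        # placeholder project root
    path = "/path/to/repo/src/big.cpp"   # placeholder 40,000-line file
    uri = "file://" + path

    send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
          "params": {"processId": None, "rootUri": root, "capabilities": {}}})
    recv_response(1)
    send({"jsonrpc": "2.0", "method": "initialized", "params": {}})
    send({"jsonrpc": "2.0", "method": "textDocument/didOpen",
          "params": {"textDocument": {"uri": uri, "languageId": "cpp",
                                      "version": 1, "text": open(path).read()}}})
    send({"jsonrpc": "2.0", "id": 2, "method": "textDocument/definition",
          "params": {"textDocument": {"uri": uri},
                     "position": {"line": 120, "character": 8}}})

    # The reply contains the definition's file and line range; a tool wrapper
    # would read just that span and return the snippet to the model.
    print(recv_response(2))

The point is just that the model asks a structural question and gets back a small span, not the whole file.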