
196 points zmccormick7 | 1 comment
aliljet No.45387614
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus effectively on the latest task is the real problem.
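
A minimal sketch of what that kind of discarding could look like if done on the application side rather than inside the model; the summarize() helper and the message format here are placeholders, not any particular API:

    # Sketch: emulate the narrow window of focus by pruning context per task
    # instead of resending the full history. `summarize` and the message
    # format are placeholders, not any particular vendor API.
    def build_prompt(system, finished_tasks, current_task_messages, summarize):
        """Keep the system prompt, a compressed summary of finished tasks,
        and the full messages only for the task currently in focus."""
        summary = summarize(finished_tasks)  # e.g. another model call
        return (
            [{"role": "system", "content": system},
             {"role": "system", "content": f"Summary of prior work: {summary}"}]
            + current_task_messages  # only the active task stays verbatim
        )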

replies(10): >>45387639 #>>45387672 #>>45387700 #>>45387992 #>>45388228 #>>45388271 #>>45388664 #>>45388965 #>>45389266 #>>45404093 #
ray__ No.45387639
This is a great insight. Any thoughts on how to address this problem?
replies(3): >>45387751 #>>45387782 #>>45387912 #
throwup238 No.45387751
It has to be addressed architecturally with some sort of extension to transformers that can focus the attention on just the relevant context.

People have tried to expand context windows by reducing the O(n^2) attention mechanism to something sparser, and it tends to perform very poorly. It will take a fundamental architectural change.
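
For intuition, a rough sketch of the trade-off (not how any particular model implements it): a causal mask lets every token score every earlier token, so the work grows as O(n^2), while a sliding-window mask caps each row at w positions, so it grows as O(n*w) but throws away long-range links:

    # Rough illustration only, not any production implementation.
    import numpy as np

    def attention_masks(n, w):
        full = np.tril(np.ones((n, n), dtype=bool))        # causal: each token sees all prior tokens
        idx = np.arange(n)
        local = full & (idx[None, :] > idx[:, None] - w)   # sliding window: only the last w tokens
        return full, local

    full, local = attention_masks(n=8, w=3)
    print(full.sum(), local.sum())  # 36 vs 21 scored pairs; the gap grows quadratically with n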

replies(3): >>45387795 #>>45387930 #>>45388296 #
magicalhippo No.45387930
I'm not an expert, but it seemed fairly reasonable to me that a hierarchical model would be needed to approach what humans can do, since that's basically how we process data as well.

That is, humans usually don't store exactly what was written in a sentence five paragraphs ago, but rather the concept or idea it conveyed. If we need the details, we go back and reread.

And when we write or talk, we first form an overall thought about what to say, then break it into pieces and order them somewhat logically, before finally forming the words and sentences for each piece.

From what I can see there's work on this, like this paper[1] and this more recent one[2]. Again, I'm not an expert, so I can't comment on the quality of the references; they're just some I found.

[1]: https://aclanthology.org/2022.findings-naacl.117/

[2]: https://aclanthology.org/2025.naacl-long.410/
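
For what it's worth, a toy sketch of the "store the idea, go back and reread for details" pattern described above; this is not the method of either cited paper, and summarize/needs_detail stand in for model calls:

    # Toy sketch of hierarchical memory: keep a short gist per chunk in the
    # prompt, and expand back to the original text only when needed.
    # `summarize` and `needs_detail` are placeholders for model calls.
    class HierarchicalMemory:
        def __init__(self, summarize):
            self.summarize = summarize
            self.chunks = []   # full original text, kept out of the prompt
            self.gists = []    # one short summary per chunk, kept in the prompt

        def add(self, text):
            self.chunks.append(text)
            self.gists.append(self.summarize(text))

        def context(self, needs_detail):
            # Default to the gist; "reread" the full chunk only when the
            # current task needs the exact wording.
            return [full if needs_detail(gist) else gist
                    for gist, full in zip(self.gists, self.chunks)]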