
196 points | zmccormick7 | 1 comment
aliljet ◴[] No.45387614[source]
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus effectively on the latest task is the real problem.

replies(10): >>45387639 #>>45387672 #>>45387700 #>>45387992 #>>45388228 #>>45388271 #>>45388664 #>>45388965 #>>45389266 #>>45404093 #
ray__ ◴[] No.45387639[source]
This is a great insight. Any thoughts on how to address this problem?
replies(3): >>45387751 #>>45387782 #>>45387912 #
atonse ◴[] No.45387912[source]
Do we know if LLMs understand the concept of time? (Like, I told you this in the past, but what I told you later should supersede it?)

I know there are classes of problems that LLMs can't natively handle (like doing math, even simple addition... or spatial reasoning; I would assume time is in there too). There are ways they can hack around this, like writing code that performs the math.
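A minimal sketch of that workaround, assuming a toy harness where the model emits a `TOOL:` line and the host executes it (the protocol and helper here are made up for illustration, not any real API):

```python
# Toy sketch of the "write code instead of doing math" workaround.
# The model emits an expression; the harness executes it and feeds the
# result back into the context. Real systems sandbox this properly.

def run_math_tool(expression: str) -> str:
    # Evaluate the model-written arithmetic with builtins stripped.
    return str(eval(expression, {"__builtins__": {}}, {}))

llm_output = "TOOL: 1234 * 5678"  # pretend the model requested a tool call
if llm_output.startswith("TOOL: "):
    result = run_math_tool(llm_output[len("TOOL: "):])
    # result ("7006652") is appended to the context for the next turn
```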

But how would you do that for chronological reasoning? That would help with compacting context: knowing what to remember and what to forget.

replies(2): >>45388155 #>>45389376 #
sebastiennight ◴[] No.45389376[source]
Natively, they don't. All the model sees is a big blob of text, some of which can be structured to differentiate turns between "assistant", "user", "developer", and "system".

In theory you could attach metadata (with timestamps) to these turns, or include the timestamp in the text.

It does not change much, other than giving the model the possibility to make some inferences (e.g. that a previous message was on a different date, so its "today" is not the same "today" as in the latest message).
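A rough sketch of what that might look like with role/content turns and the timestamp embedded in the text (the bracketed ISO format is just one plausible choice, not a standard):

```python
from datetime import datetime, timezone

# Hypothetical sketch: prefix each turn with a UTC timestamp so the model
# can at least infer that two "today"s refer to different dates.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "[2024-03-01T09:15:00Z] Remind me to call Sam today."},
    {"role": "user", "content": "[2024-03-04T08:00:00Z] What did I ask you today?"},
]

def stamp(role: str, text: str) -> dict:
    # Prefix a new turn with the current UTC time in the same format.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {"role": role, "content": f"[{now}] {text}"}

history.append(stamp("user", "And what about tomorrow?"))
```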

To chronologically fade the importance of a conversation turn, you would need to either add more metadata (weak), progressively compact old turns (unreliable), or post-train a model to favor more recent parts of the context.
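For the middle option, a sketch of what progressive compaction could look like, with `summarize` as a hypothetical stand-in for another LLM call (which is exactly where the unreliability creeps in):

```python
# Sketch of "progressively compact old turns": keep the newest turns
# verbatim and collapse everything older into one short summary, so the
# recent context dominates the window.

def summarize(text: str) -> str:
    return text[:80] + "..."  # stand-in for a real summarization call

def compact_history(turns: list[str], keep_recent: int = 4) -> list[str]:
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    if not old:
        return recent
    return ["[Summary of earlier conversation] " + summarize(" ".join(old))] + recent
```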