
336 points | mooreds | 1 comment
vessenes No.44484424
Good take from Dwarkesh, and I love hearing his updates on where he’s at. In brief: we need some sort of adaptive learning, and he doesn’t see signs of it.

My guess is that frontier labs think long context is going to solve this: if you had a quality 10M-token context window, that would be enough to freeze an agent at a great internal state and still do a lot.
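
To make that concrete, here's a minimal sketch of "context as the only memory" (hypothetical throughout: FrozenStateAgent and llm_complete are made-up names standing in for any long-context completion API, and the 10M budget is just the number from above):

    from typing import Callable, List

    class FrozenStateAgent:
        """Agent whose weights never change; its only mutable state is the
        transcript it re-feeds to the model as context on every step."""

        def __init__(self, llm_complete: Callable[[str], str],
                     max_tokens: int = 10_000_000) -> None:
            self.llm_complete = llm_complete   # any long-context completion API
            self.max_tokens = max_tokens       # the 10M budget from above
            self.transcript: List[str] = []

        def _context(self) -> str:
            ctx = "\n".join(self.transcript)
            # Crude budget check (~4 chars/token); a real system would tokenize.
            if len(ctx) // 4 > self.max_tokens:
                raise RuntimeError("context budget exhausted")
            return ctx

        def act(self, observation: str) -> str:
            # "Learning" here is just appending to the transcript; the model
            # itself is frozen.
            self.transcript.append(f"OBSERVATION: {observation}")
            action = self.llm_complete(self._context())
            self.transcript.append(f"ACTION: {action}")
            return action

    # Usage with a stub model:
    agent = FrozenStateAgent(lambda ctx: f"ack ({len(ctx)} chars of memory)")
    print(agent.act("user prefers tabs"))
    print(agent.act("user switched to spaces"))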

Right now, long-context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in two years? That seems very possible.

replies(4): >>44484512 #>>44485388 #>>44486146 #>>44487909 #
nicoburns No.44485388
How long is "long"? Real humans have context windows measured in decades of realtime multimodal input.
replies(2): >>44487895 #>>44489678 #
MarcelOlsz No.44487895
Speak for yourself. I can barely remember what I did yesterday.