Good take from Dwarkesh, and I love hearing his updates on where he's at. In brief: we need some sort of adaptive learning, and he doesn't see signs of it yet.
My guess is that frontier labs think long context is going to solve this: a high-quality 10M-token context window would be enough to freeze an agent at a great internal state and still get a lot done.
Right now the long-context models have highly variable quality across their windows.
But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.