
333 points | mooreds | 1 comment
vessenes No.44484424
Good take from Dwarkesh, and I love hearing his updates on where he’s at. In brief: we need some sort of adaptive learning, and he doesn’t see signs of it.

My guess is that frontier labs think long context will solve this: if you had a high-quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe the question: will we have useful 10M-token context windows in 2 years? That seems very possible.

replies(4): >>44484512 #>>44485388 #>>44486146 #>>44487909 #
1. Davidzheng No.44486146
I'm sure we'll have true test-time learning soon (<5 years), but it will be more expensive. AlphaProof (from DeepMind's IMO attempt) already has this.
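Test-time learning, loosely, means the model keeps updating its own parameters during inference instead of staying frozen after training. A toy sketch of the idea (my illustration only — this is not AlphaProof's or any lab's actual method): a linear model pre-trained on one task adapts online to a drifted test-time task via per-example gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training": fit weights w on a source task where y = 2x (noiseless).
X_train = rng.normal(size=(100, 1))
y_train = 2.0 * X_train[:, 0]
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

# At test time the task has drifted to y = 3x. A frozen model stays wrong;
# a test-time learner takes a gradient step on each example it observes.
X_test = rng.normal(size=(20, 1))
y_test = 3.0 * X_test[:, 0]

lr = 0.05
w_adapted = w.copy()
for x, y in zip(X_test, y_test):
    pred = x @ w_adapted
    grad = 2.0 * (pred - y) * x   # gradient of squared error w.r.t. weights
    w_adapted -= lr * grad

frozen_err = np.mean((X_test @ w - y_test) ** 2)
adapted_err = np.mean((X_test @ w_adapted - y_test) ** 2)
print(frozen_err, adapted_err)  # adaptation shrinks the error on the drifted task
```

The "more expensive" point shows up even here: the adapted model pays for gradient computation on every inference, which is the trade-off the comment alludes to.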