
334 points mooreds | 1 comment | source
vessenes ◴[] No.44484424[source]
Good take from Dwarkesh. And I love hearing his updates on where he's at. In brief: we need some sort of adaptive learning, and he doesn't see signs of it.

My guess is that frontier labs think long context is going to solve this: if you had a high-quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.
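
For concreteness, here is a minimal sketch of that framing, with all names hypothetical and call_model standing in for any real LLM API: "learning" is just appending episodes to a persistent context instead of updating weights.

    # Sketch of the "freeze state in long context" idea (all names hypothetical):
    # instead of weight updates, the agent's accumulated experience lives in a
    # prompt prefix that is replayed on every call.

    FROZEN_PREFIX = "You are an agent. Prior curated experience:\n"  # the frozen internal state
    CONTEXT_BUDGET = 10_000_000  # hypothetical 10M-token window

    def call_model(prompt: str) -> str:
        # Stub standing in for a real LLM API call.
        return f"(response conditioned on {len(prompt)} chars of context)"

    class InContextAgent:
        def __init__(self) -> None:
            self.experience: list[str] = []  # grows instead of weight updates

        def act(self, task: str) -> str:
            # Rebuild the full context each call: frozen prefix + experience + task.
            context = FROZEN_PREFIX + "\n".join(self.experience) + "\nTask: " + task
            assert len(context) < CONTEXT_BUDGET  # crude char-count proxy for tokens
            answer = call_model(context)
            # "Adaptive learning" here is just appending the episode to context,
            # which only works if quality holds up across the whole window.
            self.experience.append(f"Task: {task}\nOutcome: {answer}")
            return answer

    agent = InContextAgent()
    print(agent.act("summarize yesterday's logs"))
    print(agent.act("now do it again, but shorter"))  # sees episode 1 in context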

replies(4): >>44484512 #>>44485388 #>>44486146 #>>44487909 #
kranke155 ◴[] No.44484512[source]
I believe Demis when he says we are 10 years away from AGI.

He basically made up the field (outside of academia) for a large number of years, and OpenAI was partly founded to counteract his lab and the fear that he would get there first (and be the only one).

So I trust him. He expects that sometime around 2035 there will be AGI that is as good as or better than humans at virtually every task.

replies(3): >>44484976 #>>44487626 #>>44492316 #
moralestapia ◴[] No.44487626[source]
>He basically made up the field (outside of academia) for a large number of years

Not even close.