
334 points mooreds | 4 comments
vessenes ◴[] No.44484424[source]
Good take from Dwarkesh. And I love hearing his updates on where he's at. In brief: we need some sort of adaptive learning, and he doesn't see signs of it.

My guess is that frontier labs think long context is going to solve this: if you had a high-quality 10M-token context, that would be enough to freeze an agent at a great internal state and still get a lot done.

Right now, long-context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in two years? That seems very possible.
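To make the "frozen agent" idea concrete, here's a minimal sketch, assuming a hypothetical llm_complete() stand-in for any long-context model API: the weights never change, and all "learning" is just the transcript growing inside one huge window.

    # Minimal sketch of "context as memory": the agent's weights stay frozen,
    # and everything it "learns" accumulates in one long transcript.
    # llm_complete() is a hypothetical stand-in, not any specific lab's API.

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("plug in a real long-context model here")

    class FrozenWeightAgent:
        def __init__(self, system_prompt: str, window_tokens: int = 10_000_000):
            self.transcript = system_prompt     # the agent's entire "internal state"
            self.window_tokens = window_tokens  # e.g. the 10M-token window above;
                                                # a real version would compact the
                                                # transcript as it nears this limit

        def act(self, observation: str) -> str:
            # Append new experience instead of updating weights.
            self.transcript += f"\nObservation: {observation}\nAction:"
            action = llm_complete(self.transcript)
            self.transcript += f" {action}"
            return action

Whether quality holds up as the transcript fills the window is exactly the variable-quality problem above.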

replies(4): >>44484512 #>>44485388 #>>44486146 #>>44487909 #
1. kranke155 ◴[] No.44484512[source]
I believe Demis when he says we are 10 years away from AGI.

For years he basically was the field (outside of academia), and OpenAI was partly founded to counter his lab, out of fear that he would get there first (and alone).

So I trust him. He expects AGI sometime around 2035, which he believes will be as good as or better than humans at virtually every task.

replies(3): >>44484976 #>>44487626 #>>44492316 #
2. eikenberry ◴[] No.44484976[source]
When someone in tech says something is 10 years out, it means several breakthroughs are still needed, and they think those could happen if everything goes just right. Being an expert doesn't make the 10 years more accurate; it makes the 'breakthroughs needed' part more meaningful.
3. moralestapia ◴[] No.44487626[source]
>For years he basically was the field (outside of academia)

Not even close.

4. alkyon ◴[] No.44492316[source]
This guy has a vested interest in talking nonsense about AGI to attract billions in investor money and government subsidies.

Privately, he doesn't think it's likely in the next 25 years.