334 points mooreds | 10 comments
1. vessenes ◴[] No.44484424[source]
Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief: we need some sort of adaptive learning, and he doesn’t see signs of it.

My guess is that frontier labs think long context is going to solve this: if you had a high-quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in two years? That seems very possible.

replies(4): >>44484512 #>>44485388 #>>44486146 #>>44487909 #
2. kranke155 ◴[] No.44484512[source]
I believe Demis when he says we are 10 years away from AGI.

He basically made up the field (out of academia) for a large number of years, and OpenAI was partially founded to counteract his lab and the fear that he would get there first (and alone).

So I trust him. Sometime around 2035, he expects, there will be AGI: a system he believes will be as good as or better than humans at virtually every task.

replies(3): >>44484976 #>>44487626 #>>44492316 #
3. eikenberry ◴[] No.44484976[source]
When someone says 10 years out in tech, it means there are several needed breakthroughs that they think could possibly happen if things go just right. Being an expert doesn't make the 10 years more accurate; it makes the 'breakthroughs needed' part more meaningful.
4. nicoburns ◴[] No.44485388[source]
How long is "long"? Real humans have context windows measured in decades of real-time multimodal input.
replies(2): >>44487895 #>>44489678 #
5. Davidzheng ◴[] No.44486146[source]
I'm sure we'll have true test-time learning soon (<5 years), but it will be more expensive. AlphaProof (from DeepMind's IMO attempt) already has this.
6. moralestapia ◴[] No.44487626[source]
>He basically made up the field (out of academia) for a large number of years

Not even close.

7. MarcelOlsz ◴[] No.44487895[source]
Speak for yourself. I can barely remember what I did yesterday.
8. imtringued ◴[] No.44487909[source]
There was a company that claimed to have solved it, and we hear nothing but the sound of crickets from them.
9. vessenes ◴[] No.44489678[source]
I think there’s a good clue here to what may work for frontier models: you definitely do not remember everything about a random day 15 years ago, yet you almost certainly remember some things about a day much longer ago than that, if something significant happened. So you have some compression / lossy memory at work that keeps you from being a tabula rasa about anything older than [your brain’s memory capacity].

Some architectures try to model this infinite, but lossy, horizon with functions applied as a pass over the input context. So far, though, none of them seems to beat the good old attention head.
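One way to see the contrast: attention keeps an exact but hard-bounded window, while a decaying summary keeps a lossy but unbounded one. A minimal sketch of the latter (a deliberately simplified stand-in, not any specific architecture's mechanism):

```python
def decayed_counts(tokens, decay=0.99):
    """Lossy, infinite-horizon memory: every event fades a little at
    each step but never fully vanishes, and state size is bounded by
    vocabulary, not stream length -- unlike an attention window, which
    is exact but truncates everything past the context limit."""
    counts = {}
    for t in tokens:
        for k in counts:
            counts[k] *= decay            # the past fades uniformly
        counts[t] = counts.get(t, 0.0) + 1.0  # the new event is vivid
    return counts

# An "early" event, 50 routine ones, then a "recent" event:
stream = ["early"] + ["routine"] * 50 + ["recent"]
memory = decayed_counts(stream)
# "recent" is sharpest; "early" has faded but is still nonzero --
# no hard cutoff at any window size.
```

The analogy to the comment above: significant events could get a larger increment (or slower decay), which is exactly the kind of importance-weighted compression human memory seems to do.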

10. alkyon ◴[] No.44492316[source]
This guy has a vested interest in talking nonsense about AGI to attract investors' money and government subsidies worth billions.

Privately, he doesn't think it's likely in the next 25 years.