To me the amazing part is that you can tell the model to do something in plain English, even simple instructions like "make a list" or "write some Python code to do $x", and it just does it.
Then you can ask for the same list sorted and get it back nearly instantly.
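For instance (a minimal sketch, assuming the OpenAI Python client; the model name and prompts are just placeholders):

```python
# Minimal sketch of the back-and-forth described above. Assumes the
# OpenAI Python client; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Make a list of ten common fruits."}]

reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up is plain English; the model resolves "that list" from
# the conversation so far.
history.append({"role": "user", "content": "Now give me that list sorted alphabetically."})
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
```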
These models have short context windows for now, but they already have a huge “working memory” relative to us.
It is very cool. And it's indicative that vastly smarter models are going to be achieved fairly easily, once new insights arrive.
Our biology has had to work ruthlessly within our biological/ecosystem energy envelope, and with the limited value returned on cognitive effort in a pre-internet, pre-scale economy.
So biology has never been able to scale; it could only get marginally more efficient and effective within tight limits.
Suddenly (in historical, biological terms), the limits on energy availability have been removed, and the value of cognitive work has compounded and continues to do so. It's unsurprising that those changes suddenly unlock vast, easily reached, untapped room for cognitive upscaling.
I don't think your second sentence logically follows from the first.
Relative to us, these models:
- Have a much larger working memory.
- Have much more limited logical reasoning skills.
To some extent, these models are able to use their superior working memories to compensate for their limited reasoning abilities. This can make them very useful tools! But there may well be a ceiling to how far that can go.
When you ask a model to "think about the problem step by step" to improve its reasoning, you are basically just giving it more opportunities to draw on its huge memory bank and try to put things together. But humans are able to reason with orders of magnitude less training data. And by the way, we are out of new training data to give the models.
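Concretely, "step by step" prompting is nothing more than extra words appended to the prompt; a minimal sketch (again assuming the OpenAI Python client, with a placeholder model name and illustrative wording):

```python
# Hedged sketch: the only difference between the two requests is the
# "step by step" instruction appended to an otherwise identical prompt.
from openai import OpenAI

client = OpenAI()
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

for suffix in ("", "\n\nThink about the problem step by step before answering."):
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question + suffix}],
    )
    print(reply.choices[0].message.content)
    print("---")
```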
Common belief, but false. You start learning from inside the womb. The data flow increases exponentially when you open your eyes and then again when you start manipulating things with your hands and mouth.
> When you ask a model to "think about the problem step by step" to improve its reasoning, you are basically just giving it more opportunities to draw on its huge memory bank and try to put things together.
We do the same with children. At least I did it with my classmates when they asked me for help. I'd give them a hint and ask them to work it out step by step from there. It helped.
But you don't get data equal to the entire internet as a child!
> We do the same with children. At least I did it with my classmates when they asked me for help. I'd give them a hint and ask them to work it out step by step from there. It helped.
And I do it with my students. I still think there's a difference in kind between when I listen to my students (or other adults) reason through a problem, and when I look at the output of an AI's reasoning, but I admittedly couldn't tell you what that is, so point taken. I still think the AI is relying far more heavily on its knowledge base.
Given vision and the other senses, I'd argue that your average toddler, long before learning to talk, has probably trained on more sensory information than the largest LLMs ever built; see the rough numbers sketched at the end of this comment.
Then there's the whole slew of perceptual processes that pick up two or three key data points and then fill in the rest (e.g., the moonwalking bear experiment [0]).
I guess all I'm saying is that raw input isn't the only piece of the puzzle. Maybe it is at the start before a kiddo _knows_ how to focus and filter info?
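For a rough back-of-envelope version of the claim above (every number here is a loose assumption: one published estimate puts each human retina at roughly 10 Mbit/s, and frontier-scale training corpora at around 15T tokens):

```python
# Back-of-envelope comparison: toddler visual input vs. an LLM text corpus.
# Every number below is a rough assumption, not a measurement.

RETINA_BYTES_PER_SEC = 1.25e6    # ~10 Mbit/s per eye (one published estimate)
WAKING_SECONDS_PER_DAY = 12 * 3600
DAYS = 2 * 365                   # the first two years, pre-talking

toddler_bytes = 2 * RETINA_BYTES_PER_SEC * WAKING_SECONDS_PER_DAY * DAYS

TRAINING_TOKENS = 15e12          # ~15T tokens, roughly frontier scale
BYTES_PER_TOKEN = 4              # ~4 bytes of text per token

llm_bytes = TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"toddler (vision only): ~{toddler_bytes / 1e12:.0f} TB")
print(f"LLM text corpus:       ~{llm_bytes / 1e12:.0f} TB")
# Prints roughly 79 TB vs. 60 TB -- the same order of magnitude,
# before even counting hearing, touch, taste, and proprioception.
```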