Some kind of verbal-only AGI that can solve almost all human-posed mathematical problems whose solutions fit in half a page. I think that's achievable somewhere in the near term, 2-7 years.
Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.
But if they could do maths more fully, then pretty much all carefully defined tasks would be in reach, provided they weren't too long.
With regard to what Touche brings up in the other response to your comment, I think that it might be possible to get them to read up on things, though: go through some material, invent problems, and try to solve them. I think this is something that could be done today with today's models, with no real special innovation, but which just hasn't been made into a service yet. But this of course doesn't address that criticism, since it assumes the availability of data.
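That "read up and self-test" loop could be sketched in a few lines. To be clear, this is a hypothetical sketch, not an existing service: `ask_llm` is a placeholder stand-in for whatever chat-completion call you'd actually wire up, and here it just returns canned text so the loop runs.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call;
    # swap in any model client. Returns canned text for illustration.
    return f"[model response to: {prompt[:40]}...]"


def self_study(source_text: str, n_problems: int = 3) -> list[dict]:
    """Have the model go through some material, invent problems about it,
    attempt them, and grade its own attempts."""
    results = []
    for i in range(n_problems):
        problem = ask_llm(f"Read this and pose exercise #{i + 1}:\n{source_text}")
        attempt = ask_llm(f"Solve this exercise:\n{problem}")
        grade = ask_llm(f"Grade this attempt.\nProblem: {problem}\nAttempt: {attempt}")
        results.append({"problem": problem, "attempt": attempt, "grade": grade})
    return results
```

The loop itself is trivial; all the hard parts (grounding the invented problems in the source, grading honestly) live inside the model calls, which is the point of the comment above.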
So I believe none of these arguments from fundamental distinctions can work -- the question is how new the AI contributions are. For now there are of course still no theoretical breakthroughs in mathematics from AI (though biology could be close!). I also think the AIs do have understanding -- but to be fair, the only way we can test that is by posing tricky questions, and I think the results support my side. Some interpretations of "understanding" aren't testable at all, so I don't want to argue about those.
The reason I believe it can be achieved in this time frame is that I believe that you can do much more with non-output tokens than is currently being done.
If that’s the case, then the gulf between current techniques and what’s needed seems knowable. A means of approximating continuous time between neuron firing, time-series recognition in inputs, learning behavior on inputs prior to actual neuron firing (akin to behavior of dendrites), etc. are all missing functionalities in current techniques. Some or all of these missing parts of biological neuron behavior might be needed to approximate animal intelligence, but I think it’s a good guess that these are the parts that are missing.
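As one illustration of the first gap, here is a minimal leaky integrate-and-fire neuron -- a textbook spiking model, not anything drawn from current LLM stacks. Its membrane voltage evolves in continuous time between firings (approximated below with Euler steps), unlike the instantaneous activations of standard artificial neurons; the parameter values are arbitrary round numbers chosen only so the sketch runs.

```python
def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
               v_thresh=-0.050, v_reset=-0.065, r=1e7):
    """Leaky integrate-and-fire neuron (textbook model).

    input_current: sequence of input currents (amps), one per time step.
    Returns the list of spike times (seconds). The membrane voltage v
    relaxes toward v_rest and is driven by r * I, so timing of inputs
    matters, not just their magnitude.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of: dv/dt = (-(v - v_rest) + r * i_in) / tau
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset  # fire and reset
    return spike_times
```

With a constant 2 nA input the voltage climbs toward threshold over tens of milliseconds and fires periodically; with no input it never fires. That between-spike temporal dynamic is exactly the kind of behavior the comment above says is missing from current techniques.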
AI currently has enormous amounts of money being dumped into it on techniques that are lacking for what we want to achieve with it. As they falter more and more, there will be an enormous financial interest in creating new, more effective techniques, and the most obvious place to look for inspiration will be biology. That’s why I think it’s likely to happen in the next few decades; the hardware should be there in terms of raw compute, there’s an obvious place to look for new ideas, and there’s a ton of financial interest in it.
Firstly, by some researchers in the big labs (some of whom, I'm sure, are funded to try random moonshot bets like the above), at non-product labs working on hard problems (e.g. World Labs), and especially within academia, where researchers have taken inspiration from biology before and today are better funded and hungry for new discoveries.
Certainly at my university, some researchers are slightly detached from the hype cycle of NeurIPS publications and are trying interdisciplinary approaches to bigger problems (though admittedly fewer than I'd hoped for). I do think the pressure to be a paper machine keeps people from trying bets that are realistically very likely to fail.
I guess if you believe this, then the AI is already smarter than you.