If that’s the case, then the gulf between current techniques and what’s needed seems knowable. A means of approximating continuous time between neuron firings, time-series recognition in inputs, learning from inputs prior to actual neuron firing (akin to the behavior of dendrites), and so on are all functionality missing from current techniques. Not all of these aspects of biological neuron behavior may be needed to approximate animal intelligence, but I think it’s a good guess that the key gaps lie among them.
AI currently has enormous amounts of money being poured into techniques that fall short of what we want to achieve. As those techniques falter more and more, there will be an enormous financial interest in creating new, more effective ones, and the most obvious place to look for inspiration will be biology. That’s why I think it’s likely to happen in the next few decades: the hardware should be there in terms of raw compute, there’s an obvious place to look for new ideas, and there’s a ton of financial interest in it.
First, by some researchers in the big labs (some of whom, I’m sure, are funded to try random moonshot bets like the one above), at non-product labs working on hard problems (e.g., World Labs), and especially within academia, where researchers have taken inspiration from biology before and today are even better funded and hungry for new discoveries.
Certainly at my university, some researchers are slightly detached from the hype cycle of NeurIPS publications and are trying interdisciplinary approaches to bigger problems (though admittedly fewer than I’d have hoped). I do think the pressure to be a paper machine keeps people from attempting bets that are realistically very likely to fail.