In exchange, he offers the idea that we should have an 'energy minimization' architecture; as I understand it, this would assign an 'energy' to an entire response, and training would try to minimize that.
Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think about LeCun's take, and whether any engineering has been done around it. I can't find much after the release of I-JEPA from his group.
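For concreteness, here's a toy sketch of the energy-based idea as I (loosely) understand it, not LeCun's actual formulation: a scalar energy E(context, response) scores how compatible a response is with a context, and inference searches for the response that minimizes it. Everything here (the quadratic energy, the target `2*x`) is made up purely for illustration.

```python
# Toy sketch of an energy-based "response" model (illustrative only).
# E(x, y) scores how compatible a response y is with a context x;
# inference searches for the y that minimizes the energy.

def energy(x: float, y: float) -> float:
    # Hypothetical energy: lowest when y equals 2*x (the "good" response).
    return (y - 2.0 * x) ** 2

def minimize_energy(x: float, y0: float = 0.0,
                    lr: float = 0.1, steps: int = 200) -> float:
    """Gradient descent on y, using a finite-difference gradient."""
    y, eps = y0, 1e-5
    for _ in range(steps):
        grad = (energy(x, y + eps) - energy(x, y - eps)) / (2 * eps)
        y -= lr * grad
    return y

best = minimize_energy(3.0)
print(round(best, 3))  # converges toward 6.0, the energy minimum for x = 3
```

In a real system the energy function would be a trained network and the "response" a high-dimensional object, but the shape of the idea (score whole candidate outputs, then descend) is the same.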
This is also true of the much bigger neural net running in your brain, even if you're the world chess champion. Clearly your argument doesn't hold water.
At playing chess. (But also at doing sums and multiplications, yay!)
> So you should also agree with me that those who say the only path to AGI is LLM maximalism are misguided.
No. First of all, that's a claim you just made up. What we're talking about is people saying that LLMs are not the path to AGI, an entirely different claim.
Second, assuming there's any coherence to your argument, the fact that a small program can outclass an enormous NN is irrelevant to whether the enormous NN is the right way to achieve AGI: we are "general intelligences", and we are defeated by the same chess program. Unless you mean that achieving the intelligence of the greatest geniuses who ever lived is still not enough.