> The reason why I in particular am so interested in continual learning has pretty much zero to do with humans. Sensors and mechanical systems change their properties over time through wear and tear.
To be clear, this isn’t what Dwarkesh was pointing at, and I think you are using the term “continual learning” differently from how he does. He is primarily interested in it precisely because humans do it.
The article tells a story about how humans learn and calls that continual learning:
> How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student … This just wouldn’t work … Yes, there’s RL fine tuning. But it’s just not a deliberate, adaptive process the way human learning is.
The point I’m making is just that this is bad form: “AIs can’t do X, but humans can. Humans can do X because they have Y, but AIs don’t have Y, so AIs will find X hard.” Suppose I replace X with “common sense reasoning” and Y with “embodied experience”. That argument would have seemed reasonable in 2020, but it would ultimately have been a bad bet.
I don’t disagree with anything else in your response. I also buy into the bitter lesson (and, more generally: easier to measure => easier to optimize). I think we’re just using the same terms differently. And I don’t necessarily think that what you’re calling continual learning won’t work.