As a leader in the field of continual learning, I somewhat agree, but I'd say catastrophic forgetting is largely resolved. The real problem is that the continual learning community has become insular: it mostly focuses on toy problems that don't matter, refuses to scale up, and will even avoid good solutions for nonsensical reasons. For example, reactivation / replay / rehearsal mitigates catastrophic forgetting almost entirely, yet much of the community dismisses it precisely because it is so effective. I wrote a paper with some of my colleagues on this issue, although with such a long author list it isn't as focused as I would have liked on telling the continual learning community to get out of its rut and write papers that advance AI rather than papers aimed only at other continual learning researchers:
https://arxiv.org/abs/2311.11908

The majority are focusing on the wrong paradigms and asking the wrong questions, which blocks progress toward the kinds of continual learning needed to build models that think in latent space and support meta-cognition, which in turn would give architectures the ability to avoid hallucinations by knowing what they don't know.
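For anyone unfamiliar with what rehearsal/replay actually involves, here is a minimal sketch of the idea in Python (illustrative only, not code from the paper above; names like `ReplayBuffer` and the `train_step` callable are hypothetical): keep a buffer of past examples and mix them into every update, so gradients keep reflecting old tasks while the model learns new ones.

```python
# Minimal sketch of rehearsal / experience replay for continual learning.
# Illustrative names only (ReplayBuffer, train_step); any real model/update
# function is supplied by the user.
import random


class ReplayBuffer:
    """Reservoir-sampled store of past examples, replayed alongside new data."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a roughly uniform sample over everything seen so far.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))


def train_on_task(model, task_stream, train_step, buffer, replay_batch=32):
    """Train on one task's data stream while replaying stored examples.

    `train_step(model, examples)` is a user-supplied update (e.g. one SGD step).
    Mixing replayed examples into each update is what counteracts forgetting.
    """
    for batch in task_stream:
        replay = buffer.sample(replay_batch)
        train_step(model, list(batch) + replay)
        for example in batch:
            buffer.add(example)
```

That's essentially it: a few dozen lines, no task boundaries or regularization tricks required, which is part of why it scales so much better than the methods most toy benchmarks reward.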