
149 points | themgt | 1 comment
1. heavymemory
The idea is interesting, but I still don’t understand how this is supposed to solve continual learning in practice.

You’ve got a frozen transformer and a second module still trained with SGD, so how exactly does that solve forgetting instead of just relocating it?
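For concreteness, here is a minimal PyTorch sketch of the setup the comment describes (module names, sizes, and the MLP side module are illustrative assumptions, not taken from the linked work): the transformer backbone is frozen, and SGD only ever updates the small side module. The commenter's point is that sequential updates to that side module can still interfere with each other, so forgetting is moved into it rather than eliminated.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: frozen pretrained backbone + trainable side module.
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=6,
    )
    for p in backbone.parameters():
        p.requires_grad = False  # frozen: never updated, so it cannot forget

    side_module = nn.Sequential(   # the only part still trained online
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 512),
    )
    optimizer = torch.optim.SGD(side_module.parameters(), lr=1e-3)

    def step(x, target, loss_fn=nn.MSELoss()):
        with torch.no_grad():          # no gradients flow into the backbone
            h = backbone(x)
        out = side_module(h)           # these weights still drift task to task
        loss = loss_fn(out, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In this sketch the backbone's representations stay fixed, but nothing prevents later tasks from overwriting whatever side_module learned on earlier ones, which is exactly the "relocated forgetting" the comment asks about.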