
336 points | mooreds | 1 comment
merizian:
The problem with the argument is that it assumes future AIs will solve problems the way humans do; in this case, that continual learning is the big missing component.

In practice, continual learning has not been an important driver of progress in deep learning so far. Instead, large, diverse datasets and scale have proven to work best. A good argument that continual learning is necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on which skills will be hard for AIs to achieve. Generally, I think anthropomorphisms lack predictive power.
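To make the contrast concrete, here is a toy sketch of the two paradigms being compared; the model methods are hypothetical placeholders, not any real library's API:

```python
def scaling_paradigm(model, giant_multitask_corpus):
    # One big offline training run over large, diverse data;
    # weights are frozen once the model is deployed.
    model.train(giant_multitask_corpus)
    return model

def continual_learning(model, experience_stream):
    # Weights keep changing during deployment, updated from each
    # new interaction (the ingredient the article treats as missing).
    for interaction in experience_stream:
        output = model.act(interaction)
        feedback = interaction.get_feedback(output)
        model.update(interaction, output, feedback)  # online weight update
    return model
```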

I think the real crux may be how much acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share that uncertainty. But it leaves the rest of the piece feeling overconfident.

Davidzheng (reply):
Well, AlphaProof used test-time-training methods to generate similar problems (AlphaZero-style) for each question it encountered.
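For concreteness, a minimal sketch of that kind of test-time-training loop: synthesize variants of the target problem, run an RL/search inner loop on them, then attempt the original. All names here (generate_variant, verify, model.search, model.update) are hypothetical illustrations, not AlphaProof's actual pipeline:

```python
import random

def test_time_training(model, problem, n_variants=100, rl_steps=1000):
    # 1. Generate a batch of formally similar problems from the target one.
    variants = [generate_variant(problem) for _ in range(n_variants)]

    # 2. Specialize the model on those variants, rewarding verified
    #    solutions (the AlphaZero-style inner loop).
    for _ in range(rl_steps):
        variant = random.choice(variants)
        attempt = model.search(variant)
        reward = 1.0 if verify(variant, attempt) else 0.0
        model.update(variant, attempt, reward)

    # 3. Only then attempt the original question with the specialized model.
    return model.search(problem)
```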