> This strongly suggests that the model is able to memorize solutions from its training set
I'm not sure why this is a problem: surely in systems like ChatGPT we want the specifics that were in the training set, not a generalization. It's not learning/reasoning from the training data; it's 'cleverly regurgitating' things it's seen.
replies(1):