Deep neural networks are known to suffer from catastrophic forgetting: upon sequentially learning new tasks, they tend to forget the knowledge accrued from previous ones. This failure hinders continual learning with neural networks in practice.

In this work, we present a simple yet surprisingly effective way to prevent catastrophic forgetting. Our method, called Few-Shot Self Reminder (FSR), regularizes the network against changing its learned behaviour by performing logit matching on a small set of samples from previous tasks kept in an episodic memory.
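The logit-matching idea can be sketched as a regularized objective: the new-task loss plus a penalty for deviating from the logits the network originally produced on the stored memory samples. The sketch below is illustrative only; the function names and the `reg_strength` weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def logit_matching_loss(current_logits, stored_logits):
    """Squared-error penalty between the network's current logits and the
    logits it produced on the memory samples when the old task was learned."""
    return np.mean((current_logits - stored_logits) ** 2)

def total_loss(task_loss, current_memory_logits, stored_memory_logits,
               reg_strength=1.0):
    # Combined objective: loss on the new task plus the self-reminder
    # penalty computed on the few episodic-memory samples.
    # reg_strength is a hypothetical trade-off hyperparameter.
    return task_loss + reg_strength * logit_matching_loss(
        current_memory_logits, stored_memory_logits)

# Example: if the current logits on memory samples still match the stored
# ones, the penalty vanishes and only the new-task loss remains.
stored = np.zeros((4, 10))   # logits recorded for 4 memory samples, 10 classes
print(total_loss(0.5, stored, stored))  # → 0.5
```

Because the penalty is computed only on the few retained samples, each training step on a new task adds little overhead while anchoring the network's outputs on the old tasks.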

Notably, this simple approach requires retraining on only a small amount of data from previous tasks to outperform prior knowledge-retention methods. We demonstrate the superiority of our method over previous ones on popular benchmarks, as well as on a new continual learning problem where tasks are designed to be more dissimilar.

Related Research