Deep neural networks are known to suffer from the catastrophic forgetting problem: when trained on a sequence of tasks, they tend to forget the knowledge accrued from earlier tasks upon learning new ones. This failure hinders continual learning of neural networks in practice.
In this work, we present a simple yet surprisingly effective way to prevent catastrophic forgetting. Our method, called Few-Shot Self Reminder (FSR), regularizes the network against changing its learned behaviour by performing logit matching on a small set of samples from previous tasks kept in episodic memory.
Surprisingly, this simple approach requires retraining on only a small amount of past data to outperform previous knowledge-retention methods. We demonstrate the superiority of our method over previous ones on popular benchmarks, as well as on a new continual learning problem in which tasks are designed to be more dissimilar.
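The core idea lends itself to a compact illustration. Below is a minimal sketch (not the authors' released code) of a logit-matching regularizer of this kind in PyTorch: the network is trained on the current task's loss plus a penalty that keeps its outputs on a few stored samples close to the logits recorded when those samples were placed in episodic memory. The function name fsr_loss, the use of mean-squared error for the matching term, and the reg_weight hyperparameter are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def fsr_loss(model, batch_x, batch_y, memory_x, memory_logits, reg_weight=1.0):
    """Task loss on the current batch plus logit matching on memory samples."""
    # Standard supervised loss on the new task's data.
    task_loss = F.cross_entropy(model(batch_x), batch_y)

    # Logit matching: keep the network's current outputs on the stored samples
    # close to the logits it produced when those earlier tasks were learned.
    current_logits = model(memory_x)
    reminder_loss = F.mse_loss(current_logits, memory_logits)

    return task_loss + reg_weight * reminder_loss

# Toy usage with random tensors standing in for real data.
if __name__ == "__main__":
    model = torch.nn.Linear(10, 5)
    x_new, y_new = torch.randn(8, 10), torch.randint(0, 5, (8,))
    x_mem = torch.randn(4, 10)
    with torch.no_grad():
        logits_mem = model(x_mem)  # logits recorded at memory-storage time
    loss = fsr_loss(model, x_new, y_new, x_mem, logits_mem)
    loss.backward()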
Related Research
- Self-supervised Learning in Time-Series Forecasting — A Contrastive Learning Approach. T. Sylvain, L. Meng, and A. Lehrmann.
- Efficient CDF Approximations for Normalizing Flows. C.S. Sastry, A. Lehrmann, M. Brubaker, and A. Radovic. Transactions on Machine Learning Research (TMLR).
- PUMA: Performance Unchanged Model Augmentation for Training Data Removal. G. Wu, M. Hashemi, and C. Srinivasa. Association for the Advancement of Artificial Intelligence (AAAI).