Efficient Perturbation Inference and Expandable Network for continual learning.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Publication Date:
Nov 7, 2022
Abstract
Although humans can learn new tasks without forgetting previous ones, most neural networks fail to do so because learning a new task can overwrite the knowledge acquired from previous data. In this work, we alleviate this issue by proposing a novel Efficient Perturbation Inference and Expandable Network (EPIE-Net), which dynamically expands lightweight task-specific decoders for new classes and uses a mixed-label uncertainty strategy to improve robustness. Moreover, at inference we average the predicted class probabilities over perturbed copies of each sample, which generally improves the model's performance. Experimental results show that our method consistently outperforms other methods on class-incremental learning benchmarks while using fewer parameters. For example, on the CIFAR-100 10-step setup, our method achieves an average accuracy of 76.33% and a last-step accuracy of 65.93% with only 3.46M parameters on average.
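Two of the mechanisms named above, dynamic decoder expansion and perturbation-averaged inference, lend themselves to a short code illustration. The following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the tiny backbone, the choice of Gaussian input noise as the perturbation, and all identifiers (ExpandableNet, perturbation_inference, noise_std) are hypothetical and introduced here only to make the two ideas concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpandableNet(nn.Module):
    """Shared feature extractor plus one lightweight decoder (head) per task.

    A minimal sketch of the expandable-decoder idea from the abstract;
    the actual EPIE-Net architecture may differ.
    """
    def __init__(self, feature_dim=64):
        super().__init__()
        # Shared backbone: a tiny CNN stand-in for illustration only.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        self.feature_dim = feature_dim
        self.decoders = nn.ModuleList()  # grows with each new task

    def add_task(self, num_new_classes):
        # Expand the network with a lightweight task-specific decoder.
        self.decoders.append(nn.Linear(self.feature_dim, num_new_classes))

    def forward(self, x):
        feats = self.backbone(x)
        # Concatenate logits from all task decoders seen so far.
        return torch.cat([dec(feats) for dec in self.decoders], dim=1)

@torch.no_grad()
def perturbation_inference(model, x, num_perturbations=5, noise_std=0.05):
    """Average class probabilities over randomly perturbed copies of x.

    Gaussian input noise is an assumed perturbation scheme; the abstract
    does not specify the exact perturbation used in the paper.
    """
    model.eval()
    probs = torch.stack([
        F.softmax(model(x + noise_std * torch.randn_like(x)), dim=1)
        for _ in range(num_perturbations)
    ])
    return probs.mean(dim=0)  # averaged predictive distribution

# Usage: two incremental tasks of 10 classes each, then averaged inference.
model = ExpandableNet()
model.add_task(10)
model.add_task(10)
x = torch.randn(4, 3, 32, 32)           # a batch of CIFAR-sized images
avg_probs = perturbation_inference(model, x)
pred = avg_probs.argmax(dim=1)          # predicted class over all 20 classes
```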