Red alarm: Controllable backdoor attack in continual learning.

Journal: Neural Networks: the official journal of the International Neural Network Society

Abstract

Continual learning (CL) studies the problem of learning a single model from a sequence of disjoint tasks. The main challenge is to learn without catastrophic forgetting, a phenomenon in which the model's performance on previous tasks degrades significantly as new tasks are added. However, few works address the security challenges that arise in the CL setting. In this paper, we study backdoor attacks on continual learners. Specifically, we define the threat model and explore the difficulties an attacker faces in a CL setting. Based on these findings, we propose a controllable backdoor attack mechanism for continual learning (CBACL). Experimental results on the Split CIFAR and Tiny ImageNet datasets confirm the advantages of the proposed mechanism.
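The abstract does not describe CBACL's mechanism, so the sketch below shows only the generic backdoor data-poisoning primitive that such attacks build on: a trigger patch is stamped onto a fraction of one task's training images, and those samples are relabeled to an attacker-chosen target class before the task is learned. All function names, the patch shape and location, and the poison rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    images: float array of shape (N, H, W, C), values in [0, 1].
    The patch size, value, and location are illustrative choices,
    not CBACL's actual trigger design.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:, :] = patch_value
    return poisoned

def poison_task(images, labels, target_label, poison_rate=0.1, rng=None):
    """Poison one task of a CL stream: stamp the trigger on a random
    fraction of samples and relabel them to the attacker's target class."""
    rng = rng or np.random.default_rng(0)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_label
    return images, labels

if __name__ == "__main__":
    # Dummy CIFAR-sized data standing in for one task in the sequence.
    rng = np.random.default_rng(42)
    task_images = rng.random((100, 32, 32, 3))
    task_labels = rng.integers(0, 10, size=100)
    poisoned_x, poisoned_y = poison_task(
        task_images, task_labels, target_label=0, poison_rate=0.1, rng=rng
    )
    # Count may be slightly below poison_rate * N if some chosen
    # samples already carried the target label.
    print("labels flipped:", int((poisoned_y != task_labels).sum()))
```

In a CL pipeline, the poisoned task would be trained on like any other task; the attacker's concern, which the paper's threat model examines, is whether the implanted trigger behavior survives as subsequent tasks overwrite the model.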

Authors

  • Rui Gao
    School of Control Science and Engineering, Shandong University, Jinan, China.
  • Weiwei Liu
School of Nursing, Capital Medical University, No. 10 Xitoutiao, You'anmenwai, Fengtai District, Beijing 100069, China.