Highlight Every Step: Knowledge Distillation via Collaborative Teaching.

Journal: IEEE Transactions on Cybernetics
Published Date:

Abstract

High storage and computational costs prevent deep neural networks from being deployed on resource-constrained devices. Knowledge distillation (KD) aims to train a compact student network by transferring knowledge from a larger pretrained teacher model. However, most existing KD methods ignore the valuable information produced during the training process and rely only on its final results. In this article, we propose a new collaborative teaching KD (CTKD) strategy that employs two special teachers. Specifically, one teacher trained from scratch (i.e., the scratch teacher) assists the student step by step using its temporary outputs, forcing the student to follow the optimal path toward the final logits with high accuracy. The other, pretrained teacher (i.e., the expert teacher) guides the student to focus on the critical regions that are most useful for the task. Combining the knowledge from these two special teachers can significantly improve the performance of the student network in KD. Experiments on the CIFAR-10, CIFAR-100, SVHN, Tiny ImageNet, and ImageNet datasets verify that the proposed KD method is efficient and achieves state-of-the-art performance.
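The abstract describes a loss that combines guidance from a scratch teacher's temporary logits with attention guidance from a pretrained expert teacher. The sketch below is a minimal, illustrative PyTorch rendering of that idea based only on the abstract, not the authors' released code; the function name ctkd_style_loss, the attention-map definition, and the hyperparameters T, alpha, and beta are assumptions, and the exact formulation is given in the paper itself.

```python
# Minimal sketch (assumption: not the authors' implementation) of a
# two-teacher distillation loss. The scratch teacher supplies logits from
# its own current training step; the expert teacher supplies feature maps
# used to highlight critical regions via attention transfer.
import torch
import torch.nn.functional as F


def attention_map(feat):
    """Spatial attention map: channel-wise mean of squared activations,
    flattened and L2-normalized (a common attention-transfer choice)."""
    att = feat.pow(2).mean(dim=1).flatten(1)  # (B, H*W)
    return F.normalize(att, dim=1)


def ctkd_style_loss(student_logits, scratch_logits, labels,
                    student_feat, expert_feat,
                    T=4.0, alpha=0.9, beta=100.0):
    # Hard-label cross-entropy on the ground truth.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label KL divergence against the scratch teacher's *temporary*
    # logits, i.e., the teacher's outputs at the same training step.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(scratch_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)

    # Attention-transfer term against the pretrained expert teacher,
    # pushing the student to attend to the same critical regions.
    at = (attention_map(student_feat) -
          attention_map(expert_feat.detach())).pow(2).mean()

    return (1.0 - alpha) * ce + alpha * kd + beta * at
```

In this reading, the scratch teacher is trained jointly with the student so its logits trace an optimization path for the student to follow step by step, while the expert teacher's features are frozen and only shape where the student attends.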

Authors

  • Haoran Zhao
    Shanghai Jiao Tong University School of Medicine, Shanghai, China.
  • Xin Sun
    Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA USA.
  • Junyu Dong
    Ocean University of China, Qingdao, Shandong, China.
  • Changrui Chen
  • Zihe Dong