Progressive fine-to-coarse reconstruction for accurate low-bit post-training quantization in vision transformers.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Due to its efficiency, Post-Training Quantization (PTQ) has been widely adopted for compressing Vision Transformers (ViTs). However, when quantized to low-bit representations, ViTs often suffer a significant performance drop compared to their full-precision counterparts. To address this issue, reconstruction methods have been incorporated into the PTQ framework to improve performance in low-bit quantization settings. Nevertheless, existing methods apply a single, fixed reconstruction granularity and seldom explore the progressive relationships between different reconstruction granularities, which leads to sub-optimal quantization results for ViTs. To this end, we propose a Progressive Fine-to-Coarse Reconstruction (PFCR) method for accurate PTQ, which significantly improves the performance of low-bit quantized vision transformers. Specifically, we define the multi-head self-attention and multi-layer perceptron modules, together with their shortcuts, as the finest reconstruction units. After reconstructing these two fine-grained units, we combine them to form coarser blocks and reconstruct them at a coarser granularity level. We iteratively perform this combination and reconstruction process, achieving progressive fine-to-coarse reconstruction. Additionally, we introduce a Progressive Optimization Strategy (POS) for PFCR to alleviate the difficulty of training, further enhancing model performance. Experimental results on the ImageNet dataset demonstrate that our proposed method achieves the best Top-1 accuracy among state-of-the-art methods, attaining 75.61% for 3-bit quantized ViT-B in PTQ. Moreover, quantization results on the COCO dataset demonstrate the effectiveness and generalization of our method on other computer vision tasks such as object detection and instance segmentation.
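The fine-to-coarse schedule described in the abstract (reconstruct the finest units, then merge adjacent units into coarser blocks and reconstruct again, repeating until the whole model is covered) can be sketched as follows. This is a minimal illustration of the grouping logic only; the unit names and the `fine_to_coarse_schedule` helper are hypothetical, not the paper's actual implementation or API.

```python
def fine_to_coarse_schedule(units):
    """Yield reconstruction rounds, coarsening the granularity each time.

    `units` is an ordered list of the finest reconstruction units
    (e.g. the MHSA and MLP modules, with shortcuts, of each block).
    Each round merges adjacent groups pairwise into coarser blocks,
    until a single group spanning the whole model remains.
    """
    groups = [[u] for u in units]
    rounds = []
    while True:
        rounds.append([tuple(g) for g in groups])  # reconstruct these groups
        if len(groups) == 1:
            break
        # combine adjacent groups to form the next, coarser granularity
        groups = [sum(groups[i:i + 2], []) for i in range(0, len(groups), 2)]
    return rounds

# A 2-block ViT: round 0 reconstructs four fine units, round 1 two
# transformer blocks, round 2 the combined whole.
schedule = fine_to_coarse_schedule(["MHSA1", "MLP1", "MHSA2", "MLP2"])
```

In an actual PTQ pipeline, each round would minimize the reconstruction error between the quantized and full-precision outputs of every group before moving to the next, coarser round.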

Authors

  • Rui Ding
    National Laboratory of Solid State Microstructures, College of Engineering and Applied Sciences, Nanjing University, 22 Hankou Road, Nanjing 210093, P. R. China.
  • Liang Yong
    School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 401331, China. Electronic address: 202212131152@stu.cqu.edu.cn.
  • Sihuan Zhao
    School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 401331, China. Electronic address: 17762330892@163.com.
  • Jing Nie
    National Clinical Research Center for Kidney Disease, State Key Laboratory for Organ Failure Research, Division of Nephrology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, Guangdong Province, China.
  • Lihui Chen
    School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 401331, China. Electronic address: lihuichen@126.com.
  • Haijun Liu
    School of Electronic Engineering, University of Electronic Science and Technology of China, China.
  • Xichuan Zhou