No Fine-Tuning, No Cry: Robust SVD for Compressing Deep Networks.

Journal: Sensors (Basel, Switzerland)
Published Date:

Abstract

A common technique for compressing a neural network is to compute the k-rank ℓ2 approximation A_k of the matrix A ∈ R^(n×d) via SVD that corresponds to a fully connected layer (or embedding layer). Here, n is the number of input neurons in the layer, d is the number in the next one, and A_k is stored in O((n+d)k) memory instead of O(nd). Then, a fine-tuning step is used to improve this initial compression. However, end users may not have the required computation resources, time, or budget to run this fine-tuning stage. Furthermore, the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks using a similar initial compression time (to common techniques) but without the fine-tuning step. The main idea is replacing the k-rank ℓ2 approximation with ℓp, for p∈[1,2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm to compute it for any p≥1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage.
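The following is a minimal sketch of the baseline compression step the abstract describes: the k-rank ℓ2 approximation of a layer's weight matrix via truncated SVD, stored as two factors in O((n+d)k) memory. It illustrates the standard technique only, not the authors' ℓp algorithm; the matrix dimensions and function name are illustrative assumptions.

```python
import numpy as np

def svd_compress(A, k):
    """k-rank l2 approximation of an n x d weight matrix A via truncated SVD.
    Returns factors U_k (n x k) and V_k (k x d) whose product approximates A,
    so the layer is stored in O((n + d) k) memory instead of O(n d)."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :k] * S[:k]   # fold the top-k singular values into the left factor
    V_k = Vt[:k, :]
    return U_k, V_k

# Usage: replace a fully connected layer's weights with the two smaller factors.
A = np.random.randn(768, 3072)   # example size; real layers come from the trained network
U_k, V_k = svd_compress(A, k=64)
A_k = U_k @ V_k                  # rank-k approximation of A
print(np.linalg.norm(A - A_k))   # l2 (Frobenius) approximation error
```

The paper's contribution replaces this ℓ2 objective with an ℓp one, p∈[1,2], which is less sensitive to outliers but has no closed-form SVD-style solution and therefore requires the approximation algorithm the authors develop.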

Authors

  • Murad Tukan
    The Robotics and Big Data Lab, Department of Computer Science, University of Haifa, Haifa 3498838, Israel.
  • Alaa Maalouf
    The Robotics and Big Data Lab, Department of Computer Science, University of Haifa, Haifa 3498838, Israel.
  • Matan Weksler
    Samsung Research Israel, Herzliya 4659071, Israel.
  • Dan Feldman
    The Robotics and Big Data Lab, Department of Computer Science, University of Haifa, Haifa 3498838, Israel.