Compression-enabled interpretability of voxelwise encoding models.

Journal: PLOS Computational Biology
PMID:

Abstract

Voxelwise encoding models based on convolutional neural networks (CNNs) are widely used as predictive models of brain activity evoked by natural movies. Despite their superior predictive performance, the large number of parameters in CNN-based models has made them difficult to interpret. Here, we investigate whether model compression can build more interpretable and more stable CNN-based voxelwise models while maintaining accuracy. We used multiple compression techniques to prune less important CNN filters and connections, a receptive field compression method to select receptive fields with optimal center and size, and principal component analysis to reduce dimensionality. We demonstrate that model compression improves the accuracy of identifying visual stimuli in a held-out test set. Additionally, compressed models offer a more stable interpretation of voxelwise pattern selectivity than uncompressed models. Finally, the receptive field-compressed models reveal that the optimal model-based population receptive fields become larger and more centralized along the ventral visual pathway. Overall, our findings support using model compression to build more interpretable voxelwise models.
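One of the compression steps named above is principal component analysis for dimensionality reduction of CNN features. The sketch below illustrates that step in generic terms; it is not the authors' implementation, and the matrix shapes, variable names, and `pca_compress` function are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of PCA-based feature compression for an encoding
# model. Shapes are hypothetical: 200 stimuli (e.g., movie frames) by
# 512 CNN filter activations.
rng = np.random.default_rng(0)
features = rng.standard_normal((200, 512))

def pca_compress(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                 # top PCs, (k x d)
    scores = Xc @ components.T                     # compressed features
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return scores, components, explained

scores, components, explained = pca_compress(features, n_components=50)
print(scores.shape)  # compressed design matrix fed to the voxelwise model
```

A compressed design matrix like `scores` would then replace the raw CNN activations as regressors, shrinking the number of fitted weights per voxel.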

Authors

  • Fatemeh Kamali
    Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran.
  • Amir Abolfazl Suratgar
    Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran.
  • Mohammadbagher Menhaj
    Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran.
  • Reza Abbasi-Asl
    Department of Neurology, Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, California, United States of America.