Attribution-based interpretable classification neural network with global and local perspectives.

Journal: Scientific Reports

Abstract

Neural networks are difficult to apply in domains that demand high reliability because of their black-box nature, and interpreting them has become an active research focus. Most existing approaches prioritize predictive performance and recover interpretability only by explaining the model after training; such post-hoc explanations are typically local and offer limited detail. To achieve both strong interpretability and strong classification performance, we propose an attribution-based interpretable classification model for tabular data that maps the intermediate output to an interpretable data representation space and automatically selects the corresponding feature values for classification and interpretation. The model assigns an importance value to each input feature of an instance, providing local interpretability, while also reflecting the global importance of input features. Furthermore, we propose different training methods, and in searching for the best way to train the model we find a trade-off between classification performance and interpretability. Experimental results on eight open-source datasets show that our method is comparable to competitive black-box neural networks in classification accuracy. On two metrics for attribution methods, Reverse Precision and Generality, our model outperforms two popular post-hoc interpretability methods.
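The abstract describes the general pattern of an attribution-based classifier: an encoder produces an intermediate output, an attribution head maps it back to the input feature space as per-feature importance values, and classification uses the importance-weighted features. The sketch below is an illustrative assumption of that pattern only, not the authors' architecture; the layer sizes, the softmax weighting, and the class/parameter names (`AttributionTabularClassifier`, `hidden`) are hypothetical.

```python
# Minimal sketch of an attribution-style tabular classifier (assumed design,
# NOT the paper's exact model): the network outputs a per-feature importance
# vector for each instance (local interpretation) and uses it to weight the
# input features before classification; averaging importances over a dataset
# gives a global feature ranking.
import torch
import torch.nn as nn

class AttributionTabularClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        # Encoder maps the raw instance to an intermediate representation.
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # Attribution head maps the intermediate output back to the input
        # feature space: one non-negative importance value per input feature.
        self.attribution = nn.Sequential(
            nn.Linear(hidden, n_features), nn.Softmax(dim=-1),
        )
        # Classification uses the importance-weighted feature values.
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        importance = self.attribution(h)          # local, per-instance attribution
        logits = self.classifier(importance * x)  # classify from selected features
        return logits, importance

# Usage: local attributions per instance; global importance by averaging.
model = AttributionTabularClassifier(n_features=10, n_classes=2)
x = torch.randn(32, 10)
logits, importance = model(x)
global_importance = importance.mean(dim=0)  # dataset-level feature ranking
```

In this kind of design the explanation is produced by the model itself during the forward pass rather than by a separate post-hoc explainer, which is the property the abstract contrasts with local post-hoc methods.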

Authors

  • Zihao Shi
    Guangxi University, College of Computer, Electronics and Information, Nanning, 530004, China.
  • Zuqiang Meng
    School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.
  • Haiming Tuo
    Guangxi University, College of Computer, Electronics and Information, Nanning, 530004, China.
  • Chaohong Tan
    Guangxi Zhuang Autonomous Region Information Center, Guangxi Key Laboratory of Digital Infrastructure, Nanning, 530201, China.
