NLS: An accurate and yet easy-to-interpret prediction method.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

In recent years, the predictive power of supervised machine learning (ML) has advanced impressively, reaching state-of-the-art and even super-human performance in some applications. However, the adoption of ML models in real-life applications has been much slower than one would expect. One downside of ML-based technologies is the lack of user trust in the resulting model, which stems from the black-box nature of these models. To foster the adoption of ML models, the generated predictions should be easy to interpret while maintaining high accuracy. In this context, we develop the Neural Local Smoother (NLS), a neural network architecture that yields accurate predictions with easy-to-obtain explanations. The key idea of NLS is to add a smooth local linear layer to a standard network. We present experiments indicating that NLS achieves predictive power comparable to state-of-the-art machine learning models while being easier to interpret.
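The abstract's key idea can be illustrated with a minimal sketch. Assuming the local linear layer takes the form f(x) = β₀(x) + β(x)·x, where a network maps the input x to an intercept β₀(x) and slope vector β(x) (the slopes then act as locally interpretable feature effects), a toy version with random, untrained weights looks like this. All names and the one-hidden-layer architecture here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

# Hedged sketch of the NLS idea: a network maps x to local linear
# coefficients (beta_0(x), beta(x)), and the prediction is
#   f(x) = beta_0(x) + beta(x) . x
# The network below is a toy one-hidden-layer net with random weights,
# purely illustrative -- the real NLS trains these weights end-to-end.

rng = np.random.default_rng(0)
d, h = 3, 8  # input dimension, hidden units (arbitrary choices)

W1 = rng.normal(size=(h, d))
b1 = rng.normal(size=h)
W2 = rng.normal(size=(d + 1, h))  # outputs beta_0 plus d slopes
b2 = rng.normal(size=d + 1)

def coefficients(x):
    """Network output: intercept beta_0(x) and slope vector beta(x)."""
    hidden = np.tanh(W1 @ x + b1)
    theta = W2 @ hidden + b2
    return theta[0], theta[1:]

def nls_predict(x):
    """Smooth local linear prediction f(x) = beta_0(x) + beta(x) . x."""
    beta0, beta = coefficients(x)
    return beta0 + beta @ x

x = rng.normal(size=d)
beta0, beta = coefficients(x)
print(nls_predict(x))  # around x, the model is the linear fit beta0 + beta . x
```

Because the coefficients vary smoothly with x, the slopes β(x) can be read off at any input as a local linear explanation of the prediction, which is the interpretability mechanism the abstract alludes to.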

Authors

  • Victor Coscrato
    University College Cork, Cork, Ireland. Electronic address: vcoscrato@gmail.com.
  • Marco H A Inácio
    Budapest University of Technology and Economics, Budapest, Hungary. Electronic address: m@marcoinacio.com.
  • Tiago Botari
    University of São Paulo, São Carlos - SP, Brazil. Electronic address: tiagobotari@gmail.com.
  • Rafael Izbicki
Centro de Ciências Exatas e de Tecnologia, Universidade Federal de São Carlos, São Carlos, Brazil.