Probabilistic Safety Regions via Finite Families of Adjustable Classifiers.

Journal: IEEE Transactions on Neural Networks and Learning Systems

Abstract

Supervised classification recognizes patterns in data in order to separate classes of behaviors. Canonical solutions contain misclassification errors that are intrinsic to the approximate, numerical nature of machine learning (ML). The data analyst may minimize the classification error on one class at the expense of increasing the error on the others, and error control in this design phase is often performed heuristically. This article develops theoretical foundations capable of providing probabilistic certifications for the obtained classifiers. From this perspective, we introduce the concept of a probabilistic safety region, a subset of the input space in which the number of misclassified instances is probabilistically controlled. The notion of adjustable classifiers, a special class of classifiers that share the property of being controllable by a scalar parameter, is then exploited to link the tuning of ML models with error control. Several tests and examples corroborate the approach: they are carried out on synthetic data, in order to highlight all the steps involved, as well as on notable benchmark datasets and a smart mobility application.
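The core idea of the abstract can be illustrated with a minimal sketch. This is not the authors' exact construction: it assumes a fixed linear score in place of a trained classifier, and it calibrates the scalar parameter (here called `rho`) on held-out data so that the empirical rate of misclassified "unsafe" points inside the predicted-safe region stays below a target `epsilon`. All names and the calibration recipe are illustrative assumptions.

```python
# Hedged sketch of a probabilistic safety region via a scalar-adjustable
# classifier. Illustrative only; not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: label 1 = "unsafe", label 0 = "safe".
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

# A fixed linear score f(x); any trained classifier score would do here.
w = np.array([1.0, 1.0])
score = X @ w  # larger score => more likely unsafe

# Adjustable classifier: predict "safe" iff score < rho.
# Calibrate rho on a held-out split so that, among points predicted safe,
# the fraction of truly unsafe ones is at most epsilon.
X_cal, y_cal, s_cal = X[:1000], y[:1000], score[:1000]
epsilon = 0.05

def unsafe_rate_in_region(rho):
    """Empirical error rate inside the candidate safety region {f(x) < rho}."""
    inside = s_cal < rho
    if not inside.any():
        return 0.0
    return y_cal[inside].mean()

# Sweep candidate thresholds from permissive to conservative and keep the
# largest rho meeting the error target (i.e., the largest safe region).
candidates = np.sort(s_cal)[::-1]
rho = next((r for r in candidates if unsafe_rate_in_region(r) <= epsilon),
           candidates[-1])

print("chosen rho:", rho)
print("calibration unsafe rate:", unsafe_rate_in_region(rho))
```

The scalar parameter trades off region size against error: lowering `rho` shrinks the region and reduces misclassified unsafe points inside it, which is the tuning-to-error-control link the abstract refers to. A probabilistic guarantee on unseen data would additionally require a distributional argument (e.g., a scenario- or conformal-style bound) over the calibration sample, which this sketch does not provide.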

Authors

  • Alberto Carlevaro
    National Research Council of Italy (CNR), Institute of Electronics, Information Engineering and Telecommunications (IEIIT), Italy.
  • Teodoro Alamo
  • Fabrizio Dabbene
  • Maurizio Mongelli
    Consiglio Nazionale delle Ricerche (CNR), Institute of Electronics, Information Engineering and Telecommunications (IEIIT), 16149 Genoa, Italy.
