Robust recursive absolute value inequalities discriminant analysis with sparseness.
Journal:
Neural networks : the official journal of the International Neural Network Society
Published Date:
Sep 1, 2017
Abstract
In this paper, we propose a novel absolute value inequalities discriminant analysis (AVIDA) criterion for supervised dimensionality reduction. Compared with conventional linear discriminant analysis (LDA), the main characteristics of our AVIDA are robustness and sparseness. By reformulating the generalized eigenvalue problem in LDA as a related SVM-type "concave-convex" problem based on an absolute value inequalities loss, our AVIDA is not only more robust to outliers and noise but also avoids the small sample size (SSS) problem. Moreover, an additional L1-norm regularization term in the objective ensures that sparse discriminant vectors are obtained. A successive linearization algorithm, in which a series of linear programs is solved, is employed to solve the proposed optimization problem. The superiority of our AVIDA is supported by experimental results on artificial examples as well as benchmark image databases.
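To make the "series of linear programs" idea concrete, the sketch below illustrates a generic successive-linearization (concave-convex) loop on a hypothetical L1-type discriminant objective: it minimizes the L1 within-class scatter plus an L1 sparsity term, minus an L1 between-class scatter term that is linearized at each iterate so every subproblem is a linear program. The abstract does not give AVIDA's exact formulation, so this objective, the box normalization, and the parameters gamma and rho are illustrative assumptions rather than the authors' method.

```python
import numpy as np
from scipy.optimize import linprog

def l1_discriminant_direction(X, y, gamma=1.0, rho=0.1, max_iter=20, tol=1e-6):
    """Hypothetical sketch (not the paper's exact AVIDA formulation):
    minimize  sum_i |w^T (x_i - mu_{y_i})|          (L1 within-class scatter, convex)
            - gamma * sum_c n_c |w^T (mu_c - mu)|   (L1 between-class scatter, concave part)
            + rho * ||w||_1                          (sparsity-inducing term)
    subject to ||w||_inf <= 1, solved by successive linearization:
    the concave part is linearized at the current iterate and each
    resulting convex piecewise-linear subproblem is solved as an LP."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    classes = np.unique(y)
    mu = X.mean(axis=0)
    # Within-class deviations (rows a_i = x_i - mu_{y_i}) and weighted class-mean offsets.
    A = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    B = np.vstack([(y == c).sum() * (X[y == c].mean(axis=0) - mu) for c in classes])

    w = np.ones(d) / d  # simple initial direction
    I_n, I_d = np.eye(n), np.eye(d)
    Z_nd, Z_dn = np.zeros((n, d)), np.zeros((d, n))
    for _ in range(max_iter):
        # Linearize the concave term -gamma * sum_c |B_c w| at the current iterate:
        # its subgradient contribution to the LP cost on w is -gamma * B^T sign(B w).
        grad = gamma * B.T @ np.sign(B @ w)
        # LP variables: [w (d), t (n) with t_i >= |a_i^T w|, s (d) with s_j >= |w_j|].
        c_vec = np.concatenate([-grad, np.ones(n), rho * np.ones(d)])
        A_ub = np.vstack([
            np.hstack([ A, -I_n, Z_nd]),   #  a_i^T w - t_i <= 0
            np.hstack([-A, -I_n, Z_nd]),   # -a_i^T w - t_i <= 0
            np.hstack([ I_d, Z_dn, -I_d]),  #  w_j - s_j <= 0
            np.hstack([-I_d, Z_dn, -I_d]),  # -w_j - s_j <= 0
        ])
        b_ub = np.zeros(2 * n + 2 * d)
        bounds = [(-1, 1)] * d + [(0, None)] * n + [(0, None)] * d
        res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w_new = res.x[:d]
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Because both the loss and the regularizer are piecewise linear, each subproblem is an ordinary LP after introducing the slack variables t and s, which is what makes a successive-linearization scheme of this kind cheap per iteration; further discriminant directions could be obtained by repeating the procedure on deflated data, in the recursive spirit suggested by the title.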