Robust recursive absolute value inequalities discriminant analysis with sparseness.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
Published Date:

Abstract

In this paper, we propose a novel absolute value inequalities discriminant analysis (AVIDA) criterion for supervised dimensionality reduction. Compared with conventional linear discriminant analysis (LDA), the main characteristics of AVIDA are robustness and sparseness. By reformulating the generalized eigenvalue problem in LDA as a related SVM-type "concave-convex" problem based on an absolute value inequalities loss, AVIDA is not only more robust to outliers and noise but also avoids the small sample size (SSS) problem. Moreover, the additional L1-norm regularization term in the objective ensures that sparse discriminant vectors are obtained. A successive linear algorithm, in which a series of linear programs is solved, is employed to solve the proposed optimization problem. The superiority of AVIDA is supported by experimental results on artificial examples as well as benchmark image databases.
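The abstract does not give the exact AVIDA formulation, but the solution strategy it names, successively linearizing a "concave-convex" absolute-value objective with an L1-norm regularizer and solving a sequence of linear programs, can be illustrated with a minimal sketch. The objective below is a hypothetical two-class stand-in, not the authors' criterion: minimize sum_i |w^T z_i| + lam*||w||_1 - C*|w^T d| subject to ||w||_inf <= 1, where z_i are within-class centered samples and d is the difference of class means; lam, C, and the box constraint are assumptions added to keep each linear program bounded.

# Minimal sketch (assumed formulation, not the paper's implementation) of a
# successive linearization loop: at each iteration the concave term -C*|w^T d|
# is replaced by its linearization at the current iterate, and the resulting
# piecewise-linear convex subproblem is solved as a linear program.
import numpy as np
from scipy.optimize import linprog


def avida_like_direction(X, y, lam=0.1, C=1.0, max_iter=20, tol=1e-6):
    """Return a sparse discriminant direction via a sequence of linear programs."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    assert classes.size == 2, "sketch handles the two-class case only"

    mu0 = X[y == classes[0]].mean(axis=0)
    mu1 = X[y == classes[1]].mean(axis=0)
    d = mu0 - mu1                                    # between-class direction
    Z = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    n, p = Z.shape

    # LP variables: [w (p), t (n), u (p)] with t_i >= |w^T z_i| and u_j >= |w_j|.
    I_n, I_p = np.eye(n), np.eye(p)
    A_ub = np.vstack([
        np.hstack([ Z, -I_n, np.zeros((n, p))]),     #  Z w - t <= 0
        np.hstack([-Z, -I_n, np.zeros((n, p))]),     # -Z w - t <= 0
        np.hstack([ I_p, np.zeros((p, n)), -I_p]),   #  w - u <= 0
        np.hstack([-I_p, np.zeros((p, n)), -I_p]),   # -w - u <= 0
    ])
    b_ub = np.zeros(2 * n + 2 * p)
    bounds = [(-1, 1)] * p + [(0, None)] * n + [(0, None)] * p

    w = d / (np.abs(d).max() + 1e-12)                # simple initial guess
    prev_obj = np.inf
    for _ in range(max_iter):
        s = 1.0 if w @ d >= 0 else -1.0              # linearize -C*|w^T d|
        c = np.concatenate([-C * s * d, np.ones(n), lam * np.ones(p)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w = res.x[:p]
        obj = np.abs(Z @ w).sum() + lam * np.abs(w).sum() - C * abs(w @ d)
        if prev_obj - obj < tol:                     # surrogate majorizes the
            break                                    # objective, so it decreases
        prev_obj = obj
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 5)) + [2, 0, 0, 0, 0],
                   rng.normal(0, 1, (50, 5)) - [2, 0, 0, 0, 0]])
    y = np.array([0] * 50 + [1] * 50)
    print(avida_like_direction(X, y))                # expect weight on feature 0

Because every nonsmooth term is an absolute value, each subproblem reduces to a linear program after introducing the slack variables t and u, which mirrors the "series of linear programs" described in the abstract; the recursive extraction of multiple discriminant vectors mentioned in the title is omitted here.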

Authors

  • Chun-Na Li
    Zhijiang College, Zhejiang University of Technology, Hangzhou, 310024, PR China.
  • Zeng-Rong Zheng
    Zhijiang College, Zhejiang University of Technology, Hangzhou, 310024, PR China.
  • Ming-Zeng Liu
    School of Mathematics and Physics Science, Dalian University of Technology at Panjin, Dalian, 124221, PR China.
  • Yuan-Hai Shao
    School of Economics and Management, Hainan University, Haikou, 570228, PR China. Electronic address: shaoyuanhai21@163.com.
  • Wei-Jie Chen
    Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA.