Dictionary trained attention constrained low rank and sparse autoencoder for hyperspectral anomaly detection.

Journal: Neural Networks: the official journal of the International Neural Network Society
Published Date:

Abstract

Dictionary representations and deep-learning autoencoder (AE) models have proven effective in hyperspectral anomaly detection. Dictionary representations offer self-explanation but struggle with complex scenarios. Conversely, autoencoders can capture details in complex scenes but lack self-explanation. Complex scenarios often involve extensive spatial information, making its utilization crucial in hyperspectral anomaly detection. To effectively combine the advantages of both methods and to address the insufficient use of spatial information, we propose an attention-constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. The model includes two autoencoders: an attention-constrained low-rank autoencoder (AClrAE), trained with a background dictionary and incorporating a Global Self-Attention Module (GAM) to focus on global spatial information, resulting in improved background reconstruction; and an attention-constrained sparse autoencoder (ACsAE), trained with an anomaly dictionary and incorporating a Local Self-Attention Module (LAM) to focus on local spatial information, resulting in enhanced anomaly reconstruction. Finally, a nonlinear fusion scheme merges the detection results from the two branches. Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.
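The detection pipeline described above (a background-dictionary branch whose reconstruction error flags anomalies, a second branch producing a complementary anomaly score, and a nonlinear fusion of the two maps) can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in: the least-squares reconstruction replaces the AClrAE, the distance-to-mean statistic replaces the ACsAE score, and the `1 - exp(-s1 * s2)` fusion is an assumed form, since the abstract does not specify the actual fusion formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed): 100 pixels with 20 spectral bands. Background pixels
# lie in a low-rank subspace spanned by 3 spectral signatures.
D_bg = rng.normal(size=(20, 3))                    # background dictionary (stand-in)
X = D_bg @ rng.normal(size=(3, 100))               # background pixels
X[:, 95:] += rng.normal(scale=3.0, size=(20, 5))   # last 5 pixels are anomalies

# Branch 1 (stand-in for AClrAE): reconstruct each pixel from the background
# dictionary via least squares; anomalies reconstruct poorly, so the residual
# norm serves as a per-pixel anomaly score.
coef, *_ = np.linalg.lstsq(D_bg, X, rcond=None)
s_bg = np.linalg.norm(X - D_bg @ coef, axis=0)

# Branch 2 (stand-in for ACsAE): a simple complementary score, here the
# spectral distance of each pixel from the scene mean.
mu = X.mean(axis=1, keepdims=True)
s_an = np.linalg.norm(X - mu, axis=0)

# Nonlinear fusion of the two score maps (assumed form, for illustration):
fused = 1.0 - np.exp(-s_bg * s_an)

print("background scores:", fused[:5].round(4))
print("anomaly scores:   ", fused[95:].round(4))
```

In this toy setup the background residual `s_bg` is near zero for in-subspace pixels, so the fused score stays near 0 for background and near 1 for the injected anomalies; the real model learns both branches rather than computing them in closed form.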

Authors

  • Xing Hu
    State Key Laboratory of Coal Combustion, Huazhong University of Science and Technology, 430074 Wuhan, Hubei, China.
  • Zhixuan Li
  • Lingkun Luo
School of Aeronautics and Astronautics, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China.
  • Hamid Reza Karimi
    Department of Engineering, Faculty of Technology and Science, University of Agder, N-4898 Grimstad, Norway.
  • Dawei Zhang