A spatial-spectral fusion convolutional transformer network with contextual multi-head self-attention for hyperspectral image classification.

Journal: Neural Networks: the official journal of the International Neural Network Society
PMID:

Abstract

Convolutional neural networks (CNNs) effectively extract local features, while Vision Transformers excel at capturing global features. Combining the two to enhance hyperspectral image (HSI) classification performance has garnered significant attention. However, most existing fusion methods introduce inductive biases into the Transformer by directly chaining convolutional modules and Transformer encoders for feature extraction, and rarely strengthen the Transformer's ability to extract local contextual information through convolutional embedding. In this paper, we propose a spatial-spectral fusion convolutional Transformer method (SSFCT) with contextual multi-head self-attention (CMHSA) for HSI classification. Specifically, we first design a local feature aggregation (LFA) module that uses a three-branch convolutional architecture with attention layers to extract and enhance locally fused spatial-spectral features. We then build a novel CMHSA that captures interactions among local contextual features by integrating static local context from 3D convolution with dynamic local context from the attention mechanism, and embed it in a dual-branch spatial-spectral convolutional Transformer (DSSCT) module to simultaneously capture global-local associations in both the spatial and spectral domains. Finally, an attention feature fusion (AFF) module is proposed to obtain comprehensive global-local spatial-spectral features. Extensive experiments on five HSI datasets (Indian Pines, Salinas Valley, Houston2013, Botswana, and Yellow River Delta) show that SSFCT outperforms state-of-the-art methods, achieving overall accuracies of 98.03%, 99.68%, 98.65%, 97.97%, and 89.43%, respectively, demonstrating its effectiveness for HSI classification.
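The core idea in CMHSA, as the abstract describes it, is to pair static local context (from a 3D convolution) with dynamic context (from self-attention). The PyTorch sketch below illustrates one plausible reading of that pairing; the class name ContextualMHSA, the depthwise-convolution choice, the head count, and the residual fusion at the end are all illustrative assumptions, not the authors' published design.

    import torch
    import torch.nn as nn

    class ContextualMHSA(nn.Module):
        """Sketch of contextual multi-head self-attention (assumed design):
        a depthwise 3D convolution supplies *static* local spatial-spectral
        context, and scaled dot-product attention over those conv-enriched
        keys supplies the *dynamic* context."""

        def __init__(self, dim, heads=4, kernel=3):
            super().__init__()
            assert dim % heads == 0, "dim must be divisible by heads"
            self.heads = heads
            self.scale = (dim // heads) ** -0.5
            # depthwise 3D convolution: static local spatial-spectral context
            self.static_ctx = nn.Conv3d(dim, dim, kernel_size=kernel,
                                        padding=kernel // 2, groups=dim)
            self.to_q = nn.Linear(dim, dim, bias=False)
            self.to_v = nn.Linear(dim, dim, bias=False)
            self.proj = nn.Linear(dim, dim)

        def forward(self, x):
            # x: (B, C, D, H, W) feature cube; D = spectral depth, H/W = spatial
            b, c, d, h, w = x.shape
            k_static = self.static_ctx(x)               # static context (B, C, D, H, W)
            tokens = x.flatten(2).transpose(1, 2)       # (B, N, C), N = D*H*W
            keys = k_static.flatten(2).transpose(1, 2)  # conv-enriched keys
            q, v = self.to_q(tokens), self.to_v(tokens)

            def split(t):  # (B, N, C) -> (B, heads, N, C/heads)
                return t.reshape(b, -1, self.heads, c // self.heads).transpose(1, 2)

            q, k, v = split(q), split(keys), split(v)
            # dynamic context: attention computed over the statically enriched keys
            attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, -1, c)
            out = self.proj(out).transpose(1, 2).reshape(b, c, d, h, w)
            return out + k_static                       # fuse static and dynamic context

    if __name__ == "__main__":
        cube = torch.randn(2, 32, 8, 9, 9)  # (batch, channels, bands, height, width)
        print(ContextualMHSA(dim=32)(cube).shape)  # torch.Size([2, 32, 8, 9, 9])

A quick shape check as in the snippet above: a random spatial-spectral cube passes through the module and returns a tensor of the same shape, so the block can drop into either branch of a dual-branch encoder.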

Authors

  • Wuli Wang
    College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao, 266580, China. Electronic address: wangwuli@upc.edu.cn.
  • Qi Sun
    Department of Orthopedics, Shanghai Tenth People's Hospital, Tongji University, School of Medicine, Shanghai, 200072, P.R. China.
  • Li Zhang
    Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, 430070, China.
  • Peng Ren
  • Jianbu Wang
    Lab of Marine Physics and Remote Sensing, First Institute of Oceanography, Ministry of Natural Resources, Qingdao, 266061, China. Electronic address: wangjianbu@fio.org.cn.
  • Guangbo Ren
    Lab of Marine Physics and Remote Sensing, First Institute of Oceanography, Ministry of Natural Resources, Qingdao, 266061, China; Qingdao Key Laboratory of Spatial Information Dynamic Perception and Smart Cartography in the Yellow River Basin (Shandong), Qingdao, 266061, China. Electronic address: renguangbo@fio.org.cn.
  • Baodi Liu