Environmental sound classification using temporal-frequency attention based convolutional neural network.

Journal: Scientific Reports
Published Date:

Abstract

Environmental sound classification is an important problem in the audio recognition field. Compared with structured sounds such as speech and music, the time-frequency structure of environmental sounds is more complicated. To learn time and frequency features from Log-Mel spectrograms more effectively, a temporal-frequency attention based convolutional neural network model (TFCNN) is proposed in this paper. First, a motivating experiment is designed to verify the effect of specific frequency bands in the spectrogram on model classification. Second, two new attention mechanisms, a temporal attention mechanism and a frequency attention mechanism, are proposed. These mechanisms focus on key frequency bands and semantically related time frames in the spectrogram, reducing the influence of background noise and irrelevant frequency bands. The two mechanisms are then combined so that the feature information they capture is complementary, allowing the critical time-frequency features to be extracted more accurately and greatly improving the representation ability of the network. Finally, experiments on two public datasets, UrbanSound8K and ESC-50, demonstrate the effectiveness of the proposed method.
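To make the described mechanisms concrete, the following is a minimal sketch of temporal and frequency attention applied to a Log-Mel spectrogram feature map. It assumes a PyTorch-style implementation; the layer choices, kernel sizes, and the multiplicative fusion of the two attention maps are illustrative assumptions, not the authors' exact TFCNN design.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Weights each time frame, attenuating frames dominated by background noise."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time)
        pooled = x.mean(dim=2)                       # average over frequency -> (B, C, T)
        weights = torch.sigmoid(self.conv(pooled))   # per-frame weights -> (B, 1, T)
        return x * weights.unsqueeze(2)              # broadcast over the frequency axis

class FrequencyAttention(nn.Module):
    """Weights each frequency band, emphasizing class-relevant bands."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.mean(dim=3)                       # average over time -> (B, C, F)
        weights = torch.sigmoid(self.conv(pooled))   # per-band weights -> (B, 1, F)
        return x * weights.unsqueeze(3)              # broadcast over the time axis

if __name__ == "__main__":
    # Fake batch of Log-Mel features: 4 clips, 16 channels, 64 mel bands, 128 frames.
    feats = torch.randn(4, 16, 64, 128)
    t_att, f_att = TemporalAttention(16), FrequencyAttention(16)
    out = f_att(t_att(feats))                        # apply both so their emphases complement each other
    print(out.shape)                                 # torch.Size([4, 16, 64, 128])

In this sketch the attended feature map keeps the input shape, so such attention blocks could be inserted between convolutional stages of a CNN without changing the rest of the architecture.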

Authors

  • Wenjie Mu
    College of Information Science and Engineering, Ocean University of China, Qingdao, China.
  • Bo Yin
College of Chemistry and Chemical Engineering, Lanzhou University, Lanzhou 730000, People's Republic of China.
  • Xianqing Huang
    Shenzhen Prevention and Treatment Center for Occupational Diseases, Shenzhen 518020, China.
  • Jiali Xu
    Pilot National Laboratory for Marine Science and Technology, Qingdao, China.
  • Zehua Du
    College of Information Science and Engineering, Ocean University of China, Qingdao, China.