DMMAN: A two-stage audio-visual fusion framework for sound separation and event localization.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
Published Date:

Abstract

Videos are widely used as a medium through which people perceive changes in the physical world. However, what we hear is usually a mixture of sounds from multiple sounding objects, and we cannot distinguish or localize those sounds as separate entities within a video. To address this problem, this paper establishes the Deep Multi-Modal Attention Network (DMMAN), which models unconstrained video datasets to perform sound source separation and event localization. Built on a multi-modal separator and a multi-modal matching classifier module, the model addresses the sound separation and modal synchronization problems through a two-stage fusion of audio and visual features. To link the multi-modal separator and the multi-modal matching classifier modules, regression and classification losses are combined to form the loss function of the DMMAN. The spectral masks and attention synchronization scores estimated by the DMMAN generalize readily to sound source separation and event localization tasks. Quantitative experiments show that the DMMAN not only separates sound sources with high quality, as evaluated by the Signal-to-Distortion Ratio (SDR) and Signal-to-Interference Ratio (SIR) metrics, but also copes with mixed sound scenes whose components were never heard together during training. Meanwhile, the DMMAN achieves better classification accuracy than the contrasted baselines on event localization tasks.
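To make the two-stage design concrete, the PyTorch sketch below shows one plausible reading of the abstract: a mask-based separator fuses audio and visual features once to estimate a spectral mask, a matching classifier fuses them a second time to score audio-visual correspondence, and a regression loss on the masked spectrogram is combined with a classification loss. All layer sizes, module names (e.g. TwoStageAVSeparator, dmman_style_loss), and fusion details are illustrative assumptions, not the authors' published architecture.

    # Minimal PyTorch sketch of a two-stage audio-visual fusion model in the
    # spirit of the DMMAN abstract. Sizes, names, and the attention-based
    # fusion mechanics are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoStageAVSeparator(nn.Module):
        def __init__(self, n_freq=256, vis_dim=512, d=128, n_classes=10):
            super().__init__()
            # Audio encoder: per-frame spectrogram features.
            self.audio_enc = nn.Sequential(nn.Linear(n_freq, d), nn.ReLU())
            # Visual features assumed precomputed (e.g. pooled CNN output).
            self.vis_proj = nn.Linear(vis_dim, d)
            # Stage 1: fuse audio frames with the visual vector, predict a mask.
            self.attn1 = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
            self.mask_head = nn.Sequential(nn.Linear(d, n_freq), nn.Sigmoid())
            # Stage 2: fuse the separated audio with the visual features again
            # and score audio-visual synchronization / event class.
            self.attn2 = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
            self.cls_head = nn.Linear(d, n_classes)

        def forward(self, mix_spec, vis_feat):
            # mix_spec: (B, T, n_freq) magnitude spectrogram of the mixture
            # vis_feat: (B, vis_dim) visual feature of the candidate source
            a = self.audio_enc(mix_spec)                # (B, T, d)
            v = self.vis_proj(vis_feat).unsqueeze(1)    # (B, 1, d)
            # Stage-1 fusion: audio frames attend to the visual context.
            f1, _ = self.attn1(a, v, v)                 # (B, T, d)
            mask = self.mask_head(f1)                   # (B, T, n_freq) in [0, 1]
            sep_spec = mask * mix_spec                  # estimated source spectrogram
            # Stage-2 fusion: the visual query attends to the separated audio;
            # the attention weights act as synchronization scores.
            s = self.audio_enc(sep_spec)                # (B, T, d)
            f2, sync_attn = self.attn2(v, s, s)         # (B, 1, d), (B, 1, T)
            logits = self.cls_head(f2.squeeze(1))       # (B, n_classes)
            return sep_spec, logits, sync_attn

    def dmman_style_loss(sep_spec, target_spec, logits, labels, lam=1.0):
        # Regression loss on the masked spectrogram plus a classification loss
        # on the matching/event prediction, as the abstract describes.
        return F.l1_loss(sep_spec, target_spec) + lam * F.cross_entropy(logits, labels)

    if __name__ == "__main__":
        model = TwoStageAVSeparator()
        mix = torch.rand(2, 100, 256)     # batch of 2 mixtures, 100 frames
        vis = torch.randn(2, 512)         # pooled visual features
        sep, logits, attn = model(mix, vis)
        loss = dmman_style_loss(sep, torch.rand(2, 100, 256), logits,
                                torch.tensor([3, 7]))
        loss.backward()
        print(sep.shape, logits.shape, loss.item())

The separated spectrogram can then be inverted to a waveform (e.g. with the mixture phase and an inverse STFT), and the stage-2 attention scores can be read per video segment as event localization evidence; both steps are outside this sketch.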

Authors

  • Ruihan Hu
    Guangdong Key Laboratory of Modern Control Technology, Guangdong Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou 510070, Guangdong Province, China.
  • Songbing Zhou
    Guangdong Key Laboratory of Modern Control Technology, Guangdong Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou 510070, Guangdong Province, China. Electronic address: sb.zhou@giim.ac.cn.
  • Zhi Ri Tang
    School of Physics and Technology, Wuhan University, Wuhan, China. Electronic address: Gerin.Tang@my.cityu.edu.hk.
  • Sheng Chang
  • Qijun Huang
  • Yisen Liu
    Guangdong Key Laboratory of Modern Control Technology, Guangdong Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou 510070, Guangdong Province, China.
  • Wei Han
    Department of Pharmacology, The Key Laboratory of Neural and Vascular Biology, The Key Laboratory of New Drug Pharmacology and Toxicology, Ministry of Education, Collaborative Innovation Center of Hebei Province for Mechanism, Diagnosis and Treatment of Neuropsychiatric Diseases, Hebei Medical University, Shijiazhuang, Hebei, China.
  • Edmond Q Wu
    Department of Automation, Shanghai Jiao Tong University, Shanghai, China. Electronic address: Edmondqwu@gmail.com.