Multimodal learning-based speech enhancement and separation: recent innovations, new horizons, challenges and real-world applications.

Journal: Computers in Biology and Medicine
PMID:

Abstract

With the increasing global prevalence of disabling hearing loss, speech enhancement technologies have become crucial for overcoming communication barriers and improving the quality of life for those affected. Multimodal learning has emerged as a powerful approach for speech enhancement and separation, integrating information from multiple sensory modalities such as audio signals, visual cues, and textual data. Despite substantial progress, challenges remain in synchronizing modalities, ensuring model robustness, and achieving the scalability required for real-time applications. This paper provides a comprehensive review of the latest advances in multimodal learning for speech enhancement and separation. We underscore the limitations of unimodal methods in noisy and dynamic real-world environments and demonstrate how multimodal systems leverage complementary information from lip movements, text transcripts, and even brain signals to enhance performance. Key deep learning architectures are covered, including Transformers, Convolutional Neural Networks (CNNs), Graph Neural Networks (GNNs), and generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Diffusion Models. Fusion strategies, including early and late fusion and attention mechanisms, are examined as means of aligning and integrating multimodal inputs effectively. Furthermore, the paper surveys important real-world applications, such as automatic driver monitoring in autonomous vehicles, emotion recognition for mental health monitoring, augmented reality in interactive retail, smart surveillance for public safety, remote healthcare and telemedicine, and hearing assistive devices. Finally, advanced procedures, comparative analyses, open challenges, and future prospects are discussed to guide research in multimodal learning for speech enhancement and separation, offering a roadmap for new horizons in this transformative field.
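To make the fusion strategies mentioned in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it contrasts early fusion (frame-wise concatenation of audio and visual features) with attention-based fusion for mask-based speech enhancement. All module names, layer sizes, and feature dimensions (e.g. 257 STFT bins, 128-dimensional lip-region embeddings) are illustrative assumptions.

import torch
import torch.nn as nn


class EarlyFusionEnhancer(nn.Module):
    """Early fusion: concatenate per-frame audio and visual features,
    then predict a time-frequency mask for the noisy spectrogram."""

    def __init__(self, n_freq=257, visual_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq + visual_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, visual_feat):
        # noisy_mag: (batch, frames, n_freq); visual_feat: (batch, frames, visual_dim)
        fused, _ = self.rnn(torch.cat([noisy_mag, visual_feat], dim=-1))
        mask = self.mask_head(fused)
        return mask * noisy_mag  # enhanced magnitude spectrogram


class AttentionFusionEnhancer(nn.Module):
    """Attention-based fusion: audio frames attend over visual frames, so the
    model can re-weight or down-weight the visual stream when it is unreliable."""

    def __init__(self, n_freq=257, visual_dim=128, d_model=256, n_heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(n_freq, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(d_model, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, visual_feat):
        q = self.audio_proj(noisy_mag)       # queries from the audio stream
        kv = self.visual_proj(visual_feat)   # keys/values from the visual stream
        attended, _ = self.cross_attn(q, kv, kv)
        mask = self.mask_head(q + attended)  # residual connection before masking
        return mask * noisy_mag


if __name__ == "__main__":
    batch, frames = 2, 100
    noisy_mag = torch.rand(batch, frames, 257)     # e.g. STFT magnitudes
    visual_feat = torch.rand(batch, frames, 128)   # e.g. lip-region embeddings
    print(EarlyFusionEnhancer()(noisy_mag, visual_feat).shape)      # (2, 100, 257)
    print(AttentionFusionEnhancer()(noisy_mag, visual_feat).shape)  # (2, 100, 257)

Early fusion is the simpler choice when the two streams are well synchronized; the cross-attention variant illustrates how attention mechanisms can handle alignment and unreliable modalities, which is one of the integration challenges the review highlights.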

Authors

  • Rizwan Ullah
    Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand.
  • Shaohui Zhang
    School of Mechanical Engineering, Dongguan University of Technology, Dongguan 523808, China.
  • Muhammad Asif
    Department of Pharmacology, Faculty of Pharmacy, The Islamia University of Bahawalpur, Bahawalpur, Punjab, Pakistan.
  • Fazale Wahab
    Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, Anhui, PR China.