AIMC Topic: Speech Perception

Showing 71 to 80 of 114 articles

Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

Neural networks : the official journal of the International Neural Network Society
Automatic Speech Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With assimilation of emerging concepts like big data and Internet of Things (IoT) as extended elements of HC...

Connectionist perspectives on language learning, representation and processing.

Wiley interdisciplinary reviews. Cognitive science
The field of formal linguistics was founded on the premise that language is mentally represented as a deterministic symbolic grammar. While this approach has captured many important characteristics of the world's languages, it has also led to a tende...

Explicit estimation of magnitude and phase spectra in parallel for high-quality speech enhancement.

Neural networks : the official journal of the International Neural Network Society
Phase information has a significant impact on speech perceptual quality and intelligibility. However, existing speech enhancement methods encounter limitations in explicit phase estimation due to the non-structural nature and wrapping characteristics...
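The wrapping characteristic mentioned above can be made concrete with a short sketch (illustrative only, not the article's method): the FFT yields a magnitude spectrum directly, but the phase it returns is folded into (-π, π], producing discontinuities that make explicit phase estimation harder than magnitude estimation. Signal parameters here are arbitrary.

```python
import numpy as np

# Illustrative analysis frame: a 440 Hz tone at a 16 kHz sample rate.
fs = 16000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 440 * t)

# Magnitude and phase spectra of the windowed frame.
spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
magnitude = np.abs(spectrum)   # smooth, structured
phase = np.angle(spectrum)     # wrapped into (-pi, pi], discontinuous

# np.unwrap recovers a continuous phase track by adding multiples of
# 2*pi at the jumps -- the structure a phase estimator must model.
unwrapped = np.unwrap(phase)
```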

CNNs improve decoding of selective attention to speech in cochlear implant users.

Journal of neural engineering
Understanding speech in the presence of background noise such as other speech streams is a difficult problem for people with hearing impairment, and in particular for users of cochlear implants (CIs). To improve their listening experience, auditory...

fNIRS experimental study on the impact of AI-synthesized familiar voices on brain neural responses.

Scientific reports
With the advancement of artificial intelligence (AI) speech synthesis technology, its application in personalized voice services and its potential role in emotional comfort have become research focal points. This study aims to explore the impact of A...

[Neural network for auditory speech enhancement featuring feedback-driven attention and lateral inhibition].

Sheng wu yi xue gong cheng xue za zhi = Journal of biomedical engineering = Shengwu yixue gongchengxue zazhi
The processing mechanism of the human brain for speech information is a significant source of inspiration for the study of speech enhancement technology. Attention and lateral inhibition are key mechanisms in auditory information processing that can ...
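Lateral inhibition, one of the two mechanisms named above, can be sketched in a few lines (an illustrative toy, not the article's network): each channel's activation is suppressed by its neighbours via a centre-on/surround-off kernel, which sharpens peaks in the response. The activation values and kernel weights here are made up.

```python
import numpy as np

# Toy channel activations with a single peak in the middle.
act = np.array([0.1, 0.3, 1.0, 0.3, 0.1])

# Centre-on, surround-off kernel: each unit inhibits its two neighbours.
kernel = np.array([-0.3, 1.0, -0.3])

# Apply lateral inhibition and rectify negative responses to zero.
inhibited = np.maximum(np.convolve(act, kernel, mode="same"), 0.0)

# The peak channel now stands out more sharply against its neighbours.
```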

Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids.

Trends in hearing
Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highes...
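The level-dependent amplification described above can be sketched with a simple static compression curve (an illustrative baseline, not the article's Neural-WDRC method; threshold, ratio, and gain values are made-up examples): soft sounds receive full gain, while input above the compression threshold is amplified less, keeping output within the listener's usable range.

```python
def wdrc_gain_db(input_db, threshold_db=45.0, ratio=3.0, max_gain_db=30.0):
    """Level-dependent gain in dB: full gain below the compression
    threshold, compressed (output slope 1/ratio) above it, never negative.
    All parameter values are illustrative, not fitted to any audiogram."""
    if input_db <= threshold_db:
        return max_gain_db  # soft sounds: maximum amplification
    # Above threshold, output rises only 1 dB per `ratio` dB of input,
    # so gain shrinks as the input gets louder.
    gain = max_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)
    return max(gain, 0.0)

print(wdrc_gain_db(40.0))  # soft input: full 30 dB gain
print(wdrc_gain_db(80.0))  # loud input: reduced gain
```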

Speech intelligibility prediction based on a physiological model of the human ear and a hierarchical spiking neural network.

The Journal of the Acoustical Society of America
A speech intelligibility (SI) prediction model is proposed that includes an auditory preprocessing component based on the physiological anatomy and activity of the human ear, a hierarchical spiking neural network, and a decision back-end processing b...

Using deep learning to improve the intelligibility of a target speaker in noisy multi-talker environments for people with normal hearing and hearing loss.

The Journal of the Acoustical Society of America
Understanding speech in noisy environments is a challenging task, especially in communication situations with several competing speakers. Despite their ongoing improvement, assistive listening devices and speech processing approaches still do not per...

Recovering speech intelligibility with deep learning and multiple microphones in noisy-reverberant situations for people using cochlear implants.

The Journal of the Acoustical Society of America
For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This ...