AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Speech


An Investigation of Speech Features, Plant System Alarms, and Operator-System Interaction for the Classification of Operator Cognitive Workload During Dynamic Work.

Human factors
OBJECTIVE: To investigate speech features, human-machine alarms, and operator-system interaction for the estimation of cognitive workload in full-scale realistic simulated scenarios.

Fusing Visual Attention CNN and Bag of Visual Words for Cross-Corpus Speech Emotion Recognition.

Sensors (Basel, Switzerland)
Speech emotion recognition (SER) classifies emotions using low-level features or a spectrogram of an utterance. When SER methods are trained and tested on different datasets, they show reduced performance. Cross-corpus SER research identif...

Inner speech.

Wiley interdisciplinary reviews. Cognitive science
Inner speech travels under many aliases: the inner voice, verbal thought, thinking in words, internal verbalization, "talking in your head," the "little voice in the head," and so on. It is both a familiar element of first-person experience and a psy...

Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features.

Sensors (Basel, Switzerland)
Artificial intelligence (AI) and machine learning (ML) are employed to make systems smarter. Today, speech emotion recognition (SER) systems estimate the emotional state of a speaker by analyzing the speech signal. Emotion recognition i...

Linear versus deep learning methods for noisy speech separation for EEG-informed attention decoding.

Journal of neural engineering
OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer which speaker the user intends to listen to. Auditory attention decoding (AAD) algorithms make it possible to infer this information from neural signals, which leads to the concept of neuro-st...

Evaluation of Hyperparameter Optimization in Machine and Deep Learning Methods for Decoding Imagined Speech EEG.

Sensors (Basel, Switzerland)
Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain-computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across s...

Characterizing soundscapes across diverse ecosystems using a universal acoustic feature set.

Proceedings of the National Academy of Sciences of the United States of America
Natural habitats are being impacted by human pressures at an alarming rate. Monitoring these ecosystem-level changes often requires labor-intensive surveys that are unable to detect rapid or unanticipated environmental changes. Here we have developed...

Brain-optimized extraction of complex sound features that drive continuous auditory perception.

PLoS computational biology
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match ...

Speech Quality Feature Analysis for Classification of Depression and Dementia Patients.

Sensors (Basel, Switzerland)
Loss of cognitive ability is commonly associated with dementia, a broad category of progressive brain diseases. However, major depressive disorder may also cause temporary deterioration of one's cognition known as pseudodementia. Differentiating a tr...

Deep learning-based smart speaker to confirm surgical sites for cataract surgeries: A pilot study.

PloS one
Wrong-site surgeries can occur in the absence of an appropriate surgical time-out. However, during a time-out, surgical participants cannot review the patient's charts because their hands must remain aseptic. To improve the conditions in surgical time-...