AIMC Topic: Phonetics

Showing 21 to 30 of 40 articles

Encoding of Articulatory Kinematic Trajectories in Human Speech Sensorimotor Cortex.

Neuron
When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural ...
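As a rough illustration of the encoding-model idea behind this title (not the authors' pipeline), the sketch below fits a ridge regression that predicts one electrode's activity from time-lagged articulator trajectories; all data and dimensions here are placeholders.

    # Illustrative sketch: a linear "encoding model" predicting neural activity
    # at one site from a short history of articulator kinematics (jaw, tongue,
    # lips, larynx). Both signals below are random placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_samples, n_articulators, n_lags = 2000, 4, 10

    kinematics = rng.standard_normal((n_samples, n_articulators))  # placeholder trajectories
    neural = rng.standard_normal(n_samples)                        # placeholder activity

    # Build a lagged design matrix so the model can use a short kinematic history.
    X = np.hstack([np.roll(kinematics, lag, axis=0) for lag in range(n_lags)])
    model = Ridge(alpha=1.0).fit(X[n_lags:], neural[n_lags:])
    print("encoding-model R^2:", model.score(X[n_lags:], neural[n_lags:]))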

An analysis of the influence of deep neural network (DNN) topology in bottleneck feature based language recognition.

PloS one
Language recognition systems based on bottleneck features have recently become the state of the art in this research field, showing their success in the last Language Recognition Evaluation (LRE 2015) organized by NIST (U.S. National Institute of Stand...
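A minimal sketch of what a bottleneck DNN looks like, with assumed layer sizes rather than any topology from the paper: a narrow hidden layer whose activations are reused as features by a downstream language recognizer.

    # Bottleneck DNN sketch (assumed sizes): the narrow layer's activations are
    # the "bottleneck features" extracted for language recognition.
    import torch
    import torch.nn as nn

    class BottleneckDNN(nn.Module):
        def __init__(self, n_inputs=39, n_hidden=512, n_bottleneck=64, n_targets=1000):
            super().__init__()
            self.front = nn.Sequential(
                nn.Linear(n_inputs, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_bottleneck),          # narrow bottleneck layer
            )
            self.back = nn.Sequential(nn.ReLU(), nn.Linear(n_bottleneck, n_targets))

        def forward(self, x):
            bottleneck = self.front(x)      # features reused downstream
            return self.back(bottleneck), bottleneck

    model = BottleneckDNN()
    logits, features = model(torch.randn(8, 39))   # 8 frames of 39-dim acoustic input
    print(features.shape)                          # torch.Size([8, 64])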

Perspectives on Speech Timing: Coupled Oscillator Modeling of Polish and Finnish.

Phonetica
This study was aimed at analyzing empirical duration data for Polish spoken at different tempos using an updated version of the Coupled Oscillator Model of speech timing and rhythm variability (O'Dell and Nieminen, 1999, 2009). We use Bayesian infe...
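The snippet below is a generic two-oscillator entrainment sketch, not the O'Dell and Nieminen model itself: a slower "stress" oscillator and a faster "syllable" oscillator are coupled toward a 2:1 phase relation; the rates and coupling strength are assumptions.

    # Generic coupled-oscillator sketch: two phase oscillators nudged toward a
    # 2:1 entrainment (syllable vs. stress), integrated with simple Euler steps.
    import numpy as np

    dt, steps = 0.001, 20000
    w_stress, w_syll = 2 * np.pi * 2.0, 2 * np.pi * 4.2   # assumed intrinsic rates (Hz)
    k = 3.0                                               # assumed coupling strength
    phi_s, phi_y = 0.0, 0.5

    for _ in range(steps):
        d_s = w_stress + k * np.sin(phi_y - 2 * phi_s)    # pull toward 2:1 locking
        d_y = w_syll + k * np.sin(2 * phi_s - phi_y)
        phi_s += d_s * dt
        phi_y += d_y * dt

    print("locked relative phase (rad):", (phi_y - 2 * phi_s) % (2 * np.pi))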

Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces.

PLoS computational biology
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligi...
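One hedged building block for such a synthesizer, with dimensions chosen purely for illustration: a small feed-forward network mapping a frame of articulatory parameters to a frame of acoustic features that a vocoder could turn into sound.

    # Sketch of an articulatory-to-acoustic mapping (assumed dimensions, not the
    # paper's system): one articulatory frame in, one acoustic feature frame out.
    import torch
    import torch.nn as nn

    articulatory_dim, acoustic_dim = 7, 25       # e.g., sensor coils -> spectral features
    net = nn.Sequential(
        nn.Linear(articulatory_dim, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, acoustic_dim),
    )

    frame = torch.randn(1, articulatory_dim)     # one articulatory frame (placeholder)
    acoustic_frame = net(frame)                  # would feed a vocoder in a real pipeline
    print(acoustic_frame.shape)                  # torch.Size([1, 25])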

What can Neighbourhood Density effects tell us about word learning? Insights from a connectionist model of vocabulary development.

Journal of child language
In this paper, we investigate the effect of neighbourhood density (ND) on vocabulary size in a computational model of vocabulary development. A word has a high ND if there are many words phonologically similar to it. High ND words are more easily lea...
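To make the ND notion concrete, here is a toy computation (lexicon and transcriptions invented for illustration): a word's neighbourhood density is the number of lexicon entries one phoneme substitution, insertion, or deletion away.

    # Toy neighbourhood-density (ND) count over made-up phonemic transcriptions.
    def one_edit_apart(a: str, b: str) -> bool:
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):                       # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)     # one insertion/deletion
        for i in range(len(long_)):
            if long_[:i] + long_[i + 1:] == short:
                return True
        return False

    def neighbourhood_density(word: str, lexicon: list[str]) -> int:
        return sum(one_edit_apart(word, other) for other in lexicon if other != word)

    lexicon = ["kat", "bat", "hat", "kart", "dog"]   # toy transcriptions
    print(neighbourhood_density("kat", lexicon))     # 3 -> relatively high ND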

Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity.

PloS one
At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. H...
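A generic reward-modulated Hebbian update, sketched with a made-up reward function rather than any vocal-tract model from the paper: exploratory noise in the motor output is consolidated into the weights in proportion to how much the outcome beats a running reward estimate.

    # Generic reward-modulated plasticity sketch (node perturbation style).
    import numpy as np

    rng = np.random.default_rng(0)
    n_motor, n_inputs, lr = 5, 10, 0.05
    W = rng.normal(0, 0.1, size=(n_motor, n_inputs))
    reward_baseline = 0.0

    def reward(motor_command):
        # Hypothetical stand-in for "how syllable-like was the produced sound?"
        target = np.linspace(-1, 1, len(motor_command))
        return -np.mean((motor_command - target) ** 2)

    for step in range(500):
        x = rng.standard_normal(n_inputs)
        noise = rng.normal(0, 0.2, size=n_motor)                # exploration noise
        y = np.tanh(W @ x) + noise
        r = reward(y)
        W += lr * (r - reward_baseline) * np.outer(noise, x)    # reward-gated update
        reward_baseline += 0.05 * (r - reward_baseline)         # running reward estimate

    print("final reward estimate:", round(reward_baseline, 3))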

Using articulatory feature detectors in progressive networks for multilingual low-resource phone recognition.

The Journal of the Acoustical Society of America
Systems inspired by progressive neural networks, transferring information from end-to-end articulatory feature detectors to similarly structured phone recognizers, are described. These networks, connecting the corresponding recurrent layers of pre-tr...
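A hedged sketch of the progressive-network idea with assumed dimensions (not the described system): a frozen, pre-trained articulatory-feature column feeds a lateral connection into the corresponding recurrent layer of a new phone-recognition column.

    # Progressive-network sketch: column 1 (articulatory features) is frozen and
    # its recurrent outputs are concatenated into column 2 (phone recognizer).
    import torch
    import torch.nn as nn

    feat_dim, hidden, n_phones = 40, 64, 45

    col1_rnn = nn.GRU(feat_dim, hidden, batch_first=True)      # pre-trained, frozen
    for p in col1_rnn.parameters():
        p.requires_grad = False

    col2_rnn1 = nn.GRU(feat_dim, hidden, batch_first=True)
    col2_rnn2 = nn.GRU(hidden * 2, hidden, batch_first=True)   # lateral concat doubles input
    phone_out = nn.Linear(hidden, n_phones)

    x = torch.randn(2, 100, feat_dim)          # 2 utterances, 100 frames each
    h1, _ = col1_rnn(x)                        # frozen articulatory representation
    h2, _ = col2_rnn1(x)
    h3, _ = col2_rnn2(torch.cat([h2, h1], dim=-1))
    print(phone_out(h3).shape)                 # torch.Size([2, 100, 45]) phone logits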

Evaluating the consistency of lenition measures: Neural networks' posterior probability, intensity velocity, and duration.

The Journal of the Acoustical Society of America
Predictions of gradient degree of lenition of voiceless and voiced stops in a corpus of Argentine Spanish are evaluated using three acoustic measures (minimum and maximum intensity velocity and duration) and two recurrent neural network (Phonet) meas...
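For the acoustic measures only, a rough sketch with assumed frame sizes and an invented segment interval: frame-wise intensity in dB, its first derivative ("intensity velocity"), and the minimum/maximum velocity plus duration over a hypothetical stop interval.

    # Intensity-velocity sketch over a placeholder waveform (frame/hop sizes and
    # the segment boundaries are assumptions).
    import numpy as np

    sr = 16000
    signal = np.random.randn(sr)                     # placeholder 1-second waveform
    frame, hop = int(0.025 * sr), int(0.005 * sr)    # 25 ms frames, 5 ms hop

    frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
    intensity_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-10)
    velocity = np.gradient(intensity_db, hop / sr)   # dB per second

    start, end = 40, 80                              # hypothetical stop-segment frames
    print("min velocity:", velocity[start:end].min(),
          "max velocity:", velocity[start:end].max(),
          "duration (s):", (end - start) * hop / sr)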

Decision Tree Versus Linear Support Vector Machine Classifier in the Screening of Medial Speech Sounds: A Quest for a Sound Rationale.

Studies in health technology and informatics
This paper describes the latest development in the classification stage of our Speech Sound Disorder (SSD) Screening algorithm and presents the results achieved by using two classifier models: the Classification and Regression Tree (CART)-based model...
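The contrast between the two classifier families can be sketched on synthetic data (this is not the screening dataset or feature set): a CART-style decision tree versus a linear SVM, compared by cross-validation.

    # CART vs. linear SVM on synthetic data, evaluated with 5-fold cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=300, n_features=12, n_informative=6, random_state=0)

    for name, clf in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                      ("Linear SVM", LinearSVC(C=1.0, max_iter=5000))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean CV accuracy = {scores.mean():.3f}")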

Japanese Braille Translation Using Deep Learning - Conversion from Phonetic Characters (Kana) to Homonymic Characters (Kanji).

Studies in health technology and informatics
A blind student writes and submits reports with a Braille word processor, which teachers find difficult to read. The purpose of this study is to build a translator from Braille into mixed Kana-Kanji sentences for such teachers. Because Kanji has homonyms, it...
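A toy illustration of why the homonym problem makes this conversion nontrivial (vocabulary, context, and scoring invented for illustration): one kana reading maps to several kanji candidates, and a context score has to pick among them.

    # Toy kana-to-kanji disambiguation: the scoring function is a hypothetical
    # stand-in for a learned language model, not the paper's approach.
    candidates = {
        "こうえん": ["公園", "講演", "公演"],   # park / lecture / performance
    }

    def score(kanji: str, context: str) -> float:
        hints = {"公園": "散歩", "講演": "大学", "公演": "劇場"}   # made-up context cues
        return 1.0 if hints[kanji] in context else 0.0

    context = "大学で こうえん を聞いた"        # "heard a ___ at the university"
    reading = "こうえん"
    best = max(candidates[reading], key=lambda k: score(k, context))
    print(best)                                  # 講演 (lecture), chosen from context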