Computer aided co-articulation model based on Magnetic Resonance Images

Bibliographic Details
Published in: 2011 International Conference on Recent Trends in Information Technology, pp. 707–711
Main Authors: Balasaranya, K., Rathinavelu, A.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2011
ISBN: 9781457705885, 1457705885
Description
Summary: Magnetic Resonance Imaging makes it possible to measure the motion of tissues in the body's organs more clearly than other medical imaging techniques. The aim of this paper is to build a co-articulatory model based on Magnetic Resonance Images (MRI). This work blends several emerging technologies, including computer vision based visualization, cognitive science, medical science, and speech recognition. The sounds of human speech can be combined in many ways, and the associated articulator movements vary as the kinematic context changes. This kinematic variation, known as co-articulation, is one of the most pervasive characteristics of speech production. Visualizing the co-articulatory effects involved in speech production leads to a better understanding of the speech production process. An MRI video obtained from the subject AR during the co-articulation of Tamil phonemes was used as input and processed to visualize the movements of the key articulators involved in speech production. Regions of interest were obtained for articulators such as the jaw, tongue, lower lip, and upper lip. The motion parameters for the individual articulators and their positions in subsequent frames were estimated using a block matching algorithm. The estimated motion parameters were then visualized and reproduced. This system can act as an efficient tool to control the place of articulation visually, aiding second language learners and people suffering from misarticulation in learning the correct method of articulation.
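The block matching step described in the summary can be illustrated with a short sketch. The Python code below is an illustrative example under assumed inputs, not the authors' implementation: it performs an exhaustive-search block match using the sum of absolute differences (SAD) for a single articulator region of interest between two consecutive grayscale MRI frames. The function and parameter names (block_match, search_range) are hypothetical.

    import numpy as np

    def block_match(prev_frame, curr_frame, roi, search_range=8):
        """Estimate the displacement of a rectangular ROI between two frames.

        prev_frame, curr_frame : 2-D numpy arrays (grayscale MRI frames)
        roi : (top, left, height, width) of the articulator block in prev_frame
        search_range : maximum displacement in pixels searched in each direction

        Returns (dy, dx), the motion vector minimizing the sum of absolute
        differences (SAD) between the ROI block and candidate blocks in curr_frame.
        """
        top, left, h, w = roi
        block = prev_frame[top:top + h, left:left + w].astype(np.float64)

        best_sad = np.inf
        best_vec = (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                # Skip candidate positions that fall outside the frame.
                if y < 0 or x < 0 or y + h > curr_frame.shape[0] or x + w > curr_frame.shape[1]:
                    continue
                candidate = curr_frame[y:y + h, x:x + w].astype(np.float64)
                sad = np.abs(block - candidate).sum()
                if sad < best_sad:
                    best_sad = sad
                    best_vec = (dy, dx)
        return best_vec

Applying such a matcher to each articulator's region of interest across successive frames yields per-articulator motion vectors that can then be visualized and replayed, broadly in the spirit of the pipeline the summary describes.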
DOI: 10.1109/ICRTIT.2011.5972320