Visual Articulatory Feedback

Period of Performance: 09/01/2000 - 03/31/2001

$135K

Phase I SBIR

Recipient Firm

Speech Technology/Applied Research Corp.
Bedford, MA 01730
Principal Investigator

Abstract

A system providing visual feedback of articulation for people practicing English pronunciation would be useful to several populations. The proposed visual articulatory feedback system (VAF) will be based on articulatory x-ray microbeam data obtained in the last few years. VAF differs from current feedback systems based on automatic speech recognition (ASR) technology by its reliance on these data. Further, VAF does not require phonetic categorization to infer articulation, unlike ASR-based feedback. These factors will enable VAF to provide finer- grained articulatory feedback than current systems. VAF will also allow feedback for nonstandard pronunciations, such as those of dysarthric, deaf, and non-native speakers. The system will provide feedback with three planes of view, as well as a pseudopalatogram showing the teeth and palate. Given a student's speech sample, each view will show the corresponding articulator positions compared to the target. Thus, it will indicate the articulatory changes needed to produce better pronunciation. In Phase I, the research will focus on the difficult /ee/-/ih/ vowel distinction. In a multiple-subject case-study design, we will test the prototype against an existing auditory-feedback system. The two systems will be compared for efficiency and effectiveness in training deaf, dysarthric, and non-native speakers. PROPOSED COMMERCIAL APPLICATIONS: Potential applications include both self-paced and clinician-directed training courses for correction and maintenance of pronunciation. This applies particularly to adult populations who are deaf, are non-native speakers of English, or are diagnosed with a degenerative (dysarthria- producing) neuropathology such as Parkinsons disease or amyotrophic lateral sclerosis.