Presentation 2013-02-02
Eye Motion Input Based Speech Synthesis Interface for Communication Aids
Fuming FANG, Takahiro SHINOZAKI, Yasuo HORIUCHI, Shingo KUROIWA, Sadaoki FURUI, Toshimitsu MUSHA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) In order to provide an efficient means of communication for those who cannot move any muscles of their body except the eyes due to amyotrophic lateral sclerosis (ALS), we are studying a speech synthesis interface based on electrooculogram (EOG) input. The system consists of an EOG input module, an eye motion recognizer, and a speech synthesizer. In this paper, we improve the EOG-based eye motion recognizer by applying speech recognition techniques. In our previous system, a hidden Markov model (HMM) based bi-eye-motion model was used. However, it was not sufficient to effectively model the context effects of eye motions. In this study, we investigate using a tied-state tri-eye-motion model. Moreover, an N-gram model is integrated into the recognition system. In the experiment, it is shown that 96.2% character recognition accuracy is obtained by using the tri-eye-motion model, whereas it is 84.3% and 89.1% for the mono- and bi-eye-motion models, respectively. By using a character 3-gram model in combination with the tri-eye-motion model, the highest character accuracy of 97.3% has been obtained.
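The following is a minimal, illustrative sketch (not taken from the paper) of how per-character scores from an HMM-based eye-motion model might be combined with a character 3-gram model during decoding, in the spirit of the speech recognition techniques the abstract refers to. All characters, scores, weights, and names (hmm_loglik, trigram_logprob, lm_score, decode) are hypothetical placeholders.

import math
from itertools import product

# Hypothetical log-likelihoods from a tied-state tri-eye-motion HMM,
# one entry per (time step, candidate character).
hmm_loglik = [
    {"a": -1.0, "i": -2.5, "u": -3.0},
    {"a": -2.8, "i": -1.2, "u": -2.0},
    {"a": -2.2, "i": -2.4, "u": -1.1},
]

# Hypothetical character 3-gram log-probabilities P(c3 | c1, c2);
# unseen trigrams back off to a small constant.
trigram_logprob = {("<s>", "a", "i"): -0.4, ("a", "i", "u"): -0.3}
BACKOFF = -3.0

def lm_score(history, ch):
    # Character 3-gram log-probability with a crude backoff.
    return trigram_logprob.get((history[-2], history[-1], ch), BACKOFF)

def decode(frames, lm_weight=0.5):
    # Exhaustively score all character sequences (toy-sized search only).
    best_seq, best_score = None, -math.inf
    for seq in product(*[f.keys() for f in frames]):
        history = ["<s>", "<s>"]
        score = 0.0
        for frame, ch in zip(frames, seq):
            score += frame[ch] + lm_weight * lm_score(history, ch)
            history.append(ch)
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score

print(decode(hmm_loglik))  # e.g. (('a', 'i', 'u'), combined log score)

A real recognizer would use Viterbi or beam search over the HMM state lattice rather than the exhaustive enumeration used here for brevity.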
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Electrooculogram / Hidden Markov model / N-gram / Speech synthesis / Communication aids
Paper # WIT2012-38
Date of Issue

Conference Information
Committee WIT
Conference Date 2013/1/26 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Well-being Information Technology (WIT)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Eye Motion Input Based Speech Synthesis Interface for Communication Aids
Sub Title (in English)
Keyword(1) Electrooculogram
Keyword(2) Hidden Markov model
Keyword(3) N-gram
Keyword(4) Speech synthesis
Keyword(5) Communication aids
1st Author's Name Fuming FANG
1st Author's Affiliation Chiba University
2nd Author's Name Takahiro SHINOZAKI
2nd Author's Affiliation Chiba University
3rd Author's Name Yasuo HORIUCHI
3rd Author's Affiliation Chiba University
4th Author's Name Shingo KUROIWA
4th Author's Affiliation Chiba University
5th Author's Name Sadaoki FURUI
5th Author's Affiliation Tokyo Institute of Technology
6th Author's Name Toshimitsu MUSHA
6th Author's Affiliation Brain Functions Laboratory
Date 2013-02-02
Paper # WIT2012-38
Volume (vol) vol.112
Number (no) 426
Page
#Pages 6
Date of Issue