Paper Abstract and Keywords |
Presentation |
2016-03-05 09:30
Hearing Aid with Lip Reading
-- Speech Enhancement using Vowel Estimation --
Yuzuru Iinuma, Tetsuya Matsumoto (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.), Hiroaki Kudo, Noboru Ohnishi (Nagoya Univ.)
WIT2015-98 |
Abstract |
In highly noisy environments such as construction sites and cocktail parties, it is difficult not only for humans but also for computers to understand speech from audio alone. However, just as humans do, it is possible to improve speech recognition accuracy even in such environments by using visual information. In this paper, we attempt to improve speech recognition accuracy by adding image information (in particular, information about the lips), implementing a lip-reading method that captures lip movements.
We propose a hearing-aid system that estimates vowels from lip movements and enhances speech by applying a vowel filter matched to the estimated vowel. This system is expected to achieve high speech recognition accuracy regardless of the type of noise. The proposed system consists of three parts: face/lip feature extraction, vowel estimation from the extracted features, and speech enhancement. In the face/lip feature extraction, feature points are acquired using the Active Appearance Model. The vowel is estimated from face images using a multilayer perceptron. Finally, input speech corrupted with noise is enhanced by applying a Butterworth filter matched to the estimated vowel.
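The enhancement stage described above (a band-pass filter matched to the estimated vowel) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the formant-based frequency bands, sampling rate, and filter order are all assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Hypothetical pass bands (Hz) per vowel, loosely based on typical
# first/second formant ranges; the paper's actual bands may differ.
VOWEL_BANDS = {
    "a": (700, 1300),
    "i": (250, 2300),
    "u": (300, 1300),
    "e": (450, 2000),
    "o": (450, 900),
}

def vowel_bandpass(signal, vowel, fs=16000, order=4):
    """Apply a Butterworth band-pass filter matched to the estimated vowel."""
    low, high = VOWEL_BANDS[vowel]
    nyq = 0.5 * fs  # Nyquist frequency for normalizing cutoffs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return lfilter(b, a, signal)

# Example: enhance one second of a noisy signal when the lip reader
# has estimated the vowel /a/ (random noise stands in for real speech).
rng = np.random.default_rng(0)
noisy = rng.standard_normal(16000)
enhanced = vowel_bandpass(noisy, "a")
```

In the actual system, the vowel label fed to the filter would come from the multilayer perceptron's frame-by-frame estimate rather than being fixed.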
We conducted experiments to evaluate the system's performance.
In the experiment on vowel recognition from images, the F-measure was low, 0.383 at best. This was caused by a problem in collecting the data, suggesting that a larger amount of more natural speech data needs to be prepared. In the experiment on automatic speech recognition, the machine recognition rate at 0 dB SNR (street noise) was 40% with the speech enhancement filter and 48% without it. Applying the enhancement filter thus lowered machine recognition accuracy, which suggests that the proposed method is not suitable for recognition by machines. In the experiment on monophonic vowel discrimination by humans, however, the recognition rate at -15 dB SNR (street noise) was 90% with the speech enhancement filter and 40% without it, a significant improvement of 50 percentage points. Similarly, at -15 dB SNR with cocktail-party noise, the recognition rate was 96% after applying the speech enhancement filter versus 34% before.
These results suggest that applying the vowel enhancement filter can improve human recognition accuracy regardless of the type of noise. |
Keyword |
Lip Reading / Hearing Aid / Speech Enhancement |
Reference Info. |
IEICE Tech. Rep., vol. 115, no. 491, WIT2015-98, pp. 53-58, March 2016. |
Paper # |
WIT2015-98 |
Date of Issue |
2016-02-26 (WIT) |
ISSN |
Print edition: ISSN 0913-5685 Online edition: ISSN 2432-6380 |
Copyright and reproduction |
All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034) |
Download PDF |
WIT2015-98 |
Conference Information |
Committee |
WIT |
Conference Date |
2016-03-04 - 2016-03-05 |
Place (in English) |
Tsukuba Univ. of Tech. (Tsukuba) |
Topics (in English) |
Hearing / Visually impaired person support technology, etc. |
Paper Information |
Registration To |
WIT |
Conference Code |
2016-03-WIT |
Language |
Japanese |
Title (in English) |
Hearing Aid with Lip Reading |
Sub Title (in English) |
Speech Enhancement using Vowel Estimation |
Keyword(1) |
Lip Reading |
Keyword(2) |
Hearing Aid |
Keyword(3) |
Speech Enhancement |
1st Author's Name |
Yuzuru Iinuma |
1st Author's Affiliation |
Nagoya University (Nagoya Univ.) |
2nd Author's Name |
Tetsuya Matsumoto |
2nd Author's Affiliation |
Nagoya University (Nagoya Univ.) |
3rd Author's Name |
Yoshinori Takeuchi |
3rd Author's Affiliation |
Daido University (Daido Univ.) |
4th Author's Name |
Hiroaki Kudo |
4th Author's Affiliation |
Nagoya University (Nagoya Univ.) |
5th Author's Name |
Noboru Ohnishi |
5th Author's Affiliation |
Nagoya University (Nagoya Univ.) |
Speaker |
Author-1 |
Date Time |
2016-03-05 09:30:00 |
Presentation Time |
25 minutes |
Registration for |
WIT |
Paper # |
WIT2015-98 |
Volume (vol) |
vol.115 |
Number (no) |
no.491 |
Page |
pp.53-58 |
#Pages |
6 |
Date of Issue |
2016-02-26 (WIT) |
|