IEICE Technical Committee Submission System
Tech. Rep. Archives

All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 20 of 49
Committee / Date / Time / Place / Paper Title / Authors / Abstract / Paper #
SIP, SP, EA, IPSJ-SLP [detail] 2024-02-29
10:30
Okinawa
(Primary: On-site, Secondary: Online)
Multi-task learning with age information model for highly accurate elderly speech recognition.
Shine Takumi, Kinouchi Takahiro, Wakabayashi Yukoh, Kitaoka Norihide (TUT) EA2023-64 SIP2023-111 SP2023-46
The speech recognition of the elderly is less accurate, especially in smart speaker speech recognition, due to aging-rel... [more] EA2023-64 SIP2023-111 SP2023-46
pp.19-24
EMM, EA, ASJ-H 2023-11-23
13:00
Toyama
[Poster Presentation] **
(**)
As a study of speech intelligibility estimation methods using speech recognition, we simulated a subjective evaluation t... [more] EA2023-45 EMM2023-76
pp.93-97
SP, IPSJ-MUS, IPSJ-SLP [detail] 2023-06-23
13:50
Tokyo
(Primary: On-site, Secondary: Online)
[Poster Presentation] Generation of colored subtitle images based on emotional information of speech utterances
Fumiya Nakamura (Kobe Univ.), Ryo Aihara (Mitsubishi Electric), Ryoichi Takashima, Tetsuya Takiguchi (Kobe Univ.), Yusuke Itani (Mitsubishi Electric) SP2023-11
Conventional automatic subtitle generation systems based on speech recognition do not take into account paralinguistic i... [more] SP2023-11
pp.54-59
SIS 2023-03-03
11:10
Chiba Chiba Institute of Technology
(Primary: On-site, Secondary: Online)
Investigation of introducing data augmentation methods to improve speech enhancement performance
Reito Kasuga, Yosuke Sugiura, Nozomiko Yasui, Tetsuya Shimamura (Saitama Univ.) SIS2022-52
The field of speech enhancement has been extensively researched worldwide, and many speech enhancement methods have been... [more] SIS2022-52
pp.64-69
SP, IPSJ-MUS, IPSJ-SLP [detail] 2022-06-17
15:00
Online
Representation and analytical normalization for vocal-tract-length transformation by group theory
Atsushi Miyashita, Tomoki Toda (Nagoya Univ) SP2022-11
In automatic speech recognition, a recognition result should be invariant with respect to acoustic changes caused by dif... [more] SP2022-11
pp.41-46
SP, IPSJ-MUS, IPSJ-SLP [detail] 2022-06-18
13:00
Online
[Poster Presentation] Proposal of Speech Content Conversion and the Initial Trial: Conversion of Linguistic Information Depending on Situations
Kohei Takita, Saizo Aoyagi, Tatsunori Hirai (Komazawa Univ.) SP2022-19
It is important to speak dialects, honorifics, and simple words for listeners and the environment in order to smooth com... [more] SP2022-19
pp.82-87
SP, IPSJ-SLP, IPSJ-MUS 2021-06-18
15:00
Online
Protection method with audio processing against Audio Adversarial Example
Taisei Yamamoto, Yuya Tarutani, Yukinobu Fukusima, Tokumi Yokohira (Okayama Univ) SP2021-4
Machine learning technology has improved the recognition accuracy of voice recognition, and demand for voice recognition... [more] SP2021-4
pp.19-24
SP, IPSJ-SLP, IPSJ-MUS 2021-06-19
09:30
Online
[Invited Talk] Toward a Unification of Various Speech Processing Tasks Based on End-to-End Neural networks
Shinji Watanabe (CMU) SP2021-8
This presentation will introduce the recent progress of speech processing technologies based on end-to-end neural networ... [more] SP2021-8
p.38
WIT, SP 2019-10-27
10:30
Kagoshima Daiichi Institute of Technology
A Method to Reduce Ambiguity in Identifying the Muscle Activation Time of Each EMG Channel in Isolated Inaudible Single Syllable Recognition
Hidetoshi Nagai (KIT) SP2019-32 WIT2019-31
In inaudible speech recognition using surface EMG, consonant recognition is one of the difficult problems. When phonemes... [more] SP2019-32 WIT2019-31
pp.87-92
SP 2019-01-26
16:25
Ishikawa Kanazawa-Harmonie
[Fellow Memorial Lecture] Machine, human and sound communication
Akinori Ito (Tohoku Univ.) SP2018-55
Speech is the most important modality for human-human communication. From invention of electrical speech communication, ... [more] SP2018-55
p.19
SP 2019-01-27
11:30
Ishikawa Kanazawa-Harmonie
Multimodal Data Augmentation for Visual Speech Recognition using Deep Canonical Correlation Analysis
Masaki Shimonishi, Satoshi Tamura, Satoru Hayamizu (Gifu University) SP2018-60
This paper proposes a new data augmentation strategy for deep learning, in which feature vectors in one modality can be... [more] SP2018-60
pp.41-45
HCGSYMPO (2nd) 2017-12-13 - 2017-12-15
Ishikawa THE KANAZAWA THEATRE
Utilization of Arc Type Single-Channel Microphone Array to Display Daily Conversation
Tomoki Kurahashi (Univ. of Tsukuba), Keiichi Zempo, Koichi Mizutani, Naoto Wakatsuki (Univ. of Tsukuba)
We have been studying a portable real-world caption system using transparent Head-Mounted Display to ensure information ... [more]
WIT 2016-03-05
09:30
Ibaraki Tsukuba Univ. of Tech. (Tsukuba)
Hearing Aid with Lip Reading -- Speech Enhancement using Vowel Estimation --
Yuzuru Iinuma, Tetsuya Matsumoto (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.), Hiroaki Kudo, Noboru Ohnishi (Nagoya Univ.) WIT2015-98
Under highly noisy environments such as construction sites and cocktail parties, it is difficult for not only humans but... [more] WIT2015-98
pp.53-58
WIT, HI-SIGACI 2015-12-08
15:00
Tokyo AIST Tokyo Waterfront
Supporting Development of Speech Visualization Mode for Deaf and Hard of Hearing People Support
Yusuke Toba, Shinsuke Matsumoto, Sachio Saiki, Masahide Nakamura (Kobe Univ.), Tomohito Uchino, Tomohiro Yokoyama, Yasuhiro Takebayashi (School for the Deaf, University of Tsukuba) WIT2015-63
We have proposed a multi-modal visualization application in order to support deaf and hard of hearing people in understa... [more] WIT2015-63
pp.1-6
NLC, IPSJ-NL, SP, IPSJ-SLP
(Joint) [detail]
2015-12-02
16:30
Aichi Nagoya Inst. of Tech.
Distant-talking speech recognition by reverberation-aware denoising autoencoder
Yuma Ueda (Shizuoka Univ.), Longbiao Wang (Nagaoka Univ.), Atsuhiko Kai (Shizuoka Univ.) SP2015-77
In the distant-talking speech recognition, it is essential to deal with the noise and reverberation. Denoising autoencode... [more] SP2015-77
pp.55-60
LOIS, ISEC, SITE 2015-11-06
15:55
Kanagawa Kanagawa Univ.
The threshold value design of recognition score considering the prior probability in a quiz speech recognition system
Hiroyuki Nishi, Kimura Yoshimasa, Kakinoki Toshio (Sojo Univ.) ISEC2015-45 SITE2015-32 LOIS2015-39
As in the information input in the smart phone and car navigation systems, speech recognition is often used in applicati... [more] ISEC2015-45 SITE2015-32 LOIS2015-39
pp.59-65
HCS, HIP, HI-SIGCOASTER [detail] 2015-05-20
14:50
Okinawa Okinawa Industry Support Center
Continuous Inaudible Vowels Recognition Using Peaks of Lip Shape Transition based on Surface EMG
Nao Kurogi, Hidetoshi Nagai, Teigo Nakamura (Kyutech) HCS2015-37 HIP2015-37
We research on inaudible speech recognition with surface EMGs. On a natural speech, we can't expect stability of charact... [more] HCS2015-37 HIP2015-37
pp.243-248
NC, MBE 2015-03-17
09:55
Tokyo Tamagawa University
Silent Speech BCI -- Performance of HMM --
Takashi Ito, Hiromi Yamaguchi (KIT), Ayaka Yamaguchi (Hitachi systems), Toshimasa Yamazaki (KIT), Shin'ichi Fukuzumi (NEC), Takahiro Yamanoi (Hokkaido Gakuen University) MBE2014-130 NC2014-81
The purpose of this study is to generalize our previous SSBCI (Silent Speech Brain-Computer Interface) study proposed in... [more] MBE2014-130 NC2014-81
pp.81-84
SIP, EA, SP 2015-03-02
16:10
Okinawa
[Special Invited Talk] Intermediate representation for statistical pattern recognition
Koichi Shinoda (TokyoTech) EA2014-85 SIP2014-126 SP2014-148
In Deep learning, which has recently seen its boom, it is still not clear how to optimize multi-layer structures. To sol... [more] EA2014-85 SIP2014-126 SP2014-148
p.73
NLC, IPSJ-NL, SP, IPSJ-SLP, JSAI-SLUD
(Joint) [detail]
2014-12-16
11:00
Kanagawa Tokyo Institute of Technology (Suzukakedai Campus)
Noise robust speech recognition by non-negative matrix factorization using GMM clustering in MFCC domain
Kentaro Fujigaki, Yosuke Kashiwagi, Daisuke Saito, Nobuaki Minematsu, Keikichi Hirose (Univ. of Tokyo) SP2014-113
Exemplar-based feature enhancement by non-negative matrix factorization (NMF) was proposed for noise-robust speech recog... [more] SP2014-113
pp.69-74
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)





The Institute of Electronics, Information and Communication Engineers (IEICE), Japan