IEICE Technical Committee Submission System
Tech. Rep. Archives
All Technical Committee Conferences (Searched in: All Years)

Search Results: Conference Papers
Conference Papers (Available on Advance Programs) (Sort by: Date Descending)
Results 41 - 60 of 407
Committee / Date / Time / Place / Paper Title / Authors / Abstract / Paper #
SIP, BioX, IE, MI, ITE-IST, ITE-ME
2022-05-20 11:30
Kumamoto: Kumamoto University Kurokami Campus (Primary: On-site, Secondary: Online)
Implementation of a Lightweight Automatic Speech Recognition System at the Edge
Haotian Tan, Junichi Akita (Kanazawa Univ.)
Automatic speech recognition (ASR) on the cloud has been widely adopted and has demonstrated satisfactory performance. W...
EA, SIP, SP, IPSJ-SLP
2022-03-01 12:45
Okinawa (Primary: On-site, Secondary: Online)
Incorporating Acoustic and Textual Information for Language Modeling in Code-switching Speech Recognition
Roland Hartanto, Kuniaki Uto, Koichi Shinoda (TokyoTech) EA2021-73 SIP2021-100 SP2021-58
People who speak two or more languages tend to alternate the language when they are speaking. This particular phenomenon...
pp.56-63
NLC, IPSJ-NL, SP, IPSJ-SLP
2021-12-02 15:20
Online
Improvement of multilingual speech emotion recognition by normalizing features using CRNN
Jinhai Qi, Motoyuki Suzuki (OIT) NLC2021-22 SP2021-43
In this research, a new multilingual emotion recognition method by normalizing features using CRNN has been proposed. We...
pp.22-26
SP, IPSJ-SLP, IPSJ-MUS
2021-06-18 15:00
Online
Protection method with audio processing against Audio Adversarial Example
Taisei Yamamoto, Yuya Tarutani, Yukinobu Fukusima, Tokumi Yokohira (Okayama Univ) SP2021-4
Machine learning technology has improved the recognition accuracy of voice recognition, and demand for voice recognition...
pp.19-24
SP, IPSJ-SLP, IPSJ-MUS
2021-06-19 09:30
Online
[Invited Talk] Toward a Unification of Various Speech Processing Tasks Based on End-to-End Neural networks
Shinji Watanabe (CMU) SP2021-8
This presentation will introduce the recent progress of speech processing technologies based on end-to-end neural networ...
p.38
SP, IPSJ-SLP, IPSJ-MUS
2021-06-19 13:00
Online
A Study on Error Correction for Improving the Accuracy of Acoustic Models
Saki Anazawa, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.) SP2021-12
People with ALS (amyotrophic lateral sclerosis) or dysarthria sometimes use their own voice for speech synthesis. In thi...
pp.51-52
NS
2021-04-15 14:25
Online
[Invited Talk] Speech recognition research moving forward with the development of deep learning technology
Hiromitsu Nishizaki (Univ. of Yamanashi) NS2021-7
In recent years, deep learning technology has been developing rapidly. Along with the development of deep learning, auto...
pp.37-42
EA, US, SP, SIP, IPSJ-SLP
2021-03-03 17:10
Online
An investigation of rhythm-based speaker embeddings for phoneme duration modeling
Kenichi Fujita, Atsushi Ando, Yusuke Ijima (NTT) EA2020-77 SIP2020-108 SP2020-42
In this study, we propose a speaker embedding method suitable for modeling phoneme duration length for each individual i...
pp.103-108
EA, US, SP, SIP, IPSJ-SLP
2021-03-03 17:35
Online
[Short Paper] Comparison of End-to-End Models for Joint Speaker and Speech Recognition
Kak Soky (Kyoto Univ.), Sheng Li (NICT), Masato Mimura, Chenhui Chu, Tatsuya Kawahara (Kyoto Univ.) EA2020-78 SIP2020-109 SP2020-43
In this paper, we investigate the effectiveness of using speaker information on the performance of speaker-imbalanced au...
pp.109-113
MVE, IPSJ-CVIM
2021-01-21 15:50
Online
Customer service training VR system with spoken voice training
Toki Nishio, Soichiro Iida (Univ. of Tsukuba), Yuta Sano, Leow Chee Siang, Hiromitsu Nishizaki (Univ. of Yamanashi), Takehito Utsuro, Junichi Hoshino (Univ. of Tsukuba) MVE2020-32
A filler is a word that has no meaning in itself and is used to fill gaps in conversation. In customer service, this fil...
pp.13-16
CQ, CBE (Joint)
2021-01-22 14:00
Online
[Invited Lecture] A Study on Correlation between Automatic Speech Recognition Accuracy and Speech QoE
Masashi Yamashita, Hiroshi Matsunaga, Toyota Nishi (DOCOMO Technology) CQ2020-99
In recent years, as the quality of mobile networks has improved, it has become more critical to maintain and improve the...
pp.145-148
EA
2020-12-14 10:05
Online
Speech Signal Detection Based on Bayesian Estimation by Observing Air-Conducted Speech under Existence of Surrounding Noise with Aid of Bone-Conducted Speech
Akira Ikuta, Hisako Orimoto (Prefectural Univ. of Hiroshima), Kouji Hasegawa (Hiroshima Prefectural Technology Research Inst.) EA2020-48
When applying speech recognition systems to actual circumstances such as inspection and maintenance operations in indust...
pp.13-18
NLC, IPSJ-NL, SP, IPSJ-SLP
2020-12-02 09:40
Online
Fast End-to-End Speech Recognition with CTC and Mask Predict
Yosuke Higuchi (Waseda Univ.), Hirofumi Inaguma (Kyoto Univ.), Shinji Watanabe (JHU), Tetsuji Ogawa, Tetsunori Kobayashi (Waseda Univ.) NLC2020-13 SP2020-16
We present a fast non-autoregressive (NAR) end-to-end automatic speech recognition (E2E-ASR) framework, which generates ...
pp.1-6
NLC, IPSJ-NL, SP, IPSJ-SLP
2020-12-02 13:50
Online
Multi-Modal Emotion Recognition by Integrating of Acoustic and Linguistic Features
Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) NLC2020-14 SP2020-17
In recent years, the advanced technique of deep learning has improved the performance of Speech Emotional Recognition as ...
pp.7-12
SIS, IPSJ-AVM, ITE-3DMT
2020-06-04 14:00
Online
An experimental comparison of CNN- and CRNN-CTC for automatic phrase speech recognition systems using a children's speech database
Yunzhe Wang, Yu Tian (Hokkaido Univ.), Yoshikazu Miyanaga (CIST), Hiroshi Tsutsui (Hokkaido Univ.) SIS2020-9
Children's speech recognition is still a challenging issue. In the case of children's speeches, the accuracy of conventi...
pp.49-54
SC
2020-03-16 10:30
Online
Cloud Speech Recognition Process Management Method Based on Device Sensor Information
Yu Fujita, Isao Tazawa, Masaharu Ukeda (Hitachi) SC2019-38
In recent years, the use of voice User Interface (VUI) to make the service interactive using speech is widespread. When ...
pp.23-28
ET
2020-03-07 10:35
Kagawa: National Institute of Technology, Kagawa College (Cancelled but technical report was issued)
Implementation and evaluation of language learning support module applying speech recognition
Yusuke Kawamura, Chunxiang Chen, Renfeng Hou (PUH) ET2019-87
The development of speech recognition and speech synthesis technology has been remarkable due to the development of deep...
pp.63-67
SP, EA, SIP
2020-03-02 13:00
Okinawa: Okinawa Industry Support Center (Cancelled but technical report was issued)
Data augmentation for ASR system by using locally time-reversed speech -- Temporal inversion of feature sequence --
Takanori Ashihara, Tomohiro Tanaka, Takafumi Moriya, Ryo Masumura, Yusuke Shinohara, Makio Kashino (NTT) EA2019-110 SIP2019-112 SP2019-59
Data augmentation is one of the techniques to mitigate overfitting and improve robustness against several acoustic varia...
pp.53-58
SP, EA, SIP
2020-03-02 15:45
Okinawa: Okinawa Industry Support Center (Cancelled but technical report was issued)
Performance evaluation of distilling knowledge using encoder-decoder for CTC-based automatic speech recognition systems
Takafumi Moriya, Hiroshi Sato, Tomohiro Tanaka, Takanori Ashihara, Ryo Masumura, Yusuke Shinohara (NTT) EA2019-131 SIP2019-133 SP2019-80
We present a novel training approach for connectionist temporal classification (CTC)-based automatic speech recognition...
pp.175-180
SP, EA, SIP
2020-03-03 09:00
Okinawa: Okinawa Industry Support Center (Cancelled but technical report was issued)
[Poster Presentation] Implementation of a high-accuracy method for automatic fluency scoring of spontaneous English utterances by Japanese learners
Ayano Yasukagawa, Shintaro Ando, Eisuke Konno, Zhenchao Lin, Yusuke Inoue, Daisuke Saito, Nobuaki Minematsu (UTokyo), Kazuya Saito (UCL) EA2019-134 SIP2019-136 SP2019-83
These days, many teachers claim importance of not native-likeness-based but intelligibility-based assessment of pronunci...
pp.189-194
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan