IEICE Technical Committee Submission System
Tech. Rep. Archives

All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers (available on advance programs), sorted by date descending
Results 1 - 20 of 31
Each entry lists: Committee | Date and Time | Place, followed by the paper title, authors, a truncated abstract, and the paper number with page range.
EA, ASJ-H | 2021-07-16 11:30 | Online
A study on the number of speech samples required for making acoustic models in tailor-made speech synthesis
Keigo Narita, Naofumi Aoki, Atsuhito Udo, Yoshinori Dobashi (Hokkaido Univ.)
In this study, we created speaker dependent acoustic models with varying numbers of samples, and confirmed differences i...
EA2021-16, pp.75-76
SP, IPSJ-SLP (Joint) | 2018-07-26 17:15 | Shizuoka, Sago-Royal-Hotel (Hamamatsu)
Knowledge Distillation from Neural Network Based Acoustic Model based on Different Decision Tree
Takashi Fukuda, Samuel Thomas (IBM)
This paper proposes a method to transfer acoustic knowledge from teacher network with a different decision tree to a stu...
SP2018-20, pp.21-24
SIP, EA, SP, MI (Joint) | 2018-03-20 09:00 | Okinawa
[Poster Presentation] Performance evaluation of unknown sound clustering for indoor-environmental sound classification based on self-generated acoustic model
Sakiko Mishima, Yukoh Wakabayashi, Takahiro Fukumori, Keisuke Imoto, Masato Nakayama, Takanobu Nishiura (Ritsumeikan Univ.)
Indoor-environmental sound classification is useful for surveillance systems which monitor the situations in the dark an...
EA2017-152, SIP2017-161, SP2017-135, pp.277-280
NLC, IPSJ-NL, SP, IPSJ-SLP (Joint) | 2017-12-21 12:50 | Tokyo, Waseda Univ. Green Computing Systems Research Organization
[Poster Presentation] Development of Speaker/Environment-Dependent Acoustic Model for Non-Audible Murmur Recognition Based on DNN Adaptation
Seita Noda, Tomoki Hayashi, Tomoki Toda, Kazuya Takeda (Nagoya Univ.)
In this research, we aim to improve the performance of non-audible murmur (NAM) recognition towards the development of s...
SP2017-56, pp.7-10
SP, SIP, EA | 2017-03-01 12:40 | Okinawa, Okinawa Industry Support Center
[Poster Presentation] Indoor-environmental sound identification based on deep neural network with higher-dimensional features
Sakiko Mishima, Yukoh Wakabayashi, Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura (Ritsumeikan Univ.)
Surveillance systems with a video camera have been utilized for the safety of people. It is important to identify the in...
EA2016-87, SIP2016-142, SP2016-82, pp.31-36
SP, SIP, EA | 2017-03-01 12:40 | Okinawa, Okinawa Industry Support Center
[Poster Presentation] An investigation of speaker adaptation method for DNN-based speech synthesis using speaker codes
Nobukatsu Hojo, Yusuke Ijima (NTT)
In this work, we conducted objective evaluation experiments on the conventional speaker adaptation methods for DNN-based...
EA2016-108, SIP2016-163, SP2016-103, pp.147-152
SP, SIP, EA | 2017-03-02 09:00 | Okinawa, Okinawa Industry Support Center
[Poster Presentation] Study of branch selecting DNN acoustic model for robustness to environmental variation
Takafumi Moriya, Taichi Asami, Yoshikazu Yamaguchi, Yushi Aono (NTT)
The performance of speech recognition tasks can be significantly improved by the use of deep neural networks (DNN). Spee...
EA2016-131, SIP2016-186, SP2016-126, pp.277-282
SP | 2017-01-21 16:35 | Tokyo, The University of Tokyo
Simultaneous modeling of acoustic feature sequences and its temporal structures for DNN-based speech synthesis
Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda (Nagoya Inst. of Tech.)
In statistical parametric speech synthesis, a hidden Markov model (HMM) is widely used as an acoustic model. Recently, d...
SP2016-76, pp.71-76
SP | 2016-10-27 16:00 | Shizuoka, Shizuoka University
Word modeling for end-to-end Japanese speech recognition
Hitoshi Ito, Aiko Hagiwara, Manon Ichiki, Takeshi Mishima, Shoei Sato (NHK), Akio Kobayashi (NES)
In this paper, we propose a novel modeling for end-to-end Japanese speech recognition using Deep Neural Networks (DNN). W...
SP2016-47, pp.31-36
PRMU, IPSJ-CVIM, IBISML | 2016-09-05 16:45 | Toyama
Acoustic event detection and removal using LSTM-CTC for speech recognition
Yu Nasu (former Toshiba), Hiroshi Fujimura (Toshiba)
Deep learning techniques have drastically increased the speech recognition performance. However, there are few practical...
PRMU2016-69, IBISML2016-24, pp.121-126
SP | 2016-08-24 14:00 | Kyoto, ACCMS, Kyoto Univ.
[Invited Talk] Unsupervised Music Understanding based on Hierarchical Bayesian Acoustic and Language Models
Kazuyoshi Yoshii (Kyoto Univ.)
This paper presents a statistical approach to unsupervised music understanding. Our goal is to estimate musical notes fr...
SP2016-29, pp.13-18
SP, IPSJ-SLP (Joint) | 2016-07-28 14:00 | Yamagata, Takinoyu Hotel
Evaluation of Japanese English DNN Acoustic Models with English Level
Yuta Kawachi, Hirokazu Masataki, Taichi Asami, Yushi Aono (NTT)
In this paper, we propose an acoustic model that takes into consideration foreign language fluency level by extracting a...
SP2016-20, pp.1-6
SP, IPSJ-SLP (Joint) | 2016-07-28 15:45 | Yamagata, Takinoyu Hotel
On the Use of Speaker Codes for Multi-Speaker Modeling in DNN-based Speech Synthesis
Nobukatsu Hojo, Yusuke Ijima (NTT), Hideyuki Mizuno (Tokyo University of Science, Suwa)
Recent studies have shown that DNN-based speech synthesis can generate more natural synthesized speech than the conventi...
SP2016-22, pp.13-18
SP | 2015-08-21 16:15 | Iwate, Iwate Prefectural Univ.
Training Data Selection for Acoustic Modeling Based on Submodular Optimization of Joint KL Divergence
Taichi Asami, Ryo Masumura, Hirokazu Masataki, Manabu Okamoto, Sumitaka Sakauchi (NTT)
This paper provides a novel training data selection method to construct acoustic models for automatic speech recogniti...
SP2015-58, pp.45-50
SP, IPSJ-SLP (Joint) | 2015-07-16 15:10 | Nagano, Katakura Suwako Hotel
A study on discriminative approach for estimation of the divergence between distributions and its application to language identification
Yosuke Kashiwagi, Congying Zhang, Daisuke Saito, Nobuaki Minematsu (Tokyo Univ.)
In this paper, we propose a method for estimating the statistical divergence between probability distributions by a disc...
SP2015-38, pp.13-18
SP, IPSJ-SLP (Joint) | 2015-07-17 09:30 | Nagano, Katakura Suwako Hotel
Multiple Feed-forward Deep Neural Networks for Statistical Parametric Speech Synthesis
Shinji Takaki (NII), SangJin Kim (Naver Labs), Junichi Yamagishi (NII), JongJin Kim (Naver Labs)
In this paper, we investigate a combination of several feed-forward deep neural networks (DNNs) for a high-quality stati...
SP2015-44, pp.49-54
SIP, EA, SP | 2015-03-02 11:40 | Okinawa
Optimization of impulse responses for model training in reverberant speech recognition
Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.)
The reverberant speech degrades the speech recognition performance in the field of distant-talking speech. As one of app...
EA2014-78, SIP2014-119, SP2014-141, pp.37-42
NLC, IPSJ-NL, SP, IPSJ-SLP, JSAI-SLUD (Joint) | 2014-12-16 11:00 | Kanagawa, Tokyo Institute of Technology (Suzukakedai Campus)
Speaker adaptation using speaker-normalized DNN based on speaker codes
Yosuke Kashiwagi, Daisuke Saito, Nobuaki Minematsu, Keikichi Hirose (Univ. of Tokyo)
Recently, deep neural network (DNN) becomes one of the main streams of acoustic modeling for automatic speech recognitio...
SP2014-118, pp.105-110
SP, IPSJ-MUS | 2014-05-24 11:30 | Tokyo
Native language recognition using machine learning
Ryota Sakagami, Kouki Takeshita, Longbiao Wang, Masahiro Iwahashi (Nagaoka Univ. of Tech)
The difference in pronunciation occurs in a non-native speaker and a native speaker. Therefore, communication is difficu...
SP2014-13, pp.139-141
SP | 2014-02-28 10:30 | Tokushima, The University of Tokushima
Evaluation of reverberant speech recognition by selecting suitable acoustic model with acoustic parameters
Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.)
The reverberant speech degrades the speech recognition performance in the field of distant-talking speech. As one of app...
SP2013-108, pp.7-12
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan