IEICE Technical Committee Submission System
Tech. Rep. Archives
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
Conference Papers (Available on Advance Programs), sorted by date descending
Results 21 - 40 of 99
Committee: EA, US, SP, SIP, IPSJ-SLP
Date: 2021-03-04 17:10
Place: Online
Title: A Vocoder-free Any-to-Many Voice Conversion using Pre-trained vq-wav2vec
Authors: Takeshi Koshizuka, Hidefumi Ohmura, Kouichi Katsurada (TUS)
Abstract: Voice conversion (VC) is a technique that converts speaker-dependent non-linguistic information to another speaker's one...
Paper #: EA2020-89 SIP2020-120 SP2020-54, pp.176-181

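As background for the entry above (and not a description of the authors' system), discrete vq-wav2vec units can be extracted from a released fairseq checkpoint roughly as in the fairseq examples; in the sketch below the checkpoint path is a placeholder and the random tensor merely stands in for a 16 kHz waveform.

    import torch
    from fairseq.models.wav2vec import Wav2VecModel

    # Load a released vq-wav2vec checkpoint (path is a placeholder).
    cp = torch.load('/path/to/vq-wav2vec.pt')
    model = Wav2VecModel.build_model(cp['args'], task=None)
    model.load_state_dict(cp['model'])
    model.eval()

    # A 16 kHz waveform; random noise stands in for real speech here.
    wav_input_16khz = torch.randn(1, 10000)
    z = model.feature_extractor(wav_input_16khz)     # continuous frame features
    _, idxs = model.vector_quantizer.forward_idx(z)  # discrete codebook indices
    print(idxs.shape)                                # (batch, frames, groups)

A conversion model can then be trained on such pretrained units; the specific architecture used in the paper is not reproduced here.
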
Committee: HCGSYMPO (2nd)
Date: 2020-12-15 to 2020-12-17
Place: Online
Title: Effects on Speakers' Behaviors in Speech Rate Converted Conversation by Keeping Constant of Listening Speed for Hearer
Authors: Hiroyuki Oba, Tamami Mizuta, Hiroko Tokunaga, Naoki Mukawa, Hiroto Saito (Tokyo Denki Univ.)
Abstract: In this study, we evaluate the effects on speakers' behaviors in speech rate converted conversation that keeps the liste...

Committee: NLC, IPSJ-NL, SP, IPSJ-SLP
Date: 2020-12-03 16:50
Place: Online
Title: Stabilize Fundamental Frequency of StarGAN based Voice Conversion
Authors: Masashi Kimura (ConLab.), Hideyuki Kasuga (ZIKU)
Abstract: Virtual YouTubers and Virtual Influencers are attracting attention; these are video streamers with avatar appearances created...
Paper #: NLC2020-20 SP2020-23, pp.34-37

Committee: WIT
Date: 2020-06-12 13:30
Place: Online
Title: Improving the pronounce clarity of dysarthric speech using CycleGAN
Authors: Shuhei Imai, Takashi Nose, Aoi Kanagaki (Tohoku Univ.), Satoshi Watanabe (HTS), Akinori Ito (Tohoku Univ.)
Abstract: Several voice conversion systems have been developed that convert dysarthric speech into healthy speech. The convent...
Paper #: WIT2020-1, pp.1-6

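The CycleGAN-based entry above relies on cycle-consistency, which lets a mapping between dysarthric and healthy speech be learned from unpaired data. Below is a generic, minimal PyTorch sketch of the cycle-consistency and identity losses only, not the authors' model; the toy linear generators, feature dimension, and loss weights are placeholder assumptions.

    import torch
    import torch.nn as nn

    # Toy stand-ins for the two generators G: X -> Y and F: Y -> X.
    # Real CycleGAN-VC generators are convolutional; these are placeholders.
    G = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 40))
    F = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 40))
    l1 = nn.L1Loss()

    x = torch.randn(8, 40)  # source-domain spectral frames (e.g., dysarthric)
    y = torch.randn(8, 40)  # target-domain spectral frames (healthy), unpaired with x

    # Cycle-consistency: x -> G(x) -> F(G(x)) should return to x, and likewise for y.
    cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)

    # Identity loss: a generator fed data already in its output domain
    # should change it as little as possible.
    identity_loss = l1(G(y), y) + l1(F(x), x)

    # A full CycleGAN adds adversarial losses from two discriminators;
    # the weights here are illustrative only.
    loss = 10.0 * cycle_loss + 5.0 * identity_loss
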
Committee: SP, EA, SIP
Date: 2020-03-03 09:00
Place: Okinawa, Okinawa Industry Support Center (Cancelled, but the technical report was issued)
Title: Evaluation of vocal personality and expression for speech synthesized by non-parallel voice conversion with narrative speech
Authors: Ryotaro Nagase, Keisuke Imoto, Ryosuke Yamanishi, Yoichi Yamashita (Ritsumeikan Univ.)
Abstract: In voice conversion technology, reproducing emotion, intonation, and pauses is one of the research issues. Howe...
Paper #: EA2019-138 SIP2019-140 SP2019-87, pp.213-218

Committee: SP, EA, SIP
Date: 2020-03-03 09:00
Place: Okinawa, Okinawa Industry Support Center (Cancelled, but the technical report was issued)
Title: Cross-Lingual Voice Conversion using Cyclic Variational Auto-encoder
Authors: Hikaru Nakatani, Patrick Lumban Tobing, Kazuya Takeda, Tomoki Toda (Nagoya Univ.)
Abstract: In this report, we present a novel cross-lingual voice conversion (VC) method based on cyclic variational auto-encoder (...
Paper #: EA2019-139 SIP2019-141 SP2019-88, pp.219-224

Committee: EA, SIP, SP
Date: 2019-03-15 13:30
Place: Nagasaki, i+Land nagasaki (Nagasaki-shi)
Title: [Poster Presentation] Robustness of statistical voice conversion based on waveform modification against external noise
Authors: Yusuke Kurita, Kazuhiro Kobayashi, Kazuya Takeda (Nagoya Univ.), Tomoki Toda (Nagoya Univ./JST PRESTO)
Abstract: In this report, we investigate statistical voice conversion (VC) under noisy environments. VC achieves conversion f...
Paper #: EA2018-153 SIP2018-159 SP2018-115, pp.317-322

Committee: SP
Date: 2018-08-27 11:35
Place: Kyoto, Kyoto Univ.
Title: [Poster Presentation] An Experimental Study on Transforming the Emotion in Speech using GAN
Authors: Kenji Yasuda, Ryohei Orihara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga (UEC)
Abstract: In domain transfer tasks, deep learning has made it possible to generate more natural and highly accurate output. Especial...
Paper #: SP2018-26, pp.19-22

Committee: AI
Date: 2018-07-02 15:50
Place: Hokkaido
Title: Transforming the Emotion in Speech using CycleGAN
Authors: Kenji Yasuda, Ryohei Orihara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga (UEC)
Abstract: In domain transfer tasks, deep learning makes it possible to generate more natural and highly accurate output. Especially ...
Paper #: AI2018-11, pp.61-66

Committee: PRMU, SP
Date: 2018-06-28 14:40
Place: Nagano
Title: Study of improving speech intelligibility for glossectomy patients via voice conversion with sound and lip movement
Authors: Seiya Ogino, Hiroki Murakami, Sunao Hara, Masanobu Abe (Okayama Univ.)
Abstract: In this paper, we propose a multimodal voice conversion based on a deep neural network using audio and lip movement info...
Paper #: PRMU2018-23 SP2018-3, pp.7-12

Committee: PRMU, SP
Date: 2018-06-28 15:10
Place: Nagano
Title: Multimodal voice conversion using deep bottleneck features and deep canonical correlation analysis
Authors: Satoshi Tamura, Kento Horio, Hajime Endo, Satoru Hayamizu (Gifu Univ.), Tomoki Toda (Nagoya Univ.)
Abstract: In this paper, we aim at improving the speech quality in voice conversion and propose a novel multi-modal voice conversi...
Paper #: PRMU2018-24 SP2018-4, pp.13-18

Committee: SIP, EA, SP, MI (Joint)
Date: 2018-03-19 10:25
Place: Okinawa
Title: Non-parallel and Many-to-Many Voice Conversion Using Variational Autoencoder Conditioned by Phonetic Posteriorgrams and d-vectors
Authors: Yuki Saito (NTT/Univ. of Tokyo), Yusuke Ijima, Kyosuke Nishida (NTT), Shinnosuke Takamichi (Univ. of Tokyo)
Abstract: This paper proposes novel frameworks for non-parallel and many-to-many voice conversion (VC) using variational autoencod...
Paper #: EA2017-105 SIP2017-114 SP2017-88, pp.21-26

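The non-parallel VAE-based entry above hinges on a decoder conditioned on speaker information, so that swapping the condition changes the speaker. The following is a minimal conditional-VAE sketch, not the authors' framework; the dimensions, the linear encoder and decoder, and the random vector standing in for a real d-vector are all assumptions for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalVAE(nn.Module):
        # Minimal conditional VAE: the decoder sees the latent code plus a
        # speaker embedding, so the speaker condition can be swapped at conversion time.
        def __init__(self, feat_dim=40, cond_dim=16, latent_dim=8):
            super().__init__()
            self.enc = nn.Linear(feat_dim, 2 * latent_dim)
            self.dec = nn.Linear(latent_dim + cond_dim, feat_dim)

        def forward(self, x, cond):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            recon = self.dec(torch.cat([z, cond], dim=-1))
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
            return recon, kl

    model = ConditionalVAE()
    x = torch.randn(8, 40)    # acoustic feature frames
    spk = torch.randn(8, 16)  # speaker embedding (a d-vector would be used in practice)
    recon, kl = model(x, spk)
    loss = F.mse_loss(recon, x) + kl
    # At conversion time, encode source frames and decode with the target speaker's embedding.
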
Committee: SIP, EA, SP, MI (Joint)
Date: 2018-03-20 09:00
Place: Okinawa
Title: [Poster Presentation] Development of NU Voice Conversion System 2018
Authors: Patrick Lumban Tobing, Yi-Chiao Wu, Tomoki Hayashi, Kazuhiro Kobayashi (Nagoya Univ.), Tomoki Toda (Nagoya Univ./JST PRESTO)
Abstract: This paper presents the NU (Nagoya University) voice conversion (VC) system for the HUB task of Voice Conversion Challenge ...
Paper #: EA2017-138 SIP2017-147 SP2017-121, pp.203-208

Committee: SIP, EA, SP, MI (Joint)
Date: 2018-03-20 09:00
Place: Okinawa
Title: [Poster Presentation] A Hybrid Approach on Electrolaryngeal Speech Enhancement based on Spectral Differential Features and Noise Suppression
Authors: Mohammad Eshghi, Kazuhiro Kobayashi, Tomoki Toda (Nagoya Univ.)
Abstract: This work presents a hybrid approach for enhancing the quality of electrolaryngeal (EL) speech. Current hybrid enhan...
Paper #: EA2017-141 SIP2017-150 SP2017-124, pp.221-226

Committee: SIP, EA, SP, MI (Joint)
Date: 2018-03-20 14:45
Place: Okinawa
Title: Development of NU non-parallel Voice Conversion System 2018
Authors: Yi-Chiao Wu, Patrick Lumban Tobing, Tomoki Hayashi, Kazuhiro Kobayashi, Tomoki Toda (Nagoya Univ.)
Abstract: This paper introduces the NU non-parallel voice conversion (VC) system developed at Nagoya University for the SPOKE task of ...
Paper #: EA2017-172 SIP2017-181 SP2017-155, pp.385-390

Committee: SP, ASJ-H
Date: 2018-01-20 13:50
Place: Tokyo, The University of Tokyo
Title: DNN Based Voice Conversion Method Considering Outputs of Multiple Networks
Authors: Takuya Fujioka, Sun Qinghua (Hitachi)
Abstract: In many conventional statistical voice conversion methods, the relations of source and target speech on all frames are e...
Paper #: SP2017-68, pp.11-15

Committee: SP, ASJ-H
Date: 2018-01-21 16:00
Place: Tokyo, The University of Tokyo
Title: A study on voice conversion based on WaveNet
Authors: Jumpei Niwa, Takenori Yoshimura, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda (NIT)
Abstract: This paper proposes a voice conversion technique based on WaveNet to directly generate target audio waveforms from acous...
Paper #: SP2017-84, pp.99-104

Committee: SP, IPSJ-SLP (Joint)
Date: 2017-07-27 14:30
Place: Miyagi, Akiu Resort Hotel Crescent
Title: [Invited Talk] Synthesis, Recognition and Conversion of Various Speech Using Deep Learning and Their Applications
Authors: Takashi Nose (Tohoku Univ.)
Abstract: This paper focuses on synthesis, recognition and conversion of various speech in speech processing using deep learni...
Paper #: SP2017-16, pp.3-8

Committee: SP, IPSJ-SLP (Joint)
Date: 2017-07-27 16:15
Place: Miyagi, Akiu Resort Hotel Crescent
Title: Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities and Evaluation of Dual Learning
Authors: Hiroyuki Miyoshi, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari (Univ. of Tokyo)
Abstract: Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional V...
Paper #: SP2017-17, pp.9-14

Committee: SP
Date: 2017-01-21 11:00
Place: Tokyo, The University of Tokyo
Title: [Poster Presentation] A Study on Singer-Independent Singing Voice Conversion Using Read Speech Based on Neural Network
Authors: Harunori Koike, Takashi Nose, Akinori Ito (Tohoku Univ.)
Abstract: The conventional method has the problem that it requires speech of the source speaker for training. We proposed a me...
Paper #: SP2016-67, pp.17-22

Download formats: Text, pLaTeX, CSV, BibTeX
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan