Information and Systems - Image Engineering (Date: 2017/01/21)

Presentations
A Study on the Construction of Articulatory-to-Acoustic Mapping by Using a Deep Neural Network

Fumiaki Taguchi (Kyushu Univ.), Tokihiko Kaburagi (Kyushu Univ.)

[Date]2017-01-21
[Paper #]SP2016-73
[Invited Talk] Interesting! Deep learning for text-to-speech synthesis

Shinji Takaki (NII)

[Date]2017-01-21
[Paper #]SP2016-71
[Poster Presentation] A Study on Singer-Independent Singing Voice Conversion Using Read Speech Based on Neural Network

Harunori Koike (Tohoku Univ.), Takashi Nose (Tohoku Univ.), Akinori Ito (Tohoku Univ.)

[Date]2017-01-21
[Paper #]SP2016-67
[Poster Presentation] An Interactive Test System for Japanese Special Mora Pronunciation Using Smartphones and Its Evaluation

Sho Sasaki (Iwate Univ.), Jouji Miwa (Iwate Univ.)

[Date]2017-01-21
[Paper #]SP2016-68
[Poster Presentation] Evaluation of DNN-Based Voice Conversion Deceiving Anti-spoofing Verification

Yuki Saito (UT), Shinnosuke Takamichi (UT), Hiroshi Saruwatari (UT)

[Date]2017-01-21
[Paper #]SP2016-69
Conversational Speech Synthesis dealing with Sequence of Sentences

Ishin Fukuoka (Waseda Univ.), Kazuhiko Iwata (Waseda Univ.), Tetsunori Kobayashi (Waseda Univ.)

[Date]2017-01-21
[Paper #]SP2016-74
Spoken dialogue system with brain-machine interface

Makoto Koike (MK Microwave Research)

[Date]2017-01-21
[Paper #]SP2016-65
[Invited Talk] Deep learning in voice conversion

Daisuke Saito (UTokyo)

[Date]2017-01-21
[Paper #]SP2016-72
A study on DNN-based speech synthesis using vector quantization of spectral features

Takashi Nose (Tohoku Univ.), Suzunosuke Ito (Tohoku Univ.)

[Date]2017-01-21
[Paper #]SP2016-75
Non-parametric duration modelling for speech synthesis with a joint model of acoustics and duration

Gustav Eje Henter (NII), Srikanth Ronanki (University of Edinburgh), Oliver Watts (University of Edinburgh), Simon King (University of Edinburgh)

[Date]2017-01-21
[Paper #]SP2016-66
[Poster Presentation] Designing linguistic features for expressive speech synthesis using audiobooks

Chiaki Asai (Nagoya Inst. of Tech.), Kei Sawada (Nagoya Inst. of Tech.), Kei Hashimoto (Nagoya Inst. of Tech.), Keiichiro Oura (Nagoya Inst. of Tech.), Yoshihiko Nankaku (Nagoya Inst. of Tech.), Keiichi Tokuda (Nagoya Inst. of Tech.)

[Date]2017-01-21
[Paper #]SP2016-70
Simultaneous modeling of acoustic feature sequences and their temporal structures for DNN-based speech synthesis

Kei Hashimoto (Nagoya Inst. of Tech.), Keiichiro Oura (Nagoya Inst. of Tech.), Yoshihiko Nankaku (Nagoya Inst. of Tech.), Keiichi Tokuda (Nagoya Inst. of Tech.)

[Date]2017-01-21
[Paper #]SP2016-76