Presentation | 2021-10-19, A study on model training for DNN-HSMM-based speech synthesis using a large-scale speech corpus, Nobuyuki Nishizawa, Gen Hattori |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | In this study, we investigated model training for DNN-HSMM-based speech synthesis using a large speech corpus originally collected for concatenative synthesis. While conventional HSMM-based speech synthesis uses decision trees to predict the HSMM parameters corresponding to the linguistic information, DNN-HSMM-based speech synthesis uses DNNs for this prediction, and is therefore expected to synthesize higher-quality speech. However, because the parameters of the state duration distributions of the HSMMs are estimated simultaneously during training, training by stochastic gradient methods may not progress properly in the early stage, where the states cannot yet be appropriately aligned with the training data. In particular, the training behavior of RNNs using LSTM (long short-term memory) cells for DNN-HSMM-based speech synthesis has not yet been sufficiently studied. The experimental results show that the model can be trained from randomly initialized states by setting the learning rate of the optimizer appropriately, and that when a three-layer bidirectional RNN in which each layer consists of 2048 LSTM cells is used, prediction performance saturates at a training data size of more than 20.6 hours. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | DNN-HSMM-based speech synthesis / hidden semi-Markov models / large-scale speech corpus |
Paper # | SP2021-34,WIT2021-27 |
Date of Issue | 2021-10-12 (SP, WIT) |
Conference Information | |
Committee | SP / WIT / IPSJ-SLP / ASJ-H |
Conference Date | 2021/10/19 (1 day) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Online |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | |
Chair | Norihide Kitaoka (Toyohashi Univ. of Tech.) / Shinji Sakou (Nagoya Inst. of Tech.) / Norihide Kitaoka (Toyohashi Univ. of Tech.) / Hiroaki Kato (NICT) |
Vice Chair | Tomohiro Amemiya (Univ. of Tokyo) / Shuichi Sakamoto (Tohoku University) |
Secretary | (Univ. of Tokyo) / Tomohiro Amemiya(Kobe Univ.) / (Saitama Industrial Tech. Center) / Shuichi Sakamoto(Teikyo Univ.) |
Assistant | Toru Nakashika(Univ. of Electro-Comm.) / Ryo Masumura(NTT) / Minako Hosono(AIST) / Aki Sugano(Nagoya Univ.) / Tomoyasu Komori(NHK) |
Paper Information | |
Registration To | Technical Committee on Speech / Technical Committee on Well-being Information Technology / Special Interest Group on Spoken Language Processing / Auditory Research Meeting |
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | A study on model training for DNN-HSMM-based speech synthesis using a large-scale speech corpus |
Sub Title (in English) | |
Keyword(1) | DNN-HSMM-based speech synthesis |
Keyword(2) | hidden semi-Markov models |
Keyword(3) | large-scale speech corpus |
1st Author's Name | Nobuyuki Nishizawa |
1st Author's Affiliation | KDDI Research, Inc. (KDDI Research) |
2nd Author's Name | Gen Hattori |
2nd Author's Affiliation | KDDI Research, Inc. (KDDI Research) |
Date | 2021-10-19 |
Paper # | SP2021-34,WIT2021-27 |
Volume (vol) | vol.121 |
Number (no) | SP-202,WIT-203 |
Page | pp.52-57 (SP), pp.52-57 (WIT) |
#Pages | 6 |
Date of Issue | 2021-10-12 (SP, WIT) |