Presentation 2021-03-04
A Vocoder-free Any-to-Many Voice Conversion using Pre-trained vq-wav2vec
Takeshi Koshizuka, Hidefumi Ohmura, Kouichi Katsurada
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Voice conversion (VC) is a technique that converts the speaker-dependent non-linguistic information of an input speech into that of another speaker while retaining its linguistic information. A typical VC system is composed of two modules: an encoder module that removes speaker individuality from the speech, and a decoder module that incorporates another speaker's individuality into the synthesized speech. In this paper, we propose a vocoder-free any-to-many voice conversion model that uses the pre-trained vq-wav2vec as its encoder module. Our model makes it possible to convert speech using only a small amount of training data by pre-training the RNN_MS-like decoder module in addition to pre-training the encoder module. The difference from the previous approach, which also pre-trains both the encoder and the decoder modules, is that our target is any-to-many voice conversion and that the decoder module is pre-trained on the voice conversion task. The experimental results show that good conversion performance is obtained. We have also confirmed that the system can add new target speakers without degrading conversion performance for the pre-trained target speakers.
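The two-module pipeline described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the codebook stands in for the pre-trained vq-wav2vec quantizer (which discretizes frames into content tokens, discarding speaker-dependent detail), and the per-speaker embedding table stands in for the pre-trained any-to-many decoder. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the vq-wav2vec encoder: a fixed codebook that quantizes
# each frame to its nearest code, keeping content and dropping fine detail.
CODEBOOK = rng.standard_normal((8, 4))  # 8 codes, 4-dim frame features

def encode(frames):
    """Map each frame to the index of its nearest codebook entry (content tokens)."""
    dists = ((frames[:, None, :] - CODEBOOK[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# One embedding per pre-trained target speaker; adding a speaker means
# adding a row here, without touching the existing ones (any-to-many).
SPEAKERS = {"spk_a": rng.standard_normal(4), "spk_b": rng.standard_normal(4)}

def decode(tokens, speaker):
    """Re-synthesize frame features from content tokens plus a speaker embedding."""
    return CODEBOOK[tokens] + SPEAKERS[speaker]

src = rng.standard_normal((10, 4))    # 10 frames from an arbitrary source speaker
tokens = encode(src)                  # speaker-independent content tokens
converted = decode(tokens, "spk_b")   # same content, target speaker's identity
print(converted.shape)                # (10, 4)
```

Because `encode` never sees a speaker label, any source speaker maps to the same token space, which is what makes the conversion "any-to-many"; only the decoder side needs per-speaker parameters.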
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Any-to-Many Voice Conversion / Encoder-Decoder model / Pre-training / vq-wav2vec
Paper # EA2020-89,SIP2020-120,SP2020-54
Date of Issue 2021-02-24 (EA, SIP, SP)

Conference Information
Committee EA / US / SP / SIP / IPSJ-SLP
Conference Date 2021/3/3(2days)
Place (in Japanese) (See Japanese page)
Place (in English) Online
Topics (in Japanese) (See Japanese page)
Topics (in English) Speech, Engineering/Electro Acoustics, Signal Processing, Ultrasonics, and Related Topics
Chair Kenichi Furuya(Oita Univ.) / Hikaru Miura(Nihon Univ.) / Hisashi Kawai(NICT) / Kazunori Hayashi(Kyoto Univ.) / Norihide Kitaoka(Toyohashi Univ. of Tech.)
Vice Chair Yoshinobu Kajikawa(Kansai Univ.) / Kentaro Matsui(NHK) / Jun Kondo(Shizuoka Univ.) / Yoshikazu Koike(Shibaura Inst. of Tech.) / Yukihiro Bandou(NTT) / Toshihisa Tanaka(Tokyo Univ. Agri.&Tech.)
Secretary Yoshinobu Kajikawa(Univ. of Tokyo) / Kentaro Matsui(NTT) / Jun Kondo(Doshisha Univ.) / Yoshikazu Koike(Tohoku Univ.) / Yukihiro Bandou(Waseda Univ.) / Toshihisa Tanaka(Hosei Univ.)
Assistant Yukou Wakabayashi(Tokyo Metropolitan Univ.) / Tatsuya Komatsu(LINE) / Shinnosuke Hirata(Tokyo Inst. of Tech.) / Yusuke Ijima(NTT) / Yuichi Tanaka(Tokyo Univ. Agri.&Tech.)

Paper Information
Registration To Technical Committee on Engineering Acoustics / Technical Committee on Ultrasonics / Technical Committee on Speech / Technical Committee on Signal Processing / Special Interest Group on Spoken Language Processing
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) A Vocoder-free Any-to-Many Voice Conversion using Pre-trained vq-wav2vec
Sub Title (in English)
Keyword(1) Any-to-Many Voice Conversion
Keyword(2) Encoder-Decoder model
Keyword(3) Pre-training
Keyword(4) vq-wav2vec
1st Author's Name Takeshi Koshizuka
1st Author's Affiliation Tokyo University of Science(TUS)
2nd Author's Name Hidefumi Ohmura
2nd Author's Affiliation Tokyo University of Science(TUS)
3rd Author's Name Kouichi Katsurada
3rd Author's Affiliation Tokyo University of Science(TUS)
Date 2021-03-04
Paper # EA2020-89,SIP2020-120,SP2020-54
Volume (vol) vol.120
Number (no) no.397(EA), no.398(SIP), no.399(SP)
Page pp.176-181(EA), pp.176-181(SIP), pp.176-181(SP)
#Pages 6
Date of Issue 2021-02-24 (EA, SIP, SP)