Information and Systems - Speech (Date: 2022/11/29 - 2022/12/01)

Presentation
Link Prediction from Text Content by NLP Graph Embedding

Tzu-Ying Yang (NTNU), Hsuan Lei Shao (NTNU), Chih-Chuan Fan (NTNU), Wei-Hsin Wang (NTNU)

[Date] 2022-11-29
[Paper #] NLC2022-9, SP2022-29
Density Ratio Approach-based Multiple Encoder-Decoder ASR Model Integration

Keigo Hojo (TUT), Daiki Mori (TUT), Yukoh Wakabayashi (TUT), Atsunori Ogawa (NTT), Norihide Kitaoka (TUT)

[Date] 2022-11-29
[Paper #] NLC2022-10, SP2022-30
Semi-supervised joint training of text to speech and automatic speech recognition using unpaired text data

Naoki Makishima (NTT), Satoshi Suzuki (NTT), Atsushi Ando (NTT), Ryo Masumura (NTT)

[Date] 2022-11-30
[Paper #] NLC2022-14, SP2022-34
Representing how it is said with what is said

Naoaki Suzuki (NAIST), Satoshi Nakamura (NAIST)

[Date] 2022-11-30
[Paper #] NLC2022-15, SP2022-35
Detecting Persona Information in Chat using Subject Recovery with Machine Translation

Shinji Muraji (Hokkaido Univ.), Toshihiko Ito (Hokkaido Univ.), Kenji Araki (Hokkaido Univ.)

[Date] 2022-11-30
[Paper #] NLC2022-11, SP2022-31
Handling time information for matching broadcast metadata with user-generated text

Takeshi S. Kobayakawa (NHK), Takeshi Sakaki (Univ. Tokyo), Masanao Ochi (Univ. Tokyo), Ichiro Sakata (Univ. Tokyo)

[Date] 2022-11-30
[Paper #] NLC2022-12, SP2022-32
Dialogue disfluency detection using context

Hiroto Nakashima (KIT), Kazutaka Shimada (KIT)

[Date] 2022-11-30
[Paper #] NLC2022-13, SP2022-33
A Japanese Automatic Speech Recognition System on the Next-Gen Kaldi Framework

Wen Shen Teo (UEC), Yasuhiro Minami (UEC)

[Date] 2022-12-01
[Paper #] NLC2022-16, SP2022-36
Domain and language adaptation of large-scale pretrained model for speech recognition of low-resource language

Kak Soky (Kyoto University), Sheng Li (NICT), Chenhui Chu (Kyoto University), Tatsuya Kawahara (Kyoto University)

[Date] 2022-12-01
[Paper #] NLC2022-17, SP2022-37
ASR model adaptation to target domain with large-scale audio data without transcription

Takahiro Kinouchi (TUT), Daiki Mori (TUT), Atsunori Ogawa (NTT), Norihide Kitaoka (TUT)

[Date] 2022-12-01
[Paper #] NLC2022-18, SP2022-38