Presentation | 2021-03-04 Evaluation of Attention Fusion based Audio-Visual Target Speaker Extraction on Real Recordings Hiroshi Sato, Tsubasa Ochiai, Keisuke Kinoshita, Marc Delcroix, Tomohiro Nakatani, Shoko Araki |
---|---|
PDF Download Page | PDF download page link |
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | Audio-visual target speech extraction, which aims to extract a target speaker's voice from a mixture using audio and visual clues, has received much interest. In previous works, audio-visual target speaker extraction has shown more stable performance than single-modality methods on simulated data. However, its adaptation to realistic situations, as well as its evaluation on real recorded mixtures, has not been fully explored. In particular, we focus on the clue corruption problem, which often occurs in real recordings. In this work, we propose a novel attention mechanism for multi-modal fusion, together with training methods that enable the selective use of more reliable clues. We record an audio-visual dataset of simultaneous speech with realistic visual clue corruption, and show that audio-visual target speech extraction with our proposals works successfully on real data as well as on simulated data. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | audio-visual / multi-modal fusion / target speaker extraction / SpeakerBeam |
Paper # | EA2020-88,SIP2020-119,SP2020-53 |
Date of Issue | 2021-02-24 (EA, SIP, SP) |
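The attention fusion described in the abstract weights each modality's clue by its estimated reliability before combining them. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of the general idea, with hypothetical names (`attention_fusion`, the scoring matrix `W`): per-modality clue embeddings are scored against the mixture representation, softmax-normalized, and combined as a convex sum, so a corrupted clue can receive low weight.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fusion(clues, query, W):
    """Fuse per-modality clue embeddings by attention (illustrative sketch).

    clues: (M, D) array, one row per modality (e.g. audio clue, visual clue).
    query: (D,) internal representation derived from the mixture.
    W:     (D, D) scoring matrix (learned in a real system; arbitrary here).
    Returns the fused (D,) clue and the (M,) attention weights.
    """
    scores = clues @ W @ query   # one scalar score per modality
    weights = softmax(scores)    # reliability-like weights, sum to 1
    fused = weights @ clues      # convex combination of the clues
    return fused, weights
```

In a trained extractor the weights would be produced by learned layers and trained (per the abstract) so that corrupted clues are down-weighted; this sketch only shows the fusion arithmetic.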
Conference Information | |
Committee | EA / US / SP / SIP / IPSJ-SLP |
---|---|
Conference Date | 2021/3/3 (2 days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Online |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | Speech, Engineering/Electro Acoustics, Signal Processing, Ultrasonics, and Related Topics |
Chair | Kenichi Furuya(Oita Univ.) / Hikaru Miura(Nihon Univ.) / Hisashi Kawai(NICT) / Kazunori Hayashi(Kyoto Univ.) / Norihide Kitaoka(Toyohashi Univ. of Tech.) |
Vice Chair | Yoshinobu Kajikawa(Kansai Univ.) / Kentaro Matsui(NHK) / Jun Kondo(Shizuoka Univ.) / Yoshikazu Koike(Shibaura Inst. of Tech.) / Yukihiro Bandou(NTT) / Toshihisa Tanaka(Tokyo Univ. Agri.&Tech.) |
Secretary | Yoshinobu Kajikawa(Univ. of Tokyo) / Kentaro Matsui(NTT) / Jun Kondo(Doshisha Univ.) / Yoshikazu Koike(Tohoku Univ.) / (Univ. of Tokyo) / Yukihiro Bandou(Waseda Univ.) / Toshihisa Tanaka(Hosei Univ.) / (Waseda Univ.) |
Assistant | Yukou Wakabayashi(Tokyo Metropolitan Univ.) / Tatsuya Komatsu(LINE) / Shinnosuke Hirata(Tokyo Inst. of Tech.) / Yusuke Ijima(NTT) / Yuichi Tanaka(Tokyo Univ. Agri.&Tech.) |
Paper Information | |
Registration To | Technical Committee on Engineering Acoustics / Technical Committee on Ultrasonics / Technical Committee on Speech / Technical Committee on Signal Processing / Special Interest Group on Spoken Language Processing |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | Evaluation of Attention Fusion based Audio-Visual Target Speaker Extraction on Real Recordings |
Sub Title (in English) | |
Keyword(1) | audio-visual |
Keyword(2) | multi-modal fusion |
Keyword(3) | target speaker extraction |
Keyword(4) | SpeakerBeam |
1st Author's Name | Hiroshi Sato |
1st Author's Affiliation | NTT Corporation(NTT) |
2nd Author's Name | Tsubasa Ochiai |
2nd Author's Affiliation | NTT Corporation(NTT) |
3rd Author's Name | Keisuke Kinoshita |
3rd Author's Affiliation | NTT Corporation(NTT) |
4th Author's Name | Marc Delcroix |
4th Author's Affiliation | NTT Corporation(NTT) |
5th Author's Name | Tomohiro Nakatani |
5th Author's Affiliation | NTT Corporation(NTT) |
6th Author's Name | Shoko Araki |
6th Author's Affiliation | NTT Corporation(NTT) |
Date | 2021-03-04 |
Paper # | EA2020-88,SIP2020-119,SP2020-53 |
Volume (vol) | vol.120 |
Number (no) | EA-397,SIP-398,SP-399 |
Page | pp.170-175(EA), pp.170-175(SIP), pp.170-175(SP) |
#Pages | 6 |
Date of Issue | 2021-02-24 (EA, SIP, SP) |