IEICE Technical Committee Submission System
All Technical Committee Conferences  (All Years)

Search Results: Conference Papers
Conference Papers (available in advance programs), sorted by date, descending
Results 1 - 20 of 46
Each entry lists: Committee / Date, Time / Place / Title / Authors / Abstract / Paper # / Pages
Committee: PRMU, IPSJ-CVIM
Date/Time: 2021-03-05, 09:45
Place: Online
Title: Improved Speech Separation Performance from Monaural Mixed Speech Based on Deep Embedding Network
Authors: Shaoxiang Dang, Tetsuya Matsumoto, Hiroaki Kudo (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.)
Abstract: Speech separation refers to the separation of utterances in which multiple people are speaking simultaneously. The idea ... [more]
Paper #: PRMU2020-85, pp.91-96
Committee: EA, US, SP, SIP, IPSJ-SLP
Date/Time: 2021-03-03, 13:05
Place: Online
Title: [Invited Talk] *
Authors: Masahito Togami (LINE)
Abstract: Recently, deep learning based speech source separation has evolved rapidly. A neural network (NN) is usually learne... [more]
Paper #: EA2020-64, SIP2020-95, SP2020-29, pp.27-32
Committee: EA, US, SP, SIP, IPSJ-SLP
Date/Time: 2021-03-03, 14:05
Place: Online
Title: [Poster Presentation] Noise-robust time-domain speech separation with basis signals for noise
Authors: Kohei Ozamoto (Tokyo Tech), Koji Iwano (TCU), Kuniaki Uto, Koichi Shinoda (Tokyo Tech)
Abstract: Recently, speech separation using deep learning has been extensively studied. TasNet, a time-domain method that directly... [more]
Paper #: EA2020-70, SIP2020-101, SP2020-35, pp.63-67
Committee: EA, SIP, SP
Date/Time: 2019-03-14, 10:25
Place: Nagasaki, i+Land nagasaki (Nagasaki-shi)
Title: Blind speech separation based on approximate joint diagonalization utilizing correlation between neighboring frequency bins
Authors: Taiki Asamizu, Toshihiro Furukawa (TUS)
Abstract: In this paper, we propose a new method that extends approximate joint diagonalization blind speech separation (BSS). ... [more]
Paper #: EA2018-100, SIP2018-106, SP2018-62, pp.7-12
Committee: EA, SIP, SP
Date/Time: 2019-03-15, 13:30
Place: Nagasaki, i+Land nagasaki (Nagasaki-shi)
Title: [Poster Presentation] Design and Evaluation of Ladder Denoising Autoencoder for Auditory Speech Feature Extraction of Overlapped Speech Separation
Authors: Hiroshi Sekiguchi, Yoshiaki Narusue, Hiroyuki Morikawa (Univ. of Tokyo)
Abstract: Primates and other mammals distinguish overlapped speech sounds from one another by recognizing a single sound source whethe... [more]
Paper #: EA2018-155, SIP2018-161, SP2018-117, pp.329-333
Committee: EA, ASJ-H
Date/Time: 2018-08-23, 12:55
Place: Miyagi, Tohoku Gakuin Univ.
Title: Self-produced speech enhancement and suppression method with wearable air- and body-conductive microphones
Authors: Moe Takada, Shogo Seki, Tomoki Toda (Nagoya Univ.)
Abstract: This paper presents a self-produced speech enhancement and suppression method for multichannel signals recorded with bot... [more]
Paper #: EA2018-29, pp.7-12
Committee: SP, IPSJ-SLP (Joint)
Date/Time: 2018-07-26, 16:15
Place: Shizuoka, Sago-Royal-Hotel (Hamamatsu)
Title: Ladder Network Driven from Auditory Computational Model for Multi-talker Speech Separation
Authors: Hiroshi Sekiguchi, Yoshiaki Narusue, Hiroyuki Morikawa (Univ. of Tokyo)
Abstract: This paper introduces a ladder network implementation induced by an auditory computational model for multi-talker speech sepa... [more]
Paper #: SP2018-18, pp.9-13
Committee: SIP, EA, SP, MI (Joint)
Date/Time: 2018-03-19, 09:25
Place: Okinawa
Title: Stable Estimation Method of Spatial Correlation Matrices for Multi-channel NMF
Authors: Yuuki Tachioka (Denso IT Lab)
Abstract: Multi-channel non-negative matrix factorization (MNMF) achieves a high sound source separation performance but its initi... [more]
Paper #: EA2017-103, SIP2017-112, SP2017-86, pp.7-12
Committee: EA
Date/Time: 2018-02-16, 13:10
Place: Hiroshima, Pref. Univ. Hiroshima
Title: The effect of increasing the number of channels with multi-channel non-negative matrix factorization for noisy speech recognition
Authors: Takanobu Uramoto (Oita Univ.), Youhei Okato, Toshiyuki Hanazawa (Mitsubishi Electric), Iori Miura, Shingo Uenohara, Ken'ich Furuya (Oita Univ.)
Abstract: Nonnegative Matrix Factorization (NMF) factorizes a non-negative matrix into two non-negative matrices. In the field of ... [more]
Paper #: EA2017-99, pp.33-38
Committee: NLC, IPSJ-NL, SP, IPSJ-SLP (Joint)
Date/Time: 2017-12-22, 11:20
Place: Tokyo, Waseda Univ. Green Computing Systems Research Organization
Title: A Sound Source Separation Method for Multiple Person Speech Recognition using Wavelet Analysis Based on Sound Source Position Obtained by Depth Sensor
Authors: Nobuhiro Uehara, Kazuo Ikeshiro, Hiroki Imamura (Soka Univ.)
Abstract: Recently, voice information guidance systems in operation, for example at city halls, serve only one person at a time. To realize operat... [more]
Paper #: SP2017-63, pp.79-83
Committee: SIS
Date/Time: 2017-12-14, 10:50
Place: Tottori, Tottori Prefectural Center for Lifelong Learning
Title: Harmonic Structure Detection in Speech Separation Using Modified DFT Pair Based on ASA
Authors: Motohiro Ichikawa, Isao Nakanishi (Tottori Univ.)
Abstract: Humans have the cocktail-party ability to recognize a target voice among various conversation... [more]
Paper #: SIS2017-34, pp.5-9
Committee: WIT, SP
Date/Time: 2017-10-19, 13:20
Place: Fukuoka, Tobata Library of Kyutech (Kitakyushu)
Title: Speech enhancement of utterance while playing with werewolf game "JINRO" based on NMF
Authors: Shunsuke Kawano, Toru Takahashi (OSU)
Abstract: We describe speech enhancement for natural multi-speaker dialogue. To record natural multi-speaker dialogue... [more]
Paper #: SP2017-35, WIT2017-31, pp.7-12
Committee: SP
Date/Time: 2017-08-30, 11:00
Place: Kyoto, Kyoto Univ.
Title: [Poster Presentation] Semi-blind speech separation and enhancement using recurrent neural network
Authors: Masaya Wake, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara (Kyoto Univ.)
Abstract: This paper describes a semi-blind speech enhancement method using a neural network. In a human-robot speech interaction... [more]
Paper #: SP2017-22, pp.13-18
Committee: PRMU, SP
Date/Time: 2017-06-22, 14:45
Place: Miyagi
Title: Postfiltering of STFT Spectrograms Based on Generative Adversarial Networks
Authors: Takuhiro Kaneko (NTT), Shinji Takaki (NII), Hirokazu Kameoka (NTT), Junichi Yamagishi (NII)
Abstract: This paper presents postfiltering of short-time Fourier transform (STFT) spectrograms based on Generative Adversarial Ne... [more]
Paper #: PRMU2017-28, SP2017-4, pp.17-22
Committee: CAS, ICTSSL
Date/Time: 2017-01-26, 09:00
Place: Tokyo, Kikai-Shinko-Kaikan Bldg.
Title: Target Sound Enhancement by Post Processing of Sound Source Separation
Authors: Naoki Shinohara, Kenji Suyama (Tokyo Denki Univ.)
Abstract: Although several methods have been proposed for sound source separation, a suppression ability of interference sound is ... [more]
Paper #: CAS2016-77, ICTSSL2016-31, pp.1-6
Committee: EA, EMM
Date/Time: 2015-11-12, 17:00
Place: Kumamoto, Kumamoto Univ.
Title: Noise suppression method for body-conducted soft speech based on external noise monitoring
Authors: Yusuke Tajiri (NAIST), Tomoki Toda (Nagoya Univ.), Satoshi Nakamura (NAIST)
Abstract: As one of the silent speech interfaces, nonaudible murmur (NAM) microphone has been developed for detecting an extremely... [more]
Paper #: EA2015-31, EMM2015-52, pp.41-46
Committee: SIS, IPSJ-AVM
Date/Time: 2015-09-03, 11:20
Place: Osaka, Kansai Univ.
Title: A Sequential Processing Model for Speech Separation Based on Auditory Scene Analysis
Authors: Isao Nakanishi, Junichi Hanada, Misaki Baba (Tottori Univ.)
Abstract: Speech separation based on auditory scene analysis (ASA) has been widely studied. We propose a processing method of the... [more]
Paper #: SIS2015-16, pp.7-12
Committee: EA
Date/Time: 2014-10-24, 14:20
Place: Tokyo, Central Research Laboratory, Hitachi, Ltd.
Title: [Invited Talk] Speech enhancement techniques in multi-speaker spontaneous speech recognition for conversation scene analysis
Authors: Shoko Araki, Takaaki Hori, Tomohiro Nakatani (NTT)
Abstract: This paper illustrates speech enhancement techniques for multi-speaker distant-talk speech recognition, where a conversa... [more]
Paper #: EA2014-25, pp.9-14
Committee: SIP
Date/Time: 2014-08-28, 16:55
Place: Osaka, Ritsumeikan Univ. (Osaka Umeda Campus)
Title: A Method for Sequential Speech Separation Based on Auditory Scene Analysis
Authors: Junichi Hanada, Isao Nakanishi, Shigang Li (Tottori Univ.)
Abstract: We propose a sequential processing method for speech separation based on auditory scene analysis (ASA). [more]
Paper #: SIP2014-80, pp.37-42
Committee: SP, WIT, ASJ-H
Date/Time: 2014-06-20, 10:25
Place: Ishikawa
Title: Accurate Recognition of Overlapped Speech -- High Speed Speech Separation by Spectral Subtraction and Acoustic Model Training using Separated Speeches --
Authors: Yuto Dekiura, Tetsuya Matsumoto, Yoshinori Takeuchi, Hiroaki Kudo, Noboru Ohnishi, Norihide Kitaoka, Kazuya Takeda (Nagoya Univ.)
Abstract: The purpose of this study is to recognize overlapped speech more accurately. In order to achieve this, it is necessary t... [more]
Paper #: SP2014-56, WIT2014-11, pp.57-62
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan