IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 20 of 125
Committee / Date & Time / Place / Paper Title & Authors / Abstract / Paper #
EA 2024-05-22
14:15
Online Online To be determined -- To be determined --
Tsubasa Ochiai (NTT), Kazuma Iwamoto (Doshisha Univ.), Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki (NTT), Shigeru Katagiri (Doshisha Univ.)
(To be available after the conference date)
SIS 2024-03-14
13:00
Kanagawa Kanagawa Institute of Technology
(Primary: On-site, Secondary: Online)
On Time-Position Detection of Signals under Noise Considering Threshold -- Applications of Fractal Dimension Filters --
Hideo Shibayama (Shibaura Institute of Technology), Yoshiaki Makabe (Kanagawa Institute of Technology), Kenji Muto (Shibaura Institute of Technology), Tomoaki Kimura (Kanagawa Institute of Technology) SIS2023-45
Conflicts due to neighborhood noise can occur even when the sound pressure level is low. In such cases, the sound pressu...
pp.1-6
CAS, CS 2024-03-14
13:30
Okinawa   Characterization of Semantic Communications in Speech Signal Transmission
Futo Iwanaga, Daisuke Umehara (Kyoto Inst. of Tech.) CAS2023-118 CS2023-111
In recent years, the volume of data in data communication has surged. Characterization of Semantic Communications in Spe...
pp.41-46
SIP, SP, EA, IPSJ-SLP 2024-03-01
10:40
Okinawa
(Primary: On-site, Secondary: Online)
An Investigation on the Speech Recovery from EEG Signals Using Transformer
Tomoaki Mizuno (The Univ. of Electro-Communications), Takuya Kishida (Aichi Shukutoku Univ.), Natsue Yoshimura (Tokyo Tech), Toru Nakashika (The Univ. of Electro-Communications) EA2023-108 SIP2023-155 SP2023-90
Synthesizing full speech from electroencephalography (EEG) signals is a challenging task. In this paper, speech reconstru...
pp.277-282
SIP, SP, EA, IPSJ-SLP 2024-03-01
15:25
Okinawa
(Primary: On-site, Secondary: Online)
Investigation of objective intelligibility metrics based on speech foundation models for Clarity Prediction Challenge 2
Katsuhiko Yamamoto (CyberAgent) EA2023-119 SIP2023-166 SP2023-101
Speech Foundation Models (SFMs), which use components like the encoder layer of Whisper, have been suggested to separate...
pp.334-339
EA, ASJ-H, ASJ-MA, ASJ-SP 2023-07-02
15:10
Hokkaido   Speech Restoration of Spectrogram Images Printed in a Document "Visible Speech" Published in 1947
Naofumi Aoki (Hokkaido Univ.) EA2023-6
The restoration of speech materials recorded in the past might be regarded as a study in acoustical archeology. It may p...
pp.12-15
HIP, HCS, HI-SIGCOASTER 2023-05-15
10:20
Okinawa Okinawa Industry Support Center
(Primary: On-site, Secondary: Online)
Cognitive Load Estimation of Speech-in-Noise Recall Task with State-Space Models
Mateusz Dubiel (uni.lu), Minoru Nakayama (Tokyo Tech.), Xin Wang (NII) HCS2023-7 HIP2023-7
Cognitive workload during a listening and recall task was estimated using a state-space model based on metrics of pupill...
pp.29-32
PRMU, IBISML, IPSJ-CVIM 2023-03-02
15:10
Hokkaido Future University Hakodate
(Primary: On-site, Secondary: Online)
[Invited Talk] --
Yuma Koizumi (Google Research) PRMU2022-87 IBISML2022-94
Machine learning tasks that deal with acoustic signals can be broadly classified into "recognizing sounds" and "generati...
p.149
SP, IPSJ-SLP, EA, SIP 2023-02-28
13:00
Okinawa
(Primary: On-site, Secondary: Online)
[Invited Talk] Multiple sound spot synthesis meets multilingual speech synthesis -- Implementation is really all we need --
Takuma Okamoto (NICT) EA2022-87 SIP2022-131 SP2022-51
A multilingual multiple sound spot synthesis system is implemented as a user interface for real-time speech translation ...
pp.73-76
HCGSYMPO
(2nd)
2022-12-14
- 2022-12-16
Kagawa Onsite (Sunport Takamatsu) and Online
(Primary: On-site, Secondary: Online)
Modelling cognitive load with ocular responses during a noisy synthetic speech recall task
Mateusz Dubiel (uni.lu), Minoru Nakayama (Tokyo Tech.), Xin Wang (NII)
We applied state-space models to estimate the cognitive workload based on participants' reactions to speech signals (i...

EA, EMM, ASJ-H 2022-11-22
13:00
Online Online [Fellow Memorial Lecture] Security and Privacy Preservation for Speech Signal -- Approach from speech information hiding technology --
Masashi Unoki (JAIST) EA2022-60 EMM2022-60
Non-authentic but skillfully fabricated artificial replicas of authentic media in the real world are known as “media clo...
pp.99-104
SP, IPSJ-MUS, IPSJ-SLP 2022-06-18
15:00
Online Online Unsupervised Training of Sequential Neural Beamformer Using Blindly-separated and Non-separated Signals
Kohei Saijo, Tetsuji Ogawa (Waseda Univ.) SP2022-25
We present an unsupervised training method of the sequential neural beamformer (Seq-NBF) using the separated signals fro...
pp.110-115
EA, US, SP, SIP, IPSJ-SLP 2021-03-03
14:05
Online Online [Poster Presentation] A unified source-filter network for neural vocoder
Reo Yoneyama, Yi-Chiao Wu, Tomoki Toda (Nagoya Univ.) EA2020-69 SIP2020-100 SP2020-34
In this paper, we propose a method to develop a neural vocoder using a single network based on the source-filter theory....
pp.57-62
EA, US, SP, SIP, IPSJ-SLP 2021-03-03
14:05
Online Online [Poster Presentation] Noise-robust time-domain speech separation with basis signals for noise
Kohei Ozamoto (Tokyo Tech), Koji Iwano (TCU), Kuniaki Uto, Koichi Shinoda (Tokyo Tech) EA2020-70 SIP2020-101 SP2020-35
Recently, speech separation using deep learning has been extensively studied. TasNet, a time-domain method that directly...
pp.63-67
SIS 2020-03-06
14:40
Saitama Saitama Hall
(Cancelled but technical report was issued)
An impulsive noise detection method in noisy speech signals using the short-time Fourier transform
Sho Hasegawa, Eisuke Horita (Kanazawa Univ.) SIS2019-58
A short-time Fourier transform based method to detect impulsive noises in speech signals is presented. Especially W...
pp.119-124
SP, EA, SIP 2020-03-02
13:00
Okinawa Okinawa Industry Support Center
(Cancelled but technical report was issued)
Data augmentation for ASR system by using locally time-reversed speech -- Temporal inversion of feature sequence --
Takanori Ashihara, Tomohiro Tanaka, Takafumi Moriya, Ryo Masumura, Yusuke Shinohara, Makio Kashino (NTT) EA2019-110 SIP2019-112 SP2019-59
Data augmentation is one of the techniques to mitigate overfitting and improve robustness against several acoustic varia...
pp.53-58
SP, EA, SIP 2020-03-03
09:00
Okinawa Okinawa Industry Support Center
(Cancelled but technical report was issued)
Semi-supervised Self-produced Speech Enhancement and Suppression Based on Joint Source Modeling of Air- and Body-conducted Signals Using Variational Autoencoder
Shogo Seki, Moe Takada, Kazuya Takeda, Tomoki Toda (Nagoya Univ.) EA2019-140 SIP2019-142 SP2019-89
This paper proposes a semi-supervised method for enhancing and suppressing self-produced speech, using a variational aut...
pp.225-230
NLC, IPSJ-NL, SP, IPSJ-SLP
(Joint)
2019-12-06
13:55
Tokyo NHK Science & Technology Research Labs. [Poster Presentation] Time-Varying Complex AR speech analysis based on l2-norm regularization
Keiichi Funaki (Univ. of the Ryukyus) SP2019-41
Linear prediction (LP) is a mathematical operation estimating an all-pole spectrum from the speech signal. It is an ess...
pp.73-77
WIT, SP 2019-10-27
09:20
Kagoshima Daiichi Institute of Technology Word Recognition using word likelihood vector from speech-imagery EEG
Satoka Hirata, Yurie Iribe (Aichi Prefectural Univ.), Kentaro Fukai, Kouichi Katsurada (Tokyo Univ. of Science), Tsuneo Nitta (Waseda Univ./Toyohashi Univ. of Tech.) SP2019-29 WIT2019-28
Previous research suggests that humans manipulate the machine using their electroencephalogram called BCI (Brain Compute...
pp.69-73
EA, ASJ-H 2019-08-09
10:30
Miyagi Tohoku Univ. Study on Robust Method for Blindly Estimating Speech Transmission Index using Convolutional Neural Network with Temporal Amplitude Envelope
Suradej Doungpummet (JAIST), Jessada Karunjana (NASDA), Waree Kongprawechnon (SIIT), Masashi Unoki (JAIST) EA2019-30
We have developed a robust scheme for blindly estimating speech transmission index (STI) in noisy reverberant environmen...
pp.47-52
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan