Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
Committee: IA, ICSS
Date: 2022-06-23 15:55
Place: Nagasaki, Univ. of Nagasaki (Primary: On-site, Secondary: Online)
Title: Development and Evaluation of Indoor Activity Tracking System Using Sensor Fusion with Audio and 3D Point Cloud Observation
Authors: Mana Nishigaki, Hiroshi Yamamoto (Ritsumei Univ.)
Abstract: With the current spread of COVID-19, there has been growing interest in monitoring the behavior of people to control the s...
Paper #: IA2022-7 ICSS2022-7, pp.37-42
Committee: CQ, CBE (Joint)
Date: 2022-01-27 17:20
Place: Ishikawa, Kanazawa (Ishikawa Pref.) (Primary: On-site, Secondary: Online)
Title: Proposal and Validation of Packet Layer Quality Estimation Model for Voice Call Applications
Authors: Itsuki Okada, Takanori Hayashi (HIT)
Abstract: Quality visualization and quality control during service are important to ensure that voice call applications are used w...
Paper #: CQ2021-86, pp.56-61
Committee: EA, ASJ-H
Date: 2021-07-15 09:30
Place: Online
Title: Implementation details of interactive and real-time tools for sound visualization and processing using MATLAB
Authors: Hideki Kawahara (Wakayama Univ.)
Abstract: Objective measurement of speech data acquisition and presentation is crucial for assuring reproducibility and reusabil...
Paper #: EA2021-1, pp.1-5
Committee: MVE, IMQ, IE, CQ (Joint)
Date: 2021-03-02 15:30
Place: Online
Title: A Study on Packet Layer Quality Evaluation Model for Voice Call Application
Authors: Itsuki Okada, Tomoya Seno, Takanori Hayashi (Hiroshima Institute of Technology)
Abstract: Quality visualization and quality control during service are important to ensure that voice call applications are used w...
Paper #: CQ2020-114, pp.34-37
Committee: WIT, SP
Date: 2019-10-26 16:20
Place: Kagoshima, Daiichi Institute of Technology
Title: Application of interactive and real-time tools to acoustic analysis for assessment of dysphonia
Authors: Hideki Kawahara (Wakayama Univ.), Ken-Ichi Sakakibara (Health Science Univ. Hokkaido), Kenta Wakasa (ATLUS), Hiroko Terasawa (Univ. Tsukuba)
Abstract: We introduce a real-time and interactive tool for visualizing voice-source attributes. The tool provides a simplified ca...
Paper #: SP2019-24 WIT2019-23, pp.39-43
Committee: KBSE
Date: 2019-03-02 12:30
Place: Kyoto, Doshisha University Kambaikan
Title: Toward Automatic Generation of Meeting Minutes by Visualization of Voice Information
Authors: Yusuke Nakamura, Takeshi Nakase, Satoshi Yajima, Yutaka Matsuno (Nihon Univ.)
Abstract: Today, minutes of meetings in organizations such as companies are important. However, it is impossible to see the mi...
Paper #: KBSE2018-54, pp.1-5
Committee: PRMU, SP
Date: 2018-06-29 10:30
Place: Nagano
Title: Analysis of speech-to-texture sentiment association characteristics
Authors: Win Thuzar Kyaw, Yoshinori Sagisaka (Waseda Univ.)
Abstract: Aiming at speech visualization using textures or finding texture generation scheme from sentiment information embedded i...
Paper #: PRMU2018-30 SP2018-10, pp.47-52
Committee: LOIS
Date: 2015-03-06 16:30
Place: Okinawa
Title: A Study of Multi-Modal Speech Visualization for Deaf and Hard of Hearing People Support
Authors: Yusuke Toba, Hiroyasu Horiuchi, Shinsuke Matsumoto, Sachio Saiki, Masahide Nakamura (Kobe Univ.), Tomohito Uchino, Tomohiro Yokoyama, Yasuhiro Takebayashi (School for the Deaf, University of Tsukuba)
Abstract: Although deaf and hard of hearing (D/HH) people have various communication ways such as sign language, conversation by w...
Paper #: LOIS2014-94, pp.191-196
Committee: WIT, SP
Date: 2011-10-07 10:30
Place: Tokyo, TFT Bldg.
Title: Investigation of the normalized articulatory space for audio-visual real-time feedback of vowel speech and its application
Authors: Tadashi Sakata, Yuya Saeki, Wataru Shibata, Yuichi Ueda (Kumamoto Univ.)
Abstract: In speech learning of deaf children or speech rehabilitation of patients with dysarthria, an effective visual feedback o...
Paper #: SP2011-61 WIT2011-43, pp.55-60
Committee: MVE, CQ, MoNA (Joint)
Date: 2006-01-27 08:20
Place: Oita, Beppu Onsen (Resorpia Beppu Hotel)
Title: A study to support communication using sound visualization
Authors: Atsunobu Kimura, Masayuki Ihara, Minoru Kobayashi (NTT)
Abstract: When talking to another person face to face, we can estimate well what the other person hears. Unfortunately, this abilit...
Paper #: MVE2005-62, pp.33-38