Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
HCGSYMPO (2nd) |
2023-12-11 - 2023-12-13 |
Fukuoka |
Asia-Pacific Import Mart (Kitakyushu) (Primary: On-site, Secondary: Online)
Analysis and Recognition of Gaze Functions Based on Multimodal Nonverbal Behaviors in Conversations Ayane Tashiro, Mai Imamura (YNU), Shiro Kumano (NTT), Kazuhiro Otsuka (YNU) |
This paper presents a framework for analyzing and recognizing gaze functions in group conversations. We first defined 43...
|
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT
2023-02-22 13:00 |
Hokkaido |
Hokkaido Univ. |
Assessment System of Remote Structured Interview using Bimodal Neural Network Shengzhou Yi (UTokyo), Toshiaki Yamasaki (Talent and Assessment), Toshihiko Yamasaki (UTokyo) ITS2022-67 IE2022-84 |
A structured interview is a data collection method that relies on asking questions in a set order to eliminate subjec...
ITS2022-67 IE2022-84 pp.141-146 |
IE, ITS, ITE-AIT, ITE-ME, ITE-MMS
2022-02-22 15:20 |
Online |
Online |
A Note on Perceived Visual Content Estimation Based on Compressed Reconstruction Network Using Brain Signals While Gazing on Images Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama (Hokkaido University) |
In this paper, we propose a method to reconstruct a perceived image using brain signals obtained while gazing at images. S...
|
HCGSYMPO (2nd) |
2021-12-15 - 2021-12-17 |
Online |
Online |
Modality-Independent Emotion Recognition Based on Hyper-Hemispherical Embedding and Latent Representation Unification Using Multimodal Deep Neural Networks Seiichi Harata, Takuto Sakuma, Shohei Kato (NIT) |
This study aims to obtain a mathematical representation of emotions (an emotion space) common to modalities. The propos...
|
AI |
2020-12-11 10:55 |
Shizuoka |
Online and HAMAMATSU ACT CITY (Primary: On-site, Secondary: Online) |
An application of Differentiable Neural Architecture Search to Multimodal Neural Networks Yushiro Funoki, Satoshi Ono (Kagoshima Univ) AI2020-11 |
This paper proposes a method that designs an architecture of a deep neural network for multimodal sequential data using ...
AI2020-11 pp.52-56 |
ITE-HI, IE, ITS, ITE-MMS, ITE-ME, ITE-AIT
2020-02-28 15:25 |
Hokkaido |
Hokkaido Univ. (Cancelled but technical report was issued) |
Make Your Presentation Better: Oral Presentation Support System using Linguistic and Acoustic Features Shengzhou Yi (UTokyo), Takuya Yamamoto, Osamu Yamamoto, Yukiyoshi Katsumizu, Hiroshi Yumoto (P&I), Xueting Wang, Toshihiko Yamasaki (UTokyo) ITS2019-53 IE2019-91 |
In order to help presenters to improve their oral presentation skills, we propose a support system to provide impression...
ITS2019-53 IE2019-91 pp.317-322 |
HCS |
2019-08-23 16:15 |
Osaka |
Jikei Institute |
Estimating Exchange-level Annotations with Multitask Learning for Multimodal Dialogue Systems Yuki Hirano, Shogo Okada (JAIST), Haruto Nishimoto, Kazunori Komatani (Osaka Univ.) HCS2019-32 |
This study presents multimodal computational modeling for estimating three labels: user's interest label, user's sentim...
HCS2019-32 pp.15-20 |
MVE, ITE-HI, ITE-SIP
2019-06-10 10:30 |
Tokyo |
|
Impression Prediction of Oral Presentation Using LSTM with Dot-product Attention Mechanism Shengzhou Yi, Xueting Wang, Toshihiko Yamasaki (UTokyo) MVE2019-1 |
For automatically evaluating oral presentation, we propose an end-to-end system to predict audience’s impression on spee...
MVE2019-1 pp.1-6 |
PRMU, SP |
2018-06-28 14:40 |
Nagano |
|
Study of improving speech intelligibility for glossectomy patients via voice conversion with sound and lip movement. Seiya Ogino, Hiroki Murakami, Sunao Hara, Masanobu Abe (Okayama Univ.) PRMU2018-23 SP2018-3 |
In this paper, we propose the multimodal voice conversion based on Deep Neural Network using audio and lip movement info...
PRMU2018-23 SP2018-3 pp.7-12 |
HCS |
2017-08-21 14:10 |
Tokyo |
Seikei University |
Meeting Extracts for Group Discussions using Multimodal Convolutional Neural Networks Fumio Nihei, Yukiko Nakano, Yutaka Takase (Seikei Univ.) HCS2017-57 |
With the goal of extracting meeting minutes from group discussion corpus, this study proposes multimodal fusion models b...
HCS2017-57 pp.55-59 |
IBISML |
2016-11-17 14:00 |
Kyoto |
Kyoto Univ. |
[Poster Presentation] Analysis of Multimodal Deep Neural Networks -- Towards the elucidation of the modality integration mechanism -- Yoh-ichi Mototake, Takashi Ikegami (Univ. of Tokyo) IBISML2016-97
With the rapid development of information technology in recent years, several machine learning algorithms that integra...
IBISML2016-97 pp.369-373 |