IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers (Available on Advance Programs)
Sorted by: Date Descending
Results 1 - 20 of 79

Each entry below lists: Committee | Date Time | Place, followed by the paper title, the authors and affiliations, an abstract excerpt, and the paper number and pages.

PRMU, MVE, VRSJ-SIG-MR, IPSJ-CVIM | 2024-01-26 15:34 | Kanagawa, Keio Univ. (Hiyoshi Campus)
Comparison of Imbalanced Data Handling Techniques in Emotion Estimation of Expressway Service Area Workers using Stacking Ensemble Learners for Complex Decision Boundaries
Akihiro Sato, Satoki Ogiso, Ryosuke Ichikari, Takeshi Kurata (AIST)
Estimating emotions of workers is promising to promote health and productivity management, while it has difficulty in c... [more]
PRMU2023-47, pp.40-45

HCS, CNR | 2023-11-05 14:40 | Tokyo, Kogakuin University (Primary: On-site, Secondary: Online)
Development of a human-friendly robot using a behavioral model based on emotion recognition
Chika Kanesaki, Kenji Matsuzaka (NIT, UC)
In smooth communication between humans, the recognition and understanding of nonverbal information caused by emotions is... [more]
CNR2023-14 HCS2023-76, pp.38-44

HCS | 2023-08-25 16:20 | Hyogo (Primary: On-site, Secondary: Online)
Modeling of Expressed Emotions in Puzzle Tasks and Estimation of the Emotion based on Laban Movement Analysis
Akari Kubota, Sota Fujiwara (Kwansei Gakuin Univ.), Saizo Aoyagi (Komazawa Univ.), Michiya Yamamoto (Kwansei Gakuin Univ.)
Research to recognize and estimate human emotions using cues such as facial expressions has been vigorously conducted. By foc... [more]
HCS2023-48, pp.29-34

SP, IPSJ-MUS, IPSJ-SLP | 2023-06-23 13:50 | Tokyo (Primary: On-site, Secondary: Online)
Speech Emotion Recognition based on Emotional Label Sequence Estimation Considering Phoneme Class Attribute
Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.)
Recently, many researchers have tackled speech emotion recognition (SER), which predicts emotion conveyed by speech. In ... [more]
SP2023-9, pp.42-47

SP, IPSJ-MUS, IPSJ-SLP | 2023-06-23 13:50 | Tokyo (Primary: On-site, Secondary: Online)
[Poster Presentation] Generation of colored subtitle images based on emotional information of speech utterances
Fumiya Nakamura (Kobe Univ.), Ryo Aihara (Mitsubishi Electric), Ryoichi Takashima, Tetsuya Takiguchi (Kobe Univ.), Yusuke Itani (Mitsubishi Electric)
Conventional automatic subtitle generation systems based on speech recognition do not take into account paralinguistic i... [more]
SP2023-11, pp.54-59

NC, MBE (Joint) | 2023-03-15 13:50 | Tokyo, The Univ. of Electro-Communications (Primary: On-site, Secondary: Online)
Comparison of classification accuracy by frequency band restriction on emotion recognition from EEG
Raiki Yamane, Shin'ichiro Kanoh (SIT)
Accuracy of emotion classification in deep learning when frequency band restriction is used as a preprocessing method fo... [more]
NC2022-115, pp.127-132

IPSJ-SLDM, RECONF, VLD | 2023-01-24 10:55 | Kanagawa, Raiosha, Hiyoshi Campus, Keio University (Primary: On-site, Secondary: Online)
Study on Wireless Transmission Data Reduction Method and Its Implementation in Emotion Recognition System Using Electroencephalogram
Yuuki Harada, Daisuke Kanemoto, Tetsuya Hirose (Osaka Univ.)
Recently, there has been a great deal of research on emotion recognition and its application using electroencephalogram.... [more]
VLD2022-66 RECONF2022-89, pp.40-44

HCS | 2022-10-27 14:20 | Online
Do mood states contribute to facial expression perception?
Qi Fan (Waseda Univ.), Kyoko Ito (Kyoto Tachibana Univ./Osaka Univ.), Fusako Koshikawa (Waseda Univ.)
This study intends to investigate how mood states affect facial expression perception. The study examined how depression... [more]
HCS2022-51, pp.18-23

HCS | 2022-08-26 11:00 | Hyogo (Primary: On-site, Secondary: Online)
Analysis of Relationships among Body Movement, Eye Information, and Physiological Indices in Emotion Expressions during Video Watching
Hayate Yamada, Fumiya Kobayashi, Souta Fujiwara (Kwansei Gakuin Univ.), Saizo Aoyagi (Komazawa Univ.), Michiya Yamamoto (Kwansei Gakuin Univ.)
With the recent progress of human sensing and AI technologies, many studies on recognition and estimation of human inter... [more]
HCS2022-36, pp.3-8

HCGSYMPO (2nd) | 2021-12-15 to 2021-12-17 | Online
Emotion discrimination model for learning spatio-temporal frequency information of EEG using SNN
Rio Kanda, Chika Sugimoto (Yokohama National Univ.)
In order to achieve high accuracy in emotion recognition based on EEG, it is considered effective to simultaneously lear... [more]

HCGSYMPO (2nd) | 2021-12-15 to 2021-12-17 | Online
Modality-Independent Emotion Recognition Based on Hyper-Hemispherical Embedding and Latent Representation Unification Using Multimodal Deep Neural Networks
Seiichi Harata, Takuto Sakuma, Shohei Kato (NIT)
This study aims to obtain a mathematical representation of emotions (an emotion space) common to modalities. The propos... [more]

NLC, IPSJ-NL, SP, IPSJ-SLP | 2021-12-02 15:20 | Online
Improvement of multilingual speech emotion recognition by normalizing features using CRNN
Jinhai Qi, Motoyuki Suzuki (OIT)
In this research, a new multilingual emotion recognition method by normalizing features using CRNN has been proposed. We... [more]
NLC2021-22 SP2021-43, pp.22-26

AI | 2020-12-11 10:55 | Shizuoka, Online and HAMAMATSU ACT CITY (Primary: On-site, Secondary: Online)
An application of Differentiable Neural Architecture Search to Multimodal Neural Networks
Yushiro Funoki, Satoshi Ono (Kagoshima Univ)
This paper proposes a method that designs an architecture of a deep neural network for multimodal sequential data using ... [more]
AI2020-11, pp.52-56

NLC, IPSJ-NL, SP, IPSJ-SLP | 2020-12-02 13:50 | Online
Multi-Modal Emotion Recognition by Integrating of Acoustic and Linguistic Features
Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.)
In recent years, the advanced technique of deep learning has improved the performance of Speech Emotion Recognition as ... [more]
NLC2020-14 SP2020-17, pp.7-12

IE, IMQ, MVE, CQ (Joint) | 2020-03-05 13:30 | Fukuoka, Kyushu Institute of Technology (Cancelled but technical report was issued)
A Method of Automatic Generation of YouTube Thumbnails and Its Evaluations
Yuki Kakui, Akari Shimono, Toshihiko Yamasaki (UTokyo)
Recently, YouTubers are becoming more and more popular, and how to make thumbnails attractive is important to attract vi... [more]
IMQ2019-46 IE2019-128 MVE2019-67, pp.157-162

HPB (2nd) | 2020-02-14 16:10 | Kanagawa
Performance evaluation of bullying voice judgement system using emotion analysis based on SVM
Takahiro Ueno, Shintaro Mori, Masayoshi Ohashi (Fukuoka Univ.)
Bullying is a serious problem all over the world, and bullying detection is significant to protect victims. In this pape... [more]

HCS | 2020-01-26 16:40 | Oita, Room 407, J:COM HorutoHall OITA (Oita)
Children's emotion recognition based on vocal cues -- A review of the literature on vocal emotion recognition --
Naomi Watanabe, Tessei Kobayashi (NTT)
(To be available after the conference date) [more]
HCS2019-82, pp.163-166

MVE, IPSJ-CVIM | 2020-01-24 10:40 | Nara
[Short Paper] CNN-based Music Emotion and Theme Recognition Featuring Shallow Architecture
Shengzhou Yi, Xueting Wang, Toshihiko Yamasaki (UTokyo)
We propose several convolutional neural networks to recognize emotions and themes conveyed by the audio tracks. We appli... [more]
MVE2019-32, pp.99-100

HIP | 2019-12-19 14:00 | Miyagi, RIEC, Tohoku University
Mathematical Representation of Emotion by Combining Recognition and Unification Tasks Using Multimodal Deep Neural Networks
Seiichi Harata, Takuto Sakuma, Shohei Kato (NITech)
To emulate human emotions in robots, the mathematical representation of emotion is important for all components of affec... [more]
HIP2019-65, pp.1-6

HCGSYMPO (2nd) | 2019-12-11 to 2019-12-13 | Hiroshima, Hiroshima-ken Joho Plaza (Hiroshima)
Crosslingual Emotion Recognition using English and Japanese Speech Data
Yuta Nirasawa, Atom Scotto, Ryota Sakuma, Yuki Hujita, Keiich Zempo (Tsukuba Univ.)
Since research in Speech Emotion Recognition (SER) is performed with mostly English data, applying these models to Japan... [more]

Search results can be downloaded in Text, pLaTeX, CSV, or BibTeX format.
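As a rough illustration of the BibTeX option (not the exact output of the export tool; the entry key and field layout here are assumptions based on common @techreport conventions), the first result above could be cited along these lines:

  @techreport{sato2024imbalanced,
    author      = {Akihiro Sato and Satoki Ogiso and Ryosuke Ichikari and Takeshi Kurata},
    title       = {Comparison of Imbalanced Data Handling Techniques in Emotion Estimation of Expressway Service Area Workers using Stacking Ensemble Learners for Complex Decision Boundaries},
    institution = {IEICE},
    number      = {PRMU2023-47},
    pages       = {40--45},
    year        = {2024},
    month       = jan
  }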
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan