Presentation 2022-12-23
Detecting emotion in speech expressing incongruent emotional cues through voice and content
Mariko Kikutani (Kanazawa Univ.), Machiko Ikemoto (Doshisha Univ.)
Abstract (in English) This research investigated how we detect emotion in speech when the emotional cues in the sound of the voice do not match the semantic content. In the authors' previous research, Japanese participants heard a voice expressing anger, happiness, or sadness while saying "I'm angry," "I'm pleased," or "I'm sad." They rated how much they agreed that the speaker was expressing each of the three emotions. Among the three emotions, the voice was most important for the perception of sadness; for the perception of anger and happiness, participants prioritized the information from the content. Building on that study, the present research added facial expressions to those incongruent voice stimuli. The facial expression of each stimulus matched the emotion expressed in either the voice or the content. Even with facial expressions present, the perception of anger and happiness prioritized the emotional information in the content over that in the voice, whereas the perception of sadness relied more on the emotion in the voice. Importantly, the magnitude of the facial expression's impact did not differ depending on which modality, voice or content, showed the same emotion as the face. Participants perceived a stronger degree of emotion from stimuli in which the content and the face showed the same emotion than from those in which the voice and the face showed the same emotion.
Keyword (in English) emotion / modality / incongruent expression / language
Paper # HIP2022-69
Date of Issue 2022-12-15 (HIP)

Conference Information
Committee HIP
Conference Date 2022/12/22 (2 days)
Place (in English) Research Institute of Electrical Communication
Topics (in English) Multi-modal, KANSEI information processing, Vision and its application, Lifelong sciences, Human information processing
Chair Yuji Wada (Ritsumeikan Univ.)
Vice Chair Hiroyuki Umemoto (AIST) / Sachiko Kiyokawa (Nagoya Univ.)
Secretary Hiroyuki Umemoto (Kyushu Univ.) / Sachiko Kiyokawa (NICT)
Assistant Ippei Negishi (Kanazawa Inst. of Tech.) / Daisuke Tanaka (Tottori Univ.)

Paper Information
Registration To Technical Committee on Human Information Processing
Language JPN
Title (in English) Detecting emotion in speech expressing incongruent emotional cues through voice and content
Sub Title (in English) Investigation on dominant modality
Keyword(1) emotion
Keyword(2) modality
Keyword(3) incongruent expression
Keyword(4) language
1st Author's Name Mariko Kikutani
1st Author's Affiliation Kanazawa University (Kanazawa Univ.)
2nd Author's Name Machiko Ikemoto
2nd Author's Affiliation Doshisha University (Doshisha Univ.)
Date 2022-12-23
Paper # HIP2022-69
Volume (vol) vol.122
Number (no) HIP-326
Page pp.59-64 (HIP)
#Pages 6
Date of Issue 2022-12-15 (HIP)