Presentation 2014-06-14
Automatically-Generated Dictionary-based Emotion Recognition from Tweet Speech
Eri YANASE, Hiromitsu NISHIZAKI, Yoshihiro SEKIGUCHI
Abstract(in English) We study the use of linguistic information for classifying emotion in spoken utterances. In Japanese, few studies have addressed emotion recognition using linguistic features extracted from spoken utterances. This paper describes an emotion recognition method based on the automatic construction of an emotional words/phrases dictionary from Twitter, together with an emotion recognition experiment using that dictionary. The dictionary is built by collecting a large number of tweets, from which emotional words/phrases are extracted and registered. For the emotion recognition experiment, we prepared two types of test data: text tweets and spoken tweets. For the spoken tweets, we tried two methods of extracting emotional words/phrases: one uses an automatic speech recognizer, and the other uses a spoken term detection technique. The experimental results showed that emotion recognition accuracy was about 40% for the text tweets, and accuracy for the spoken tweets was slightly lower than for the text tweets.
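The dictionary-based recognition described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's actual dictionary or scoring rule: the entries, labels, and majority-vote decision below are hypothetical stand-ins for the automatically constructed Twitter dictionary.

```python
# Minimal sketch of dictionary-based emotion recognition: match an
# utterance's words/phrases against an emotion dictionary and label the
# utterance with the emotion that receives the most matches.
from collections import Counter

# Hypothetical emotional words/phrases dictionary. In the paper this is
# constructed automatically from a large collection of tweets.
EMOTION_DICT = {
    "happy": "joy",
    "great": "joy",
    "sad": "sadness",
    "cry": "sadness",
    "angry": "anger",
    "hate": "anger",
}

def classify_emotion(tokens):
    """Return the emotion with the most dictionary matches, or None."""
    votes = Counter(EMOTION_DICT[t] for t in tokens if t in EMOTION_DICT)
    if not votes:
        return None
    return votes.most_common(1)[0][0]

print(classify_emotion("i am so happy today great news".split()))  # joy
```

For spoken tweets, the token sequence would come either from an automatic speech recognizer's transcript or from spoken term detection hits on the dictionary entries, which is the comparison the paper's experiment makes.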
Keyword(in English) Emotion recognition / linguistic information / speech recognition / spoken term detection
Paper # NLC2014-4

Conference Information
Committee NLC
Conference Date 2014/6/7 (1 day)

Paper Information
Registration To Natural Language Understanding and Models of Communication (NLC)
Language JPN
Title (in English) Automatically-Generated Dictionary-based Emotion Recognition from Tweet Speech
Keyword(1) Emotion recognition
Keyword(2) linguistic information
Keyword(3) speech recognition
Keyword(4) spoken term detection
1st Author's Name Eri YANASE
1st Author's Affiliation Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi
2nd Author's Name Hiromitsu NISHIZAKI
2nd Author's Affiliation Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi
3rd Author's Name Yoshihiro SEKIGUCHI
3rd Author's Affiliation Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi
Date 2014-06-14
Paper # NLC2014-4
Volume (vol) 114
Number (no) 81
Page pp.-
#Pages 6