Presentation 1999/5/21
Multi-Modal Dialogue Corpus
Takuya KANEKO, Shun ISHIZAKI
Abstract (in Japanese) (See Japanese page)
Abstract (in English) Prosody, eye movement, gesture, and facial expressions provide information as essential to understanding spoken discourse as phonological, morphological, syntactic, and semantic cues do. For computer-based interpretation of human face-to-face dialogue, it is therefore important to construct corpora that store this variety of information. The Multi-Modal Dialogue Corpus we are now constructing holds this information in a multimedia format, and we have been developing methods for annotating each modality. This report describes the process of constructing the Multi-Modal Dialogue Corpus and the annotation tasks we currently face.
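The annotation method itself is not detailed on this page, but as a purely illustrative sketch, a time-aligned tier structure is one common way to store parallel verbal and nonverbal annotations. In the Python sketch below, the class names (Annotation, DialogueSegment), the tier names, and the example values are all our own assumptions, not the scheme the authors describe.

    # Hypothetical sketch of a time-aligned multi-modal annotation record.
    # Tier names and fields are illustrative assumptions, not the actual
    # annotation scheme of the corpus described in the abstract above.
    from dataclasses import dataclass, field

    @dataclass
    class Annotation:
        start: float   # segment start time in seconds
        end: float     # segment end time in seconds
        tier: str      # modality: "transcript", "prosody", "gaze", ...
        label: str     # annotation value on that tier

    @dataclass
    class DialogueSegment:
        speaker: str
        annotations: list[Annotation] = field(default_factory=list)

        def tier(self, name: str) -> list[Annotation]:
            """Return all annotations on one modality tier, in time order."""
            return sorted((a for a in self.annotations if a.tier == name),
                          key=lambda a: a.start)

    # Example: one utterance with parallel verbal and nonverbal tiers.
    seg = DialogueSegment(speaker="A", annotations=[
        Annotation(0.00, 1.20, "transcript", "well, I think so"),
        Annotation(0.00, 0.40, "prosody", "rising pitch"),
        Annotation(0.10, 0.90, "gaze", "toward listener"),
        Annotation(0.30, 1.00, "gesture", "head nod"),
        Annotation(0.00, 1.20, "facial", "smile"),
    ])
    print([a.label for a in seg.tier("gaze")])

With each modality on its own tier, prosody, gaze, gesture, and facial-expression annotations can overlap in time with the transcript rather than being forced into a single sequence; the corpus may of course use a different representation.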
Keyword (in Japanese) (See Japanese page)
Keyword (in English) dialogue / corpus / discourse / prosody / nonverbal / annotation
Paper # TL99-3
Date of Issue

Conference Information
Committee TL
Conference Date 1999/5/21 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Thought and Language (TL)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Multi-Modal Dialogue Corpus
Sub Title (in English)
Keyword(1) dialogue
Keyword(2) corpus
Keyword(3) discourse
Keyword(4) prosody
Keyword(5) nonverbal
Keyword(6) annotation
1st Author's Name Takuya KANEKO
1st Author's Affiliation Graduate School of Media and Governance, Keio University
2nd Author's Name Shun ISHIZAKI
2nd Author's Affiliation Graduate School of Media and Governance, Keio University
Date 1999/5/21
Paper # TL99-3
Volume (vol) vol.99
Number (no) 76
Page pp.-
#Pages 7
Date of Issue