Presentation 2002/9/21
Face Synthesis for Anthropomorphic Spoken Dialog Agent System
Tatsuo YOTSUKURA, Shigeo MORISHIMA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) One style of multi-modal communication between human and machine is to have a virtual human, or agent, appear on the computer terminal; the agent should be able to understand and express not only linguistic information but also non-verbal information, much as in face-to-face human-to-human communication. A very important factor in making an agent look believable or alive is how precisely it can duplicate a real human's facial expressions and the impressions they convey. In communication applications using an agent in particular, real-time processing with low delay is indispensable. In this paper, we describe the current state of our face image synthesis technology.
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Agent / Face image synthesis / Lip-sync / FACS
Paper # HCS2002-22
Date of Issue

Conference Information
Committee HCS
Conference Date 2002/9/21 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Human Communication Science (HCS)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Face Synthesis for Anthropomorphic Spoken Dialog Agent System
Sub Title (in English)
Keyword(1) Agent
Keyword(2) Face image synthesis
Keyword(3) Lip-sync
Keyword(4) FACS
1st Author's Name Tatsuo YOTSUKURA
1st Author's Affiliation Faculty of Engineering, Seikei University
2nd Author's Name Shigeo MORISHIMA
2nd Author's Affiliation Faculty of Engineering, Seikei University
Date 2002/9/21
Paper # HCS2002-22
Volume (vol) vol.102
Number (no) 342
Page pp.-
#Pages 6
Date of Issue