Presentation 2003/1/17
Video Sequence Translation of Multi-Speakers Conversation Scene
Akinobu MAEJIMA, Shigeo MORISHIMA, Satoshi NAKAMURA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) In this paper, we describe a technique for video-sequence translation of a conversation scene involving two or more speakers. In such a scene, the facial movement of each person in the video sequence is estimated, and an utterance judgment is performed for every speaker appearing in the image. By replacing the mouth region with a synthesized image synchronized with separately prepared audio, lip synchronization to the translated utterance in another language is achieved.
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Automatic Face Tracking / Utterance Judging / Lip-Synchronization / Synthesized Video Animation
Paper # HCS2002-30
Date of Issue

Conference Information
Committee HCS
Conference Date 2003/1/17 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Human Communication Science (HCS)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Video Sequence Translation of Multi-Speakers Conversation Scene
Sub Title (in English)
Keyword(1) Automatic Face Tracking
Keyword(2) Utterance Judging
Keyword(3) Lip-Synchronization
Keyword(4) Synthesized Video Animation
1st Author's Name Akinobu MAEJIMA
1st Author's Affiliation Faculty of Engineering, Seikei University
2nd Author's Name Shigeo MORISHIMA
2nd Author's Affiliation Faculty of Engineering, Seikei University
3rd Author's Name Satoshi NAKAMURA
3rd Author's Affiliation Faculty of Engineering, Seikei University
Date 2003/1/17
Paper # HCS2002-30
Volume (vol) vol.102
Number (no) 598
Page pp.-
#Pages 6
Date of Issue