Presentation 2003/3/11
Modeling the Relation Between Speech Acoustics and 3D Face Motion
BARBOSA Adriano VILELA, YEHIA Hani CAMILLE, VATIKIOTIS-BATESON Eric
Abstract(in English) This work examines the relation between the audible and visible events occurring during speech. Speech acoustics and face motion data are acquired simultaneously and represented parametrically. Mathematical models incorporating these parameters are used to estimate the relation between the two measurement domains. Specifically, face motions are estimated from the acoustics.
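The abstract, together with the NARMAX keyword listed below, describes estimating face-motion parameters from parametric acoustic features. What follows is a minimal, hypothetical Python sketch of that kind of estimator, not the authors' implementation: a simplified NARX-style polynomial model (omitting the moving-average noise terms of a full NARMAX model) fitted by ordinary least squares. The feature dimensions, lag orders, and synthetic data are illustrative assumptions.

# Hypothetical sketch: map "acoustic" parameters to "face motion" parameters
# with a NARX-style polynomial regressor fitted by least squares.
import numpy as np

def build_regressors(u, y, nu=2, ny=2):
    """Stack lagged inputs u and past outputs y (plus squared terms)
    into a polynomial NARX-style regressor matrix."""
    T = len(u)
    rows = []
    for t in range(max(nu, ny), T):
        lags = np.concatenate([u[t - nu:t].ravel(), y[t - ny:t].ravel()])
        # constant + linear terms + squared terms as a simple polynomial expansion
        rows.append(np.concatenate([[1.0], lags, lags ** 2]))
    return np.asarray(rows)

# Synthetic stand-ins for acoustic parameters (u) and face-motion parameters (y).
rng = np.random.default_rng(0)
T = 500
u = rng.standard_normal((T, 3))
y = np.zeros((T, 2))
for t in range(2, T):  # some nonlinear dependence for the model to recover
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1, :2] + 0.1 * u[t - 2, :2] ** 2

# Fit: ordinary least squares on the polynomial regressors.
X = build_regressors(u, y)
Y = y[2:]  # targets aligned with the regressor rows
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate: correlation between measured and estimated face-motion parameters.
Y_hat = X @ theta
for k in range(Y.shape[1]):
    r = np.corrcoef(Y[:, k], Y_hat[:, k])[0, 1]
    print(f"output {k}: correlation = {r:.3f}")

Because the synthetic system lies inside the chosen model class, the printed correlations should be close to 1; with real acoustic and motion-capture parameters the same regressor construction would simply be applied to the measured time series.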
Keyword(in English) Speech acoustics / facial motion / audiovisual speech / NARMAX models
Paper # HIP2002-65

Conference Information
Committee HIP
Conference Date 2003/3/11 (1 day)

Paper Information
Registration To Human Information Processing (HIP)
Language ENG
Title (in English) Modeling the Relation Between Speech Acoustics and 3D Face Motion
Keyword(1) Speech acoustics
Keyword(2) facial motion
Keyword(3) audiovisual speech
Keyword(4) NARMAX models
1st Author's Name BARBOSA Adriano VILELA
1st Author's Affiliation ATR Human Information Science Laboratories:CEFALA-UFMG-Universidade Federal de Minas Gerais
2nd Author's Name YEHIA Hani CAMILLE
2nd Author's Affiliation CEFALA-UFMG-Universidade Federal de Minas Gerais
3rd Author's Name VATIKIOTIS-BATESON Eric
3rd Author's Affiliation ATR Human Information Science Laboratories:University of British Columbia
Date 2003/3/11
Paper # HIP2002-65
Volume (vol) vol.102
Number (no) 735
Page pp.-
#Pages 6