Presentation 2000/1/20
Virtual Object Synthesis from Omnidirectional Views by Adaptively Selected Reference Images in Mixed Reality
Toshihiro KOBAYASHI, Long QUAN, Yuichi OHTA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) We propose a linear method that synthesizes a virtual object from omnidirectional views, using many reference images within the image-based rendering framework. The entire process, both synthesizing the virtual object and recovering the observer's camera pose needed for reference-image selection, is based only on 2D information and is implemented as a linear method, enabling real-time fusion of the real and virtual worlds.
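The abstract describes adaptively selecting reference images according to the recovered observer camera pose. As a rough illustration (not the authors' actual algorithm), one simple selection criterion is to rank reference views by the angular closeness of their camera viewing directions to the observer's current direction; the sketch below assumes unit-normalizable direction vectors and a hypothetical `select_reference_images` helper:

```python
import numpy as np

def select_reference_images(observer_dir, ref_dirs, k=2):
    """Pick the k reference views whose camera viewing directions are
    closest (by angle) to the observer's current viewing direction.
    This is an illustrative nearest-direction criterion, not the
    paper's method."""
    observer_dir = observer_dir / np.linalg.norm(observer_dir)
    ref_dirs = ref_dirs / np.linalg.norm(ref_dirs, axis=1, keepdims=True)
    # Cosine similarity: larger value = smaller angle to the observer.
    sims = ref_dirs @ observer_dir
    return np.argsort(-sims)[:k]

# Example: 8 reference cameras spaced evenly on a circle around the object.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)
nearest = select_reference_images(np.array([1.0, 0.1, 0.0]), dirs)
print(nearest)  # indices of the two closest reference views
```

In a real-time mixed-reality loop, this selection would run every frame after the linear pose recovery, so that rendering always interpolates from the views nearest the observer.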
Keyword(in Japanese) (See Japanese page)
Keyword(in English) mixed reality / image-based rendering / affine view / camera pose / linear method
Paper # PRMU99-195
Date of Issue

Conference Information
Committee PRMU
Conference Date 2000/1/20 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Pattern Recognition and Media Understanding (PRMU)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Virtual Object Synthesis from Omnidirectional Views by Adaptively Selected Reference Images in Mixed Reality
Sub Title (in English)
Keyword(1) mixed reality
Keyword(2) image-based rendering
Keyword(3) affine view
Keyword(4) camera pose
Keyword(5) linear method
1st Author's Name Toshihiro KOBAYASHI
1st Author's Affiliation College of Engineering Systems, University of Tsukuba
2nd Author's Name Long QUAN
2nd Author's Affiliation CNRS-GRAVIR-INRIA
3rd Author's Name Yuichi OHTA
3rd Author's Affiliation College of Engineering Systems, University of Tsukuba
Date 2000/1/20
Paper # PRMU99-195
Volume (vol) vol.99
Number (no) 574
Page pp.-
#Pages 8
Date of Issue