Presentation | 2018-01-18 | Generating Video of Human Motion Clone from a Still Image using Shape and Color Refinement Networks | Yusuke Miyawaki, Kazuaki Nakamura, Seiko Myojin, Naoko Nitta, Noboru Babaguchi |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | In this paper, we propose a method for generating a video in which a person virtually performs a motion that he/she has not actually performed in the real world. We refer to such a motion as a *human motion clone*. Some existing methods generate a video of a human motion clone in the following two steps, using a still image called the *target image*, which shows the entire body of the target person, and a *reference video* in which another person performs the target motion with his/her entire body: first, the contour of the human region in the target image is manipulated so that its shape matches the contour in the reference video; second, the texture of the original target image is warped onto the manipulated human region by 2D affine transformations. However, this procedure often yields unnatural results due to inaccurate correspondence between the human contours in the reference video and the target image. Moreover, the texture is sometimes enlarged or reduced too much by misestimated affine transformations. To overcome these problems, the proposed method employs a shape refinement network (SRN), which refines the inaccurate correspondence between the contours, and a color refinement network (CRN), which refines the unsuccessfully warped textures. Our experimental results show that distortion of the human contours and textures appearing in the resulting video is partially reduced. |
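The 2D affine texture warping mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic inverse-mapping affine warp in NumPy (the function name `warp_affine` and nearest-neighbor sampling are assumptions for illustration), showing how a source texture is mapped onto a new region by a 2x2 matrix `A` and translation `t`:

```python
import numpy as np

def warp_affine(src, A, t, out_shape):
    """Warp a 2D texture by the affine map x_out = A @ x_src + t.

    Uses inverse mapping: each output pixel is traced back to a source
    pixel via A^-1, then sampled with nearest-neighbor interpolation.
    Pixels that map outside the source image are left at zero.
    """
    H, W = out_shape
    A_inv = np.linalg.inv(np.asarray(A, dtype=float))
    t = np.asarray(t, dtype=float).reshape(2, 1)

    # Grid of output pixel coordinates, as (x, y) column vectors.
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=0).astype(float)

    # Inverse map: x_src = A^-1 @ (x_out - t), rounded to nearest pixel.
    src_xy = A_inv @ (coords - t)
    sx = np.rint(src_xy[0]).astype(int)
    sy = np.rint(src_xy[1]).astype(int)

    out = np.zeros((H, W), dtype=src.dtype)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out[ys.ravel()[valid], xs.ravel()[valid]] = src[sy[valid], sx[valid]]
    return out
```

A misestimated `A` (e.g. a scale factor that is too large) enlarges or shrinks the texture, which is exactly the artifact the CRN in the paper is designed to refine.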
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | human motion clone / reference video / target image / Shape Refinement Network / Color Refinement Network |
Paper # | PRMU2017-115,MVE2017-36 |
Date of Issue | 2018-01-11 (PRMU, MVE) |
Conference Information | |
Committee | PRMU / MVE / IPSJ-CVIM |
---|---|
Conference Date | 2018/1/18 (2 days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | |
Chair | Shinichi Sato(NII) / Yoshinari Kameda(Univ. of Tsukuba) |
Vice Chair | Hironobu Fujiyoshi(Chubu Univ.) / Yoshihisa Ijiri(Omron) / Kenji Mase(Nagoya Univ.) |
Secretary | Hironobu Fujiyoshi(AIST) / Yoshihisa Ijiri(NAIST) / Kenji Mase(Kyoto Univ.) / (NTT) |
Assistant | Masato Ishii(NEC) / Yusuke Sugano(Osaka Univ.) / Takatsugu Hirayama(Nagoya Univ.) / Ryosuke Aoki(NTT) |
Paper Information | |
Registration To | Technical Committee on Pattern Recognition and Media Understanding / Technical Committee on Media Experience and Virtual Environment / Special Interest Group on Computer Vision and Image Media |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | Generating Video of Human Motion Clone from a Still Image using Shape and Color Refinement Networks |
Sub Title (in English) | |
Keyword(1) | human motion clone |
Keyword(2) | reference video |
Keyword(3) | target image |
Keyword(4) | Shape Refinement Network |
Keyword(5) | Color Refinement Network |
1st Author's Name | Yusuke Miyawaki |
1st Author's Affiliation | Osaka University(Osaka Univ.) |
2nd Author's Name | Kazuaki Nakamura |
2nd Author's Affiliation | Osaka University(Osaka Univ.) |
3rd Author's Name | Seiko Myojin |
3rd Author's Affiliation | Osaka University(Osaka Univ.) |
4th Author's Name | Naoko Nitta |
4th Author's Affiliation | Osaka University(Osaka Univ.) |
5th Author's Name | Noboru Babaguchi |
5th Author's Affiliation | Osaka University(Osaka Univ.) |
Date | 2018-01-18 |
Paper # | PRMU2017-115,MVE2017-36 |
Volume (vol) | vol.117 |
Number (no) | PRMU-391,MVE-392 |
Page | pp.45-50 (PRMU), pp.45-50 (MVE) |
#Pages | 6 |
Date of Issue | 2018-01-11 (PRMU, MVE) |