Presentation 2011-02-17
An analytical method of semantic content for a large amount of video archive
Yoshihiko KAWAI, Mahito FUJII
Abstract(in English) Semantic content analysis is very important for efficiently retrieving huge amounts of video data. This paper proposes a method for extracting concepts such as objects and events from video archives based on image processing. First, the method calculates a feature vector for each shot from gradient features in local regions and global features such as texture and color distribution. Two algorithms are used to calculate the gradient features in order to improve detection accuracy. Then, the random forests method, a kind of ensemble learning algorithm, is used to decide whether a shot includes a specific concept. We performed experiments to extract 130 kinds of concepts defined in TRECVID 2010 from video archives approximately 400 hours long.
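
The pipeline summarized in the abstract (per-shot feature vector from local gradient features plus global color/texture features, then a random forest per concept) can be sketched roughly as follows. This is only an illustrative outline, not the authors' implementation: the crude per-patch gradient histograms stand in for the two gradient-feature algorithms used in the paper, the patch size, 32-word KMeans codebook, and toy texture statistics are arbitrary assumptions, and scikit-learn's RandomForestClassifier is assumed as the ensemble learner.

# Minimal sketch (assumptions noted above): bag-of-keypoints over local
# gradient descriptors + global color/texture features, classified per concept
# with a random forest. Toy data only; not the method evaluated in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier


def local_gradient_descriptors(frame, patch=16, bins=8):
    """Describe each patch of a greyscale frame by a magnitude-weighted
    orientation histogram of its intensity gradients (a crude SIFT stand-in)."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    descs = []
    h, w = frame.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = ang[y:y + patch, x:x + patch].ravel()
            m = mag[y:y + patch, x:x + patch].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(-np.pi, np.pi), weights=m)
            descs.append(hist / (hist.sum() + 1e-9))
    return np.array(descs)


def global_features(frame_rgb, bins=8):
    """Global color distribution plus a simple texture statistic."""
    color_hist = [np.histogram(frame_rgb[..., c], bins=bins, range=(0, 255))[0]
                  for c in range(3)]
    color_hist = np.concatenate(color_hist).astype(float)
    color_hist /= color_hist.sum() + 1e-9
    grey = frame_rgb.mean(axis=2)
    gy, gx = np.gradient(grey)
    texture = np.array([np.hypot(gx, gy).mean(), grey.std()])
    return np.concatenate([color_hist, texture])


def shot_feature(frames_rgb, codebook):
    """Bag-of-keypoints histogram over all local descriptors in the shot,
    concatenated with averaged global features."""
    descs = np.vstack([local_gradient_descriptors(f.mean(axis=2)) for f in frames_rgb])
    words = codebook.predict(descs)
    bok = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    bok /= bok.sum() + 1e-9
    glob = np.mean([global_features(f) for f in frames_rgb], axis=0)
    return np.concatenate([bok, glob])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: each "shot" is three random 64x64 RGB frames; label 1 means the
    # shot contains the target concept (assigned at random here, for demo only).
    shots = [rng.integers(0, 256, size=(3, 64, 64, 3)) for _ in range(40)]
    labels = rng.integers(0, 2, size=40)

    # Build the visual codebook from all local descriptors in the training shots.
    all_descs = np.vstack([local_gradient_descriptors(f.mean(axis=2))
                           for s in shots for f in s])
    codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(all_descs)

    # One binary random forest per target concept decides shot-level presence.
    X = np.array([shot_feature(s, codebook) for s in shots])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print("concept present?", clf.predict(X[:5]))

In practice one such classifier would be trained for each of the 130 TRECVID 2010 concepts, with the codebook shared across concepts.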
Keyword(in English) Semantic content analysis / bag-of-keypoints / texture feature / random forests / TRECVID
Paper # PRMU2010-214
Date of Issue

Conference Information
Committee PRMU
Conference Date 2011/2/10 (1 day)
Place (in English)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Pattern Recognition and Media Understanding (PRMU)
Language JPN
Title (in English) An analytical method of semantic content for a large amount of video archive
Sub Title (in English)
Keyword(1) Semantic content analysis
Keyword(2) bag-of-keypoints
Keyword(3) texture feature
Keyword(4) random forests
Keyword(5) TRECVID
1st Author's Name Yoshihiko KAWAI
1st Author's Affiliation Science and Technical Research Laboratories, NHK
2nd Author's Name Mahito FUJII
2nd Author's Affiliation Science and Technical Research Laboratories, NHK
Date 2011-02-17
Paper # PRMU2010-214
Volume (vol) vol.110
Number (no) 414
Page
#Pages 6
Date of Issue