Presentation 2005/5/13
Segmentation of Sign Language for making HMM
Kana KAWAHIGASHI, Yoshiaki SHIRAI, Nobutaka SHIMADA, Jun MIURA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) In this paper, sign language is recognized with HMMs built from hand features of position and shape. When the face and hands overlap, the hand regions are extracted by taking occlusion into account, and appropriate hand features are then computed from those regions. In the training phase, image sequences are segmented automatically to determine the number of HMM states. Experimental results of recognition using the HMMs are presented.
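The abstract describes recognition by HMMs trained on hand-feature sequences. As a rough illustration of the final classification step only (not the paper's actual features, occlusion handling, or automatic state-counting), the sketch below scores a quantized observation sequence against two hand-built left-to-right discrete HMMs with the scaled forward algorithm and picks the more likely sign. All model parameters and the symbol alphabet here are invented for the example.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling.
    pi: initial state probs, A: state transition matrix,
    B: per-state emission probs over the symbol alphabet."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        s = sum(alpha)                      # scale to avoid underflow
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return loglik + math.log(sum(alpha))

# Two toy 2-state left-to-right models over a 3-symbol alphabet
# (think of symbols as coarsely quantized hand-position/shape codes).
pi = [1.0, 0.0]
A  = [[0.6, 0.4],
      [0.0, 1.0]]
B_sign1 = [[0.8, 0.1, 0.1],   # state 0 mostly emits symbol 0
           [0.1, 0.8, 0.1]]   # state 1 mostly emits symbol 1
B_sign2 = [[0.1, 0.1, 0.8],   # state 0 mostly emits symbol 2
           [0.1, 0.8, 0.1]]

obs = [0, 0, 1, 1]
scores = {"sign1": forward_loglik(obs, pi, A, B_sign1),
          "sign2": forward_loglik(obs, pi, A, B_sign2)}
best = max(scores, key=scores.get)
print(best)  # the sequence fits sign1's emission pattern better
```

In practice the state count per sign would come from the paper's automatic segmentation of training sequences, and emissions would model continuous position/shape features rather than a toy discrete alphabet.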
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Sign Language / HMM / State / Feature Extraction
Paper # WIT2005-21
Date of Issue

Conference Information
Committee WIT
Conference Date 2005/5/13 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Well-being Information Technology(WIT)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Segmentation of Sign Language for making HMM
Sub Title (in English)
Keyword(1) Sign Language
Keyword(2) HMM
Keyword(3) State
Keyword(4) Feature Extraction
1st Author's Name Kana KAWAHIGASHI
1st Author's Affiliation Department of Mechanical Systems, Graduate School of Engineering, Osaka University
2nd Author's Name Yoshiaki SHIRAI
2nd Author's Affiliation Department of Human and Computer Intelligence, Ritsumeikan University
3rd Author's Name Nobutaka SHIMADA
3rd Author's Affiliation Department of Human and Computer Intelligence, Ritsumeikan University
4th Author's Name Jun MIURA
4th Author's Affiliation Department of Mechanical Systems, Graduate School of Engineering, Osaka University
Date 2005/5/13
Paper # WIT2005-21
Volume (vol) vol.105
Number (no) 67
Page pp.-
#Pages 6
Date of Issue