Presentation 1996/9/12
Voiced / Unvoiced / Mixed Excitation Classification of Continuous Speech using Electroglottogram
Y. Kishi, T. Funada
Abstract(in Japanese) (See Japanese page)
Abstract(in English) We present an algorithm for automatically classifying speech into four categories (voiced, unvoiced, mixed, and silent), without manual operation, from the speech signal and the electroglottogram (EGG). Developing the algorithm requires speech signals and EGG data that have already been classified, so we construct it using manually classified speech and EGG. In this research, by considering the periodicity of the sequence of glottal closure instants extracted from the EGG with a neural network, we obtained correct classification accuracies of 74.6% for the mixed category and 96.7% for the voiced category.
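The following Python sketch illustrates the general idea described in the abstract: using the regularity (periodicity) of the spacing between glottal closure instants (GCIs), together with frame energy, to assign voiced / unvoiced / mixed / silent labels. It is a minimal illustration only; the frame length, thresholds, and the simple energy and jitter cues are assumptions for this sketch, not the authors' neural-network-based algorithm.

import numpy as np

def classify_frames(speech, gci_times, fs, frame_len=0.025, hop=0.010,
                    energy_thresh=1e-4, jitter_thresh=0.15):
    """Label each frame as 'silent', 'unvoiced', 'mixed', or 'voiced'.

    speech     : 1-D array of speech samples
    gci_times  : glottal closure instants in seconds (e.g. extracted from EGG)
    fs         : sampling frequency in Hz
    NOTE: thresholds and cues are illustrative assumptions, not the paper's method.
    """
    labels = []
    n_hop = int(hop * fs)
    n_frame = int(frame_len * fs)
    gci = np.asarray(gci_times)
    for start in range(0, len(speech) - n_frame + 1, n_hop):
        t0, t1 = start / fs, (start + n_frame) / fs
        frame = speech[start:start + n_frame]
        energy = np.mean(frame ** 2)
        if energy < energy_thresh:
            labels.append('silent')            # negligible energy
            continue
        # GCIs falling in or near this frame
        in_frame = gci[(gci >= t0 - frame_len) & (gci <= t1 + frame_len)]
        if len(in_frame) < 3:
            labels.append('unvoiced')          # no periodic glottal activity
            continue
        periods = np.diff(in_frame)
        jitter = np.std(periods) / np.mean(periods)  # regularity of GCI spacing
        # Regular GCI spacing -> voiced; irregular spacing with energy -> mixed
        labels.append('voiced' if jitter < jitter_thresh else 'mixed')
    return labels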
Keyword(in Japanese) (See Japanese page)
Keyword(in English) electroglottogram / mixed excitation / glottal closure instant
Paper # DSP-96-71,SP-96-46
Date of Issue

Conference Information
Committee DSP
Conference Date 1996/9/12 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Digital Signal Processing (DSP)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Voiced / Unvoiced / Mixed Excitation Classification of Continuous Speech using Electroglottogram
Sub Title (in English)
Keyword(1) electroglottogram
Keyword(2) mixed excitation
Keyword(3) glottal closure instant
1st Author's Name Y. Kishi
1st Author's Affiliation Faculty of Engineering, Kanazawa University
2nd Author's Name T. Funada
2nd Author's Affiliation Faculty of Engineering, Kanazawa University
Date 1996/9/12
Paper # DSP-96-71,SP-96-46
Volume (vol) vol.96
Number (no) 238
Page pp.-
#Pages 7
Date of Issue