Presentation 2004/12/13
WHAT HMMS CAN'T DO
Jeff A. Bilmes,
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Hidden Markov models (HMMs) are the predominant methodology for automatic speech recognition (ASR) systems. Ever since their inception, it has been said that HMMs are an inadequate statistical model for such purposes. Results over the years have shown, however, that HMM-based ASR performance continually improves given enough training data and engineering effort. In this paper, we argue that there are, in theory at least, no limitations to the class of probability distributions representable by HMMs. In search of a model to supersede the HMM for ASR, therefore, we should search for models with better parsimony, computational properties, and noise insensitivity, and that better utilize high-level knowledge sources.
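For readers unfamiliar with the model the abstract refers to, the following is a minimal sketch (not taken from the paper) of the distribution an HMM defines over an observation sequence, p(o_1..T), computed with the standard forward recursion. The state and symbol counts and all parameter values are arbitrary and for illustration only.

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Return p(obs) under a discrete HMM with initial distribution pi,
    transition matrix A[i, j] = p(s_t = j | s_{t-1} = i), and
    emission matrix B[i, k] = p(o_t = k | s_t = i)."""
    alpha = pi * B[:, obs[0]]            # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij * b_j(o_t)
    return alpha.sum()                   # p(o_1..T) = sum_i alpha_T(i)

# Toy 2-state, 3-symbol HMM (parameter values are arbitrary).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])
print(hmm_likelihood(pi, A, B, obs=[0, 1, 2]))
```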
Keyword(in Japanese) (See Japanese page)
Keyword(in English)
Paper # NLC2004-45,SP2004-85
Date of Issue

Conference Information
Committee NLC
Conference Date 2004/12/13 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Natural Language Understanding and Models of Communication (NLC)
Language ENG
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) WHAT HMMS CAN'T DO
Sub Title (in English)
Keyword(1)
1st Author's Name Jeff A. Bilmes
1st Author's Affiliation Dept. of Electrical Engineering, University of Washington
Date 2004/12/13
Paper # NLC2004-45,SP2004-85
Volume (vol) vol.104
Number (no) 538
Page
#Pages 6
Date of Issue