SP Subject Index 1996

{SP96-1} T. Moriyama, T. Yamashita, K. Hajiri, H. Ogawa and S. Tanpaku,
``Proposing and evaluating a model of fundamental frequency control incorporating vertical laryngeal movement,''
IEICE Technical Report, SP96-1, pp.1--8, May 1996.
{ Fundamental frequency, Sternohyoid muscle, Cricothyroid muscle }
                                                    
{SP96-2} I. Masuda and K. Aikawa,
``A Method for predicting perceived spectral sequences based on an FM-tracking model,''
IEICE Technical Report, SP96-2, pp.9--16, May 1996.
{ Phonemic restoration, Auditory scene analysis, Prediction, Signal separation, Noise reduction }
                                                    
{SP96-3} H. Sakakibara, M. Saio, T. Nakai and H. Suzuki,
``FEM analysis of acoustic transmission of the nasal tract and its modified shape observed by MRI,''
IEICE Technical Report, SP96-3, pp.17--22, May 1996.
{ }
                                                    
{SP96-4} Y. Kawanishi, J. Dang, K. Honda and H. Suzuki,
``Variations of the end correction coefficient of branches within an acoustic tube with their shapes,''
IEICE Technical Report, SP96-4, pp.23--30, May 1996.
{ Acoustic tube, End correction, End correction coefficient, Vocal tract model, Transmission line model }
                                                    
{SP96-5} T. Kaburagi and M. Honda,
``On calibration methods for a magnetic position-sensing system,''
IEICE Technical Report, SP96-5, pp.31--38, May 1996.
{ Speech production, Articulator movement, Magnetic position-sensing system, Calibration method }
                                                    
{SP96-6} H. Banno, K. Takeda and F. Itakura,
``Speech morphing by progressive interpolation of spectra,''
IEICE Technical Report, SP96-6, pp.39--44, May 1996.
{ speech morphing, interpolation, DP matching }
                                                    
{SP96-7} N. Campbell and A. Black,
``Chatr: a multi-lingual speech re-sequencing synthesis system,''
IEICE Technical Report, SP96-7, pp.45--52, May 1996.
{ speech synthesis, random access, signal processing, natural speech, multilingual synthesis }
                                                    
{SP96-8} T. Koyama and N. Koizumi,
``Speech Synthesis By Rule Based On VCV Waveform Synthesis Units,''
IEICE Technical Report, SP96-8, pp.53--60, May 1996.
{ speech synthesis, synthesis by rule, synthesis units, unvoiced syllable }
                                                    
{SP96-9} R. Oguro, N. Kondo and K. Ozeki,
``Performance Comparison of Scoring Functions for Continuous Japanese Speech Recognition,''
IEICE Technical Report, SP96-9, pp.61--68, May 1996.
{ ergodic HMM, scoring function, word-spotting, normalization of likelihood }
                                                     
{SP96-10} P. Luo and K. Ozeki,
``Speech Feature Parameter Normalization Using Affine Transformation,''
IEICE Technical Report, SP96-10, pp.69--74, May 1996.
{ speaker normalization, speaker-independent speech recognition, speaker adaptation, speech feature parameter, affine transformation }
                                                    
{SP96-11} N. Eguchi and K. Ozeki,
``Dependency Analysis of Japanese Sentences Using Prosodic Information,''
IEICE Technical Report, SP96-11, pp.75--81, May 1996.
{ prosodic information, dependency analysis, dependency distance, dependency rule, minimization of total penalty }
                                                    
{SP96-12} M. Ida and S. Nakagawa,
``Comparison between Beam Search and A* Search Methods for Speech Recognition,''
IEICE Technical Report, SP96-12, pp.1--8, June 1996.
{ speech recognition, beam search, A* search, reduction of search space }
                                                        	
{SP96-13}  M. Kato, A. Ito and M. Kohda,
``A Study on Word Preselection Using Optimal Phoneme Sequences,''
IEICE Technical Report, SP96-13, pp.9--14, June 1996.
{ speech recognition, hidden Markov model, word preselection, phoneme DP matching }
                                                        
{SP96-14}  H. Yamamoto, T. Kosaka, M. Yamada, Y. Komori and M. Fujita,
``Fast speech recognition algorithm under noisy environment using modified CMS-PMC and IDMM+SQ,''
IEICE Technical Report, SP96-14, pp.15--22, June 1996.
{ speech recognition, noisy environment, fast recognition algorithm, HMM, environment adaptation, scalar quantization, independent dimension multi-mixture computation }
                                                        
{SP96-15}  T. Endo, M. Nakazawa, K. Furukawa and R. Oka,
``Dynamic replacement of reference patterns for a real-time and frame-wise word spotting system,''
IEICE Technical Report, SP96-15, pp.23--28, June 1996.
{ Real-time, Spotting, topic, large vocabulary }
                                                        
{SP96-16}  K. Furukawa, M. Nakazawa, T. Endo and R. Oka,
``Decreasing computational burden using the Orientation Pattern in RIFCDP,''
IEICE Technical Report, SP96-16, pp.29--36, June 1996.
{ Orientation Pattern, speech reference, RIFCDP, Cepstrum, Decreasing }
                                                        
{SP96-17}   K. Markov and S. Nakagawa,
``Text-Independent Speaker Recognition System using Frame Level Likelihood Processing,''
IEICE Technical Report, SP96-17, pp.37--44, June 1996.
{ speaker identification, speaker verification, likelihood normalization, frame level processing }
                                                        
{SP96-18} M. Yoram and K. Hirose,
``Language Training System Using Speech Processing Techniques,''
IEICE Technical Report, SP96-18, pp.45--52, June 1996.
{ speech training, computer assisted learning, speech recognition, speech synthesis, speech modification, fundamental frequency contour modeling }
                                                        
{SP96-19} M. Shozakai, S. Nakamura and K. Shikano,
``A Study of Speech Enhancement Approach and Model Adaptation Approach for Speech Recognition,''
IEICE Technical Report, SP96-19, pp.53--60, June 1996.
{ speech recognition, speech enhancement, model adaptation, additive noise, multiplicative distortion }
                                                        
{SP96-20} Y. Osaka and S. Makino,
``Phoneme Recognition using Phoneme Duration Estimation based on Speaking Rate,''
IEICE Technical Report, SP96-20, pp.1--6, June 1996.
{ phoneme recognition, speaking rate, phoneme duration }
                                                        
{SP96-21} Y. Okimoto and S. Makino,
``Phoneme recognition using variable length reference templates and discriminative training,''
IEICE Technical Report, SP96-21, pp.7--14, June 1996.
{ discriminative training, LVQ, DP-matching, 2-stage-DP }
                                                        
{SP96-22} T. Hori, M. Katoh, A. Ito and M. Kohda,
``A Study on HM-Net using Successive State Splitting based on Phonetic Decision Tree,''
IEICE Technical Report, SP96-22, pp.15--22, June 1996.
{ speech recognition, hidden Markov model, context-dependent model, hidden Markov network, phonetic decision tree }
                                                        
{SP96-23} K. Ohkura and M. Iida,
``VFS speaker adaptation with a method for controlling the interpolation and smoothing parameters based on classification error,''
IEICE Technical Report, SP96-23, pp.23--30, June 1996.
{ HMM, Speaker adaptation, Word recognition, Maximum a posteriori estimation }
                                                        
{SP96-24} T. Fukada, Y. Taniguchi and Y. Sagisaka,
``Model Parameter Estimation for Mixture Density Stochastic Segment Models,''
IEICE Technical Report, SP96-24, pp.31--38, June 1996.
{ stochastic segment model, mixture density, EM algorithm, HMM, speech recognition }
                                                        
{SP96-25} T. Otsuki, K. Ninomiya, T. Nishizaki and T. Ohtomo,
``The Property of Phoneme Labels Given by Auto-Study,''
IEICE Technical Report, SP96-25, pp.39--46, June 1996.
{ auto-study, auto-labeling, phoneme recognition, dual-width windowed segment, LVQ }
                                                        
{SP96-26} H. Matsuo and M. Ishigame,
``Evaluation of Isolated Word Recognition Method using Non Left-Right HMM,''
IEICE Technical Report, SP96-26, pp.47--52, June 1996.
{ HMM, non left-right HMM, speaker independent isolated word recognition, training section, concatenated training }
                                                        
{SP96-27} K. Ninomiya, T. Otsuki and T. Ohtomo,
``Study of LVQ Speech Recognition System Using Segmental Unit Input with Dual-Width Window,''
IEICE Technical Report, SP96-27, pp.53--60, June 1996.
{ cepstrum, Δ cepstrum, LVQ, segmental unit input with dual-width window, K-L expansion }
                                                        
{SP96-28} M. Nakazawa, T. Endo, K. Furukawa, J. Toyoura and R. Oka,
``A Study on Speech Summary Using Demi-Phoneme Symbols Generated from Speech Waves,''
IEICE Technical Report, SP96-28, pp.61--68, June 1996.
{ speech summary, topic summary, speech recognition, demi-phoneme symbols }
                                                        
{SP96-29} N. Minematsu and S. Nakagawa,
``Automatic Identification of Words with Type 1 Accent Based Upon the Accent Nucleus Detection at the Head of Words Using HMMs,''
IEICE Technical Report, SP96-29, pp.69--74, June 1996.
{ prosodic feature, accent nucleus, vowel detection, speech perception, speech recognition, HMM }
                                                        
{SP96-30} Y. Niimi and Y. Kobayashi,
``A Dialog Control Strategy Based on the Reliability of Speech Recognition,''
IEICE Technical Report, SP96-30, pp.75--80, June 1996.
{ dialog control strategy, reliability of recognition, performance of a dialog system }
                                                        
{SP96-31} H. Kokubo, Y. Sagisaka, N. Suzuki and M. Okada,
``Situated Parser -- An organic parsing architecture for spontaneous speech --,''
IEICE Technical Report, SP96-31, pp.81--88, June 1996.
{ parser, spontaneous speech, organic architecture, Agent network architecture, emergence }
                                                        
{SP96-32} T. Nishimoto, N. Shida, T. Kobayashi, S. Haruyama and T. Kobayashi,
``The Effect of Users' Experiences on Speech Input System and Non-Command Word Rejection Using Face Image,''
IEICE Technical Report, SP96-32, pp.89--96, June 1996.
{ Multi-Modal Interface, Speech Recognition, Face Direction Recognition, Non-command Word Rejection }
                                                        
{SP96-33} K. Kato and S. Amano,
``Auditory Evoked Magnetic Fields in the Perception of Word/Nonword,''
IEICE Technical Report, SP96-33, pp.1--8, July 1996.
{ brain magnetic fields, word/nonword, lexical decision, dipole fitting, auditory perception area }
                                                        
{SP96-34} M. Akagi, N. Takagi, T. Kitamura, N. Suzuki, Y. Fujita and K. Michi,
``Perception of lateral misarticulation and its physical correlates,''
IEICE Technical Report, SP96-34, pp.9--16, July 1996.
{ lateral misarticulation, spectral envelope, LMA analysis-synthesis method, acoustic tube model }
                                                        
{SP96-35} M. Mizushima and K. Itoh,
``Effect of Auto Gain Control and Noise Reduction on Speech Perception of the Hearing Impaired,''
IEICE Technical Report, SP96-35, pp.17--24, July 1996.
{ noise reduction, auto gain control, hearing impaired, digit intelligibility }
                                                        
{SP96-36} M. Yanagida,
``Aging of Speech Listening Ability,''
IEICE Technical Report, SP96-36, pp.25--32, July 1996.
{ Aged, Aging, Mishearing, Synthetic Speech, Hearing Loss, Language Ability }
                                                        
{SP96-37} M. Unoki and M. Akagi,
``A Study on Computational model of Co-modulation Masking Release,''
IEICE Technical Report, SP96-37, pp.1--8, July 1996.
{ co-modulation masking release (CMR), auditory scene analysis, gammatone filter, wavelet filterbank }
                                                        
{SP96-38} S. Amano and T. Kondo,
``Relations between Auditory Word Familiarity and Relative Frequencies of Japanese Words and Moras,''
IEICE Technical Report, SP96-38, pp.9--16, July 1996.
{ mora, word, familiarity, distribution }
                                                        
{SP96-39} H. Mitsumoto, M. Yanagida, H. Otawa and S. Tamura,
``Detection of Ironical Utterance Based on Acoustic Features,''
IEICE Technical Report, SP96-39, pp.17--24, July 1996.
{ Irony, sensitivity, Prosody, Utterance recognition }
                                                        
{SP96-40} M. Abe,
``Speech morphing by gradually changing spectrum parameter and fundamental frequency,''
IEICE Technical Report, SP96-40, pp.25--32, July 1996.
{ morphing, fundamental frequency, speech spectrum }
                                                        
{SP96-41} Y. Ishikawa and T. Ebihara,
``A Network Model of F0 Generation for Japanese Text-to-Speech Systems,''
IEICE Technical Report, SP96-41, pp.33--40, July 1996.
{ speech synthesis, prosody, intonation, pitch frequency control, syntactical analysis }
                                                        
{SP96-42} K. Iwamoto, Y. Xiao and Y. Tadokoro,
``Analysis of a Simplified RLS Algorithm for the Estimation of Sinusoidal Signal,''
IEICE Technical Report, SP96-42, pp.1--8, Sept. 1996.
{ Adaptive algorithm, Estimation of sinusoidal signals, Mean square error, Performance analysis }
                                                        
{SP96-43} T. Deng,
``Design of Linear Phase Variable 2-D Digital Filters Using Real-Complex Decomposition,''
IEICE Technical Report, SP96-43, pp.9--16, Sept. 1996.
{ variable digital filter, real-complex decomposition, constant digital filter }
                                                        
{SP96-44} H. Hasegawa, I. Yamada and K. Sakaniwa,
``Distance from a real Schur polynomial to the set of all real non-Schur polynomials -- A study on the robustness of a real stable polynomial --,''
IEICE Technical Report, SP96-44, pp.17--22, Sept. 1996.
{ Stability, real Schur polynomial, robustness }
                                                        
{SP96-45} M. Ikeda, K. Takeda and F. Itakura,
``Speech Enhancement by Quadrature Comb-filtering,''
IEICE Technical Report, SP96-45, pp.23--30, Sept. 1996.
{ Speech Enhancement, Comb-filter, Quadrature Filter, Orthogonal Transform }
                                                        
{SP96-46} Y. Kishi and T. Funada,
``Voiced/Unvoiced/Mixed Excitation Classification of Continuous Speech using Electroglottogram,''
IEICE Technical Report, SP96-46, pp.31--38, Sept. 1996.
{ electroglottogram, mixed excitation, glottal closure instant }
                                                        
{SP96-47} S. Kajita, K. Takeda and F. Itakura,
``Speech Recognition Using Binaural Subband-Crosscorrelation Analysis,''
IEICE Technical Report, SP96-47, pp.39--46, Sept. 1996.
{ Filter Bank, Cross-correlation, DTW, Microphone Array, Binaural Processing }
                                                        
{SP96-48} S. Hayakawa, K. Takeda and F. Itakura,
``Speaker recognition using the information in the LP-residual signal,''
IEICE Technical Report, SP96-48, pp.47--54, Sept. 1996.
{ LP-residual signal, speaker recognition, period between reference and test utterances, power difference in each subband, harmonic structure }
                                                        
{SP96-49} T. Nishino, K. Takeda and F. Itakura,
``Principal Component Analysis on Head Related Transfer Function,''
IEICE Technical Report, SP96-49, pp.1--8, Sept. 1996.
{ head related transfer function, principal components analysis, Fourier series expansion, interpolation }
                                                        
{SP96-50} N. Takahashi, T. Nakai and H. Suzuki,
``A Speech Production Simulation Taking Account of the Nasal Tract Using the Finite Element Method,''
IEICE Technical Report, SP96-50, pp.9--14, Sept. 1996.
{ finite element method, nasal tract, speech production, simulation }

{SP96-51} S. Kobayashi and S. Kitazawa,
``Transcription of Paralinguistic Features and Their Physical Correlates,''
IEICE Technical Report, SP96-51, pp.15--22, Sept. 1996.
{ Transcription, Paralinguistic Features, Prominence, Acoustic Difference, Difference Limen }
 
{SP96-52} T. Sugihara, S. Kajita, K. Takeda and F. Itakura,
``Estimation of DOA Using Zero-Crossing Information,''
IEICE Technical Report, SP96-52, pp.23--30, Sept. 1996.
{ zero-crossing, cross-correlation, time delay between channels, DOA, auditory nerve firing, microphone array }
                                                        
{SP96-53} S. Hayashi, A. Kataoka and S. Kurihara,
``Speech Coding Algorithm and Quality Assessment for V.70 Digital Simultaneous Voice and Data,''
IEICE Technical Report, SP96-53, pp.31--38, Sept. 1996.
{ V.70 DSVD, V.34 MODEM, G.729 CS-ACELP, Medium low bit-rate speech coding }
                                                        
{SP96-54} H. Ito, S. Kajita, K. Takeda and F. Itakura,
``Reduction of LPC Spectrum Dimension Using a Wine-Glass-Type Neural Network,''
IEICE Technical Report, SP96-54, pp.39--44, Sept. 1996.
{ wine-glass-type neural network, LPC spectrum, identity mapping, dimension reduction, word recognition }
                                                        
{SP96-55} N. Inoue, M. Nakamura, S. Sakayori, S. Yamamoto and F. Yato,
``Field Trial of Operator Assisting System,''
IEICE Technical Report, SP96-55, pp.1--6, Oct. 1996.
{ Speech Recognition, Practical System, Field Trial }
                                                        	
{SP96-56} M. Schuster,
``Bi-directional recurrent neural networks for speech recognition,''
IEICE Technical Report, SP96-56, pp.7--12, Oct. 1996.
{ continuous speech recognition, recurrent neural network }
                                                        
{SP96-57} M. Sasanuma and S. Itahashi,
``Classification of Spoken Languages by Fundamental Frequency,''
IEICE Technical Report, SP96-57, pp.13--22, Oct. 1996.
{ fundamental frequency (F0), polygonal lines, exponential functions, principal component analysis, discriminant analysis }
                                                        
{SP96-58} T. Nakai,
``Analysis of Differential Glottal Flow of Helium Speech,''
IEICE Technical Report, SP96-58, pp.23--28, Oct. 1996.
{ glottal flow, helium speech, vocal cord model, diving }
                                                        
{SP96-59} M. Hashimoto and N. Higuchi,
``Analysis of Acoustic Features Affecting Speaker Identification,''
IEICE Technical Report, SP96-59, pp.29--36, Oct. 1996.
{ speaker individuality, speaker identification, voice conversion }
                                                        
{SP96-60} M. Tamoto and T. Kawabata,
``Face Direction Recognition based on Acoustic Reflection Characteristics,''
IEICE Technical Report, SP96-60, pp.1--4, Nov. 1996.
{ Speech Understanding, Discourse Understanding, Face Direction Recognition, JUNO }
                                                        
{SP96-61} T. Taniguchi, S. Kajita, H. Yehia, K. Takeda and F. Itakura,
``Evaluation of the Source Separation Method Based on Minimizing Mutual Information under Real Environments,''
IEICE Technical Report, SP96-61, pp.5--12, Nov. 1996.
{ source separation, blind separation, minimizing mutual information, SDR }
                                                        
{SP96-62} M. Kashihara and Y. Ariki,
``Speech Segmentation under Noisy and Musical Circumstances by Subspace Method,''
IEICE Technical Report, SP96-62, pp.13--18, Nov. 1996.
{ subspace method, CLAFIC method, projection length, LPC cepstrum, FFT spectrum, white Gaussian noise }
                                                        
{SP96-63} Y. Kawabata,
``How to Determine the Inheritance Factors of BPD Back-off N-gram Smoothing Method,''
IEICE Technical Report, SP96-63, pp.19--24, Nov. 1996.
{ Natural language, n-gram, back-off, JUNO }
                                                        	
{SP96-64} K. Tanaka and H. Kojima, 
``On the Discriminative Feature Extraction from Speech Power Spectra for Speech Recognition,''
IEICE Technical Report, SP96-64, pp.25--30, Nov. 1996.
{ Speech recognition, Power spectrum enhancement, Discriminative dynamic features }
                                                        	
{SP96-65} T. Osanai, H. Kido, T. Kamada and M. Tanimoto,
``Speaker-Independent Vowel Recognition Using Discriminant Function,''
IEICE Technical Report, SP96-65, pp.31--36, Nov. 1996.
{ Discriminant function, Speaker-independent vowel recognition, Text-independent speaker recognition }
                                                        
{SP96-66} M. Sakurai and Y. Ariki,
``Classification and Indexing of News Speech Articles by Keyword Spotting,''
IEICE Technical Report, SP96-66, pp.37--44, Nov. 1996.
{ Spotting, News Speech, Article Classification, Indexing, Database }
                                                        
{SP96-67} L. Zhao, Y. Kobayashi and Y. Niimi,
``Recognition of Chinese Continuous Speech Based on the Integration of Phonetic and Prosodic Information,''
IEICE Technical Report, SP96-67, pp.45--50, Nov. 1996.
{ Recognition of chinese continuous speech, phonetic information, Prosodic information, Integration }
                                                        
{SP96-68} S. Kitazawa, H. Sugiura, D. Shitaoka and S. Kobayashi,
``Studies with the TEMAX for assessment of the rhythmic components in speech,''
IEICE Technical Report, SP96-68, pp.51--58, Nov. 1996.
{ rhythm, prosody, isosyllabism, isochronism, TEMAX, bimoraic foot }
                                                        
{SP96-69} S. Kobayashi and S. Kitazawa,
``Correlates between Feeling of Height, Loudness and Tempo of Speech and Physical Features,''
IEICE Technical Report, SP96-69, pp.1--8, Dec. 1996.
{ Transcription, Paralinguistic Features, Prominence, Acoustic Difference }
                                                        
{SP96-70} K. Sasaki, N. Miki and Y. Ogawa,
``Evaluation of an Estimator with TLS for the Vocal-Tract Function Using a Vocal-Tract Analog Synthesizer,''
IEICE Technical Report, SP96-70, pp.9--14, Dec. 1996.
{ structure of lungs, vocal-tract analog model, vocal-tract function }
                                                        
{SP96-71} T. Fukada and Y. Sagisaka,
``Automatic Generation of Pronunciation Dictionary Based on Pronunciation Networks,''
IEICE Technical Report, SP96-71, pp.15--22, Dec. 1996.
{ pronunciation dictionary, neural networks, spontaneous speech, HMM, speech recognition }
                                                        
{SP96-72} M. Murata and M. Nagao,
``Resolution of Verb Phrase Ellipsis using Surface Expressions and Examples,''
IEICE Technical Report, SP96-72, pp.23--30, Dec. 1996.
{ Verb, Ellipsis, Surface Expression, Example }
                                                        
{SP96-73} M. Kawamori and A. Shimazu,
``On the Interpretation of Redundant Expressions in Spoken Language,''
IEICE Technical Report, SP96-73, pp.31--38, Dec. 1996.
{ Dialogue understanding, redundant utterances, repetition, repair, intonation }
                                                        
{SP96-74} N. Ogata,
``Activation of Information in Dialogue and Information of Pitch,''
IEICE Technical Report, SP96-74, pp.39--46, Dec. 1996.
{ dialogue, pitch, activation of information, word order, ellipsis }
                                                        
{SP96-75} T. Watanabe, M. Araki and S. Doshita,
``Conflict Resolution in Difference of Knowledge and Recognition Errors in Dialogue System,''
IEICE Technical Report, SP96-75, pp.47--52, Dec. 1996.
{ recognition error, misunderstanding, difference of knowledge, conflict resolution }
                                                        
{SP96-76} Y. Ooyama, H. Asano and S. Takagi,
``An Interactive Explainer of Kanji Characters used in Japanese Person's Names and Its Evaluation,''
IEICE Technical Report, SP96-76, pp.53--58, Dec. 1996.
{ Text Generation, Natural Language Processing, Spoken Language Generation, Text-to-Speech System, Kanji Explanation }
                                                        
{SP96-77} H. Yamamoto, T. Kosaka, M. Yamada, Y. Komori and M. Fujita,
``Performance of data input system and users' impressions of the system,''
IEICE Technical Report, SP96-77, pp.59--66, Dec. 1996.
{ speech recognition, data input system, field-test, users' impressions }
                                                        
{SP96-78} Y. Yamaguchi, J. Takahashi, S. Takahashi and S. Sagayama,
``Acoustic Model Adaptation by Taylor Series,''
IEICE Technical Report, SP96-78, pp.1--8, Dec. 1996.
{ acoustic model, environmental noise, noise adaptation, Taylor series }
                                                        
{SP96-79} K. Shinoda and T. Watanabe,
``Acoustic Model Generation Using State Clustering by Information Criterion,''
IEICE Technical Report, SP96-79, pp.9--16, Dec. 1996.
{ speech recognition, Hidden Markov Model, recognition unit, information criteria, Minimum Description Length Principle }
                                                        
{SP96-80} T. Hori, M. Katoh, A. Ito and M. Kohda,
``A Study on Improvement of HM-Nets using Decision Tree-based Successive State Splitting,''
IEICE Technical Report, SP96-80, pp.17--24, Dec. 1996.
{ speech recognition, hidden Markov model, context-dependent model, hidden Markov network, phonetic decision tree }
                                                        
{SP96-81} A. Ito and M. Kohda,
``Task adaptation of a stochastic language model for dialogue speech recognition,''
IEICE Technical Report, SP96-81, pp.25--32, Dec. 1996.
{ continuous speech recognition, stochastic language model, N-gram, task adaptation }
                                                        
{SP96-82} K. Yoshida, T. Matsuoka, K. Ohtsuki and S. Furui,
``Large-Vocabulary Continuous Speech Recognition Using Word Trigrams,''
IEICE Technical Report, SP96-82, pp.33--38, Dec. 1996.
{ large vocabulary continuous speech recognition, word trigram, language models }
                                                        
{SP96-83} T. Matsuoka and S. Furui,
``Speech understanding using a statistical translation language model,''
IEICE Technical Report, SP96-83, pp.39--46, Dec. 1996.
{ Speech understanding, Translation, Language modeling, Natural language, Semantic language }
                                                        
{SP96-84} T. Takezawa and T. Morimoto,
``Performance Improvement of Dialogue Speech Recognition Method Using Syntactic Rules and Preterminal Bigrams,''
IEICE Technical Report, SP96-84, pp.47--54, Dec. 1996.
{ Continuous speech recognition, spoken language processing, integrated processing of speech and language, syntactic rules, partial trees, statistical language modeling }
                                                        
{SP96-85} T. Yamada, S. Matsunaga and S. Sagayama,
``Continuous Speech Recognition Using LR Parsing With Effective Hypotheses Merging Mechanism,''
IEICE Technical Report, SP96-85, pp.55--60, Dec. 1996.
{ Speech Recognition, HMM, LR Parser, One-pass Search, Finite State Automaton }
                                                        
{SP96-86} T. Kawahara, C. Lee and B. Juang,
``Key-Phrase Detection and Verification for Flexible Speech Understanding,''
IEICE Technical Report, SP96-86, pp.61--68, Dec. 1996.
{ speech recognition, speech understanding, spoken dialogue processing, word spotting, utterance verification, key-phrase detection }
                                                        
{SP96-87} K. Nishi and S. Ando,
``Harmonics Extraction Methods Based on the Weighted Minimum Square Estimation,''
IEICE Technical Report, SP96-87, pp.1--6, Jan. 1997.
{ Harmonics Extraction, Minimum Square Estimation, Comb Filter, Constant Q, Likelihood Estimation }
                                                        
{SP96-88} T. Takiguchi, S. Nakamura, Q. Huo and K. Shikano,
``Speech Recognition by Adaptation of Model Parameters based on HMM Decomposition in Reverberant Environments,''
IEICE Technical Report, SP96-88, pp.7--12, Jan. 1997.
{ speech recognition, reverberation, hands-free, HMM decomposition }
                                                        
{SP96-89} M. Inoue, T. Yamada, S. Nakamura and K. Shikano,
``Comparative Experiments of Microphone Arrays for Speech Recognition,''
IEICE Technical Report, SP96-89, pp.13--20, Jan. 1997.
{ Speech Recognition, Hands-Free, Microphone Array }
                                                        
{SP96-90} K. Shibata and H. Matsumoto,
``Speaker adaptation based on Tree-Structured Tied-Difference vectors and Their Confidence,''
IEICE Technical Report, SP96-90, pp.21--28, Jan. 1997.
{ Speaker Adaptation, MAP Estimation, Continuous HMM, Tree-Structure, Speech Recognition }
                                                        
{SP96-91} J. Ishii and M. Tonomura,
``Speaker Normalization and Speaker Adaptation Using Linear Regression,''
IEICE Technical Report, SP96-91, pp.29--36, Jan. 1997.
{ linear regression, speaker normalization, speaker adaptation, maximum a posteriori estimation }
                                                        
{SP96-92} T. Matsui, N. Hashimoto, T. Matsuoka and S. Furui,
``N-best based unsupervised speaker adaptation,''
IEICE Technical Report, SP96-92, pp.37--44, Jan. 1997.
{ speech recognition, speaker adaptation, N-best, unsupervised adaptation, instantaneous adaptation }
                                                        
{SP96-93} H. Jiang, K. Hirose and Q. Huo,
``Robust Speech Recognition Based on Bayesian Predictive Approach,''
IEICE Technical Report, SP96-93, pp.45--52, Jan. 1997.
{ plug-in maximum a posteriori (MAP) decision, minimax decision, Bayesian predictive classification, Viterbi BPC, predictive density }
                                                        
{SP96-94} T. Nitta, A. Kawamura, Y. Masai and A. Nakayama,
``High-speed Segment Quantization Based on KL-expansion and Generalized Probabilistic Descent Method,''
IEICE Technical Report, SP96-94, pp.53--60, Jan. 1997.
{ Speech Recognition, HMM, Segment Quantization, Competitive Training, KL-expansion, GPD }
                                                        
{SP96-95} S. Asogawa and N. Akamatsu,
``Intrinsic Mechanism of Vowel Generation,''
IEICE Technical Report, SP96-95, pp.1--8, Jan. 1997.
{ Voice Generation, Vortex Sound, Vowel }
                                                        
{SP96-96} H. Kawahara and A. de Cheveigne,
``Error Free F0 Extraction Method and Its Evaluation,''
IEICE Technical Report, SP96-96, pp.9--18, Jan. 1997.
{ Speech Perception, Pitch, Speech Production, Wavelet, Instantaneous Frequency, Source }
                                                        
{SP96-97} H. Kawahara and I. Masuda,
``Spline-based Approximation of Time-Frequency Representation in STRAIGHT method,''
IEICE Technical Report, SP96-97, pp.19--24, Jan. 1997.
{ Speech perception, Spline, Approximation, Time-frequency representations, Source }
                                                        
{SP96-98} Y. Yoshida, S. Nakajima, K. Hakoda and T. Hirokawa,
``Speech Synthesis by Rule based on Context Dependent Speech Units and the Quality Assessment of Synthesized Speech,''
IEICE Technical Report, SP96-98, pp.25--30, Jan. 1997.
{ waveform speech synthesis, synthesis unit, context dependent phoneme, LBG algorithm, articulation test, intelligibility test }
                                                        
{SP96-99} M. Goto, S. Hangai and K. Miyauchi,
``Figure Voices Recognition Using Digital Cochlear Model,''
IEICE Technical Report, SP96-99, pp.31--38, Jan. 1997.
{ auditory system, digital cochlear model, word recognition, filterbank, figure voices }
                                                        
{SP96-100} H. Kojima and K. Tanaka,
``Extracting Phonemic Structures from Sequences of Piecewise Linear Segments,''
IEICE Technical Report, SP96-100, pp.39--44, Jan. 1997.
{ robustness, piecewise linear segment lattices, formation of phonological concepts }
                                                        
{SP96-101} S. Nakagawa and M. Ida,
``A New Measure of Task Complexity -- SMR-Perplexity --,''
IEICE Technical Report, SP96-101, pp.45--52, Jan. 1997.
{ speech recognition, task complexity, perplexity, SMR-perplexity }
                                                        
{SP96-102} Y. Noda, S. Matsunaga and S. Sagayama,
``An Approximation Technique in Large-Vocabulary Continuous Speech Recognition Using a Word Graph,''
IEICE Technical Report, SP96-102, pp.53--58, Jan. 1997.
{ word-pair approximation, word graph, large-vocabulary continuous speech recognition }
                                                         
{SP96-103} H. Masataki, Y. Sagisaka, K. Hisaki and T. Kawahara,
``Task adaptation using MAP estimation in N-gram Language Modeling,''
IEICE Technical Report, SP96-103, pp.59--64, Jan. 1997.
{ Continuous Speech Recognition, N-gram, Task Adaptation, MAP Estimation }
                                                        
{SP96-104} K. Ito, S. Hayamizu and K. Tanaka,
``Recognition of transcription from speech samples,''
IEICE Technical Report, SP96-104, pp.65--70, Jan. 1997.
{ Unknown word, recognition, acquisition }
                                                        
{SP96-105} K. Yokoi, T. Kawahara and S. Doshita,
``Topic Identification of News Speech using Word Cooccurrence Statistics,''
IEICE Technical Report, SP96-105, pp.71--78, Jan. 1997.
{ speech recognition, information retrieval, topic identification, word cooccurrence, news speech retrieval }
                                                        
{SP96-106} D. Iwahashi, Y. Yamashita and R. Mizoguchi,
``Using pitch patterns in keyword spotting,''
IEICE Technical Report, SP96-106, pp.79--85, Jan. 1997.
{ keyword spotting, prosodic information, pitch pattern, false alarm, DP matching }
                                                        
{SP96-107} O. Tokuyama and T. Taguchi,
``Change in piano tones under different concert pitches,''
IEICE Technical Report, SP96-107, pp.1--6, Feb. 1997.
{ Piano tone, concert pitch, energy decay, inharmonicity, listening experiment }
                                                        
{SP96-108} T. Taguti,
``Report of the 1996 International Conference on Computer Music and Music Science (1996 ICCMMS) -- Shanghai Jiao Tong University, Oct. 15-18, 1996, Shanghai, China --,''
IEICE Technical Report, SP96-108, pp.7--10, Feb. 1997.
{ Conference report, computer music, music science, China }
                                                        
{SP96-109} M. Kato,
``On time variation of Harmonicity of musical tones of non percussive musical instruments -- About Fl/Ob/Trp/Vl --,''
IEICE Technical Report, SP96-109, pp.11--16, Feb. 1997.
{ harmonicity, inharmonicity, non percussive tone, vibrato, frequency, pitch, filter }
                                                        	
{SP96-110} G. Ohyama,
``A Study on the Acoustical Characteristics of Ornaments of the Singing Voice on Koto (Japanese Harp) Music,''
IEICE Technical Report, SP96-110, pp.17--22, Feb. 1997.
{ koto song, ornaments, acoustical analysis, furi, atari }
                                                        
{SP96-111} T. Hikichi and N. Osaka,
``Morphing of Sounds of the Struck Strings, Plucked Strings, and Elastic Media,''
IEICE Technical Report, SP96-111, pp.23--28, Feb. 1997.
{ sound control, morphing, physical models, struck strings, plucked strings, elastic media }
                                                        
{SP96-112} T. Oohashi, E. Nishina, Y. Fuwamoto, N. Kawai and M. Morimoto,
``Hypersonic Effect,''
IEICE Technical Report, SP96-112, pp.29--34, Feb. 1997.
{ Hypersonic Effect, High frequency, α-EEG, Physio-psychological Evaluation }
                                                        
{SP96-113} A. Takaoka,
``Atonal "Modulation" Based on the Diatonic Set,''
IEICE Technical Report, SP96-113, pp.1--6, Feb. 1997.
{}
                                                        
{SP96-114} N. Osaka,
``On a submitted music piece -- Sound morphing --,''
IEICE Technical Report, SP96-114, pp.7--12, Feb. 1997.
{ Morphing, computer music, pitch shift, timbre interpolation, sinusoidal model }
                                                        
{SP96-115} 
IEICE Technical Report, SP96-115, pp.13--14, Feb. 1997.
{}
                                                        
{SP96-116} T. Kinoshita, H. Muraoka and H. Tanaka,
``Note recognition using the statistical information about note transition,''
IEICE Technical Report, SP96-116, pp.15--20, Feb. 1997.
{ Music scene analysis, Note transition, Auditory scene analysis, Probabilistic information integration }
                                                        
{SP96-117} K. Kashino and H. Murase,
``Sound Source Identification Using Adaptive Template Mixtures -- Formulation and Application to Music Stream Segregation --,''
IEICE Technical Report, SP96-117, pp.21--26, Feb. 1997.
{ auditory scene analysis, sound source identification, sound source separation, music scene analysis, automatic music transcription, matched filter }
                                                        
{SP96-118} T. Nakatani, K. Kashino and H. Okuno,
``Sound Stream Segregation from a Mixture of Speech and Background Music,''
IEICE Technical Report, SP96-118, pp.27--34, Feb. 1997.
{ auditory scene analysis, sound stream segregation, ontology, speech, music, multi-agent system }
                                                        
{SP96-119} W. Zhu and H. Kasuya,
``Study of Perceptual Contributions of Static and Dynamic Features of Vocal Tract Characteristics to Speaker Individuality,''
IEICE Technical Report, SP96-119, pp.35--42, Feb. 1997.
{ Speaker individuality, Formant trajectories, ARX model }
                                                        
{SP96-120} C. Yang and H. Kasuya,
``Invariance and Individuality of the Vowel: Evidence from Articulatory and Acoustic Observations,''
IEICE Technical Report, SP96-120, pp.43--48, Feb. 1997.
{ MR image, vocal tract shape, formant frequencies, invariance, individuality }
                                                        
{SP96-121} T. Abe, T. Kobayashi and S. Imai,
``The IF Spectrogram: An Approach for Time-Frequency Representation of Speech,''
IEICE Technical Report, SP96-121, pp.49--54, Feb. 1997.
{ time-frequency representation, instantaneous frequency, time-warping }
                                                        
{SP96-122} J. Inoue and T. Tsumura,
``On the training effects expressed with reaction times in psychoacoustic threshold measurements,''
IEICE Technical Report, SP96-122, pp.1--8, March 1997.
{ Reaction time, Constant method, UDTR method, Frequency discrimination, Training effect }
                                                        
{SP96-123} M. Unoki and M. Akagi,
``An Extraction Method of the AM complex tone from Noise-added AM complex tone,''
IEICE Technical Report, SP96-123, pp.9--16, March 1997.
{ Auditory Scene Analysis, Segregation, Kalman filter, Spline Interpolation }
                                                        
{SP96-124} Y. Tsuji, M. Hoshi and T. Ohmori,
``Local patterns of a melody and its applications to retrieval by sensitivity words,''
IEICE Technical Report, SP96-124, pp.17--24, March 1997.
{ song database, sensitivity word, local melody pattern, multivariate data analysis }
                                                        
{SP96-125} N. Aoki and T. Ifukube,
``On the validity of 1/f jitter and 1/f shimmer for synthesizing vowels,''
IEICE Technical Report, SP96-125, pp.25--32, March 1997.
{ naturalness of sustained vowels, jitter, shimmer, 1/f fluctuation, speech synthesis, psychoacoustic experiment }
                                                        
{SP96-126} J. Lu, T. Doi, N. Nakamura and K. Shikano,
``Acoustical Characteristics of Vowels of Esophageal Speech,''
IEICE Technical Report, SP96-126, pp.33--40, March 1997.
{ Esophageal speech, Perturbation parameter, Acoustic characteristics, Enhancement }
                                                        
{SP96-127} H. Nishi and Y. Ohira,
``Evaluation of a connectionless speech communication system,''
IEICE Technical Report, SP96-127, pp.1--6, March 1997.
{ internet telephone, speech quality, delay time, packet loss, S/N ratio }
                                                        
{SP96-128} A. Imai, T. Takagi, A. Ando and E. Miyasaka,
``A Study on a Speech-Rate Conversion Method for Synchronizing Speech with Picture,''
IEICE Technical Report, SP96-128, pp.7--14, March 1997.
{ Speech rate conversion, Elderly people, Visual Image, Quasi-Lip-Synchronization }
                                                        
{SP96-129} Y. Shiraki,
``Large deviation theory and Malliavin calculus,''
IEICE Technical Report, SP96-129, pp.15--22, March 1997.
{ stochastic process, large deviation theory, probabilistic differential equation, nonlinear filtering, stochastic calculus of variation, Wiener measure, Malliavin calculus, de Rham cohomology }
                                                        
{SP96-130} K. Tanaka and M. Abe,
``A Text-to-Speech System with Transformation of Spectrum Envelope according to Fundamental Frequency,''
IEICE Technical Report, SP96-130, pp.23--30, March 1997.
{ Text-to-Speech system, fundamental frequency, spectrum envelope, codebook mapping, differential vector }
                                                        
{SP96-131} T. Suzuki and H. Ariizumi,
``Analysis and Synthesis Rules of Emotional Expression,''
IEICE Technical Report, SP96-131, pp.31--38, March 1997.
{ text-to-speech synthesis, emotional expression, classification of emotions, collection of voice data }
                                                        
{SP96-132} Y. Ishikawa,
``Duration Control Method Based on Two Morae Units for Japanese Text-to-Speech Systems,''
IEICE Technical Report, SP96-132, pp.39--44, March 1997.
{ text-to-speech, speech synthesis, prosody, intonation, duration control, timing, rhythm }
                                                        
{SP96-133} T. Ebihara and Y. Ishikawa,
``Pause estimation by network model for text-to-speech synthesis,''
IEICE Technical Report, SP96-133, pp.45--50, March 1997.
{ speech synthesis, pause, prosody, pitch frequency control }
                                                        
{SP96-134} K. Taniguchi, N. Kouno, T. Tokuda and Y. Ikura,
``Speech Recognition with Composition of Non-stationary Noise and Speech,''
IEICE Technical Report, SP96-134, pp.51--57, March 1997.
{ speech recognition, HMM, composition, non-stationary noise }

---
** Computer Music Concert ** 1997.2.20
1. Mirror Stone    -- for Flute, NeXT computer and DSP (ISPW) --
	Naotoshi Osaka (NTT)
	Fl: Ayako Nishikawa
	pp.1

2. Dum veneris      -- for tape -- 
	Akira Takaoka (Columbia University)
	pp.2

3. Oriental Fantasia -- for EAR HARP, computer, and 4-channel tape (ADAT) --
	Kazuo Uehara (Osaka University of Arts)
	pp.2--3

4. SendMail   -- for sax, piano and computer --
	Masahiro Miwa (IAMAS)
   	Sax: Ryo Noda
   	Pf:  Kazue Nakamura
	pp.4

5. The Remains of the Light 2.1.1.   -- for piano and tape --
	Yoshihiro Kanno  (Waseda University)
   	Pf: Noriko Ohtake
	pp.5

6. Integration #1   -- for multimedia interactive system using percussion solo and two computers --
	Osamu Takashiro  (Kunitachi College of Music)
   	Percussion: Mie Saito
	pp.5--6