SP Subject Index 1992

{SP92-1} T. Ikeda and A. Watanabe,
``Single-resonant analysis-synthesis system and its application for hearing aid,''
IEICE Technical Report, SP92-1, pp.1--8, May 1992.

{SP92-2} K. Imai, Y. Ueda and A. Watanabe,
``A study on the effect of hearing compensation based on an assessment of vowel categories,''
IEICE Technical Report, SP92-2, pp.9--16, May 1992.

{SP92-3} K. Kuroiwa and Y. Chisaki,
``Noise level reduction at the receiving stage of speech with consideration of a wave-length constant,''
IEICE Technical Report, SP92-3, pp.17--23, May 1992.

{SP92-4} M. Hashimoto, T. Kitamura, M. Miyatake and M. Iida,
``Point pitch pattern estimation for short sentences using neural network,''
IEICE Technical Report, SP92-4, pp.25--31, May 1992.

{SP92-5} M. Abe and H. Sato,
``Two-stage F0 control model using syllable based units,''
IEICE Technical Report, SP92-5, pp.33--40, May 1992.

{SP92-6} N. Kaiki and Y. Sagisaka,
``F0 control based on local phrase structure,''
IEICE Technical Report, SP92-6, pp.41--46, May 1992.

{SP92-7} Y. Minoda and A. Watanabe,
``Speech quality conversion by the formant analysis-synthesis system,''
IEICE Technical Report, SP92-7, pp.1--8, May 1992.

{SP92-8} Y. Nakamitsu, K. Umezaki and Y. Sonoda,
``Estimations of vocal tract shapes from speech signal,''
IEICE Technical Report, SP92-8, pp.9--16, May 1992.

{SP92-9} S. Nakajima,
``English speech synthesis based on multi-level context-oriented-clustering method,''
IEICE Technical Report, SP92-9, pp.17--25, May 1992.

{SP92-10} A. Kai and S. Nakagawa,
``Consideration on HMM-based continuous speech recognition using word spotting methods,''
IEICE Technical Report, SP92-10, pp.26--32, May 1992.

{SP92-11} M. Katoh and S. Hashimoto,
``The rhythm rules in Japanese based on the energy center of gravity of vowels,''
IEICE Technical Report, SP92-11, pp.33--40, May 1992.

{SP92-12} K. Itoh, T. Hirokawa and H. Sato,
``Speech segment power control for synthesis by rules,''
IEICE Technical Report, SP92-12, pp.41--48, May 1992.

{SP92-13} T. Ashiya and M. Nakagawa,
``The recognition system for the species of birds based on bird call,''
IEICE Technical Report, SP92-13, pp.1--6, June 1992.

{SP92-14} T. Kobayashi, R. Mine and K. Shirai,
``Modeling of non-stationary noise using ergodic HMMs and its application to speech recognition,''
IEICE Technical Report, SP92-14, pp.7--14, June 1992.

{SP92-15} H. Hattori and S. Sagayama,
``Speaker adaptation based on vector field smoothing,''
IEICE Technical Report, SP92-15, pp.15--22, June 1992.

{SP92-16} K. Ohkura, M. Sugiyama and S. Sagayama,
``Speaker adaptation based on transfer vector field smoothing model with continuous mixture density HMMs,''
IEICE Technical Report, SP92-16, pp.23--28, June 1992.

{SP92-17} M. Kohda and M. Katoh,
``A study on word spotting with dynamic programming best-first search,''
IEICE Technical Report, SP92-17, pp.29--36, June 1992.

{SP92-18} M. Kohda and T. Kitamura,
``A study on Viterbi best-first search for isolated word recognition based on discrete HMMs,''
IEICE Technical Report, SP92-18, pp.37--44, June 1992.

{SP92-19} S. Okawa, T. Kobayashi and K. Shirai,
``Phoneme recognition based on mutual information considering duration and connectivity of acoustic events,''
IEICE Technical Report, SP92-19, pp.45--52, June 1992.

{SP92-20} T. Kobayashi, Y. Hamano and K. Shirai,   
``Phoneme recognition using probability ratio between phoneme-group-pair,''
IEICE Technical Report, SP92-20, pp.53--60, June 1992.

{SP92-21} T. Kawahara and S. Doshita,
``Evaluation of continuous and discrete HMM based on pair-wise Bayes classifiers,''
IEICE Technical Report, SP92-21, pp.61--68, June 1992.

{SP92-22} T. Nitta and S. Tanaka,
``A comparison of subword discrimination methods in speaker independent continuous speech recognition,''
IEICE Technical Report, SP92-22, pp.69--76, June 1992.

{SP92-23} M. Tamoto, K. Itou and H. Tanaka,
``A tree-based stochastic phone sequence modeling,''
IEICE Technical Report, SP92-23, pp.77--84, June 1992.

{SP92-24} S. Nakagawa and Y. Ono,
``Estimation of probability density function and a posteriori probability and evaluation by vowel recognition,''
IEICE Technical Report, SP92-24, pp.1--8, June 1992.

{SP92-25} M. Inazumi and H. Hasegawa,
``Connected word recognition by recurrent neural networks,''
IEICE Technical Report, SP92-25, pp.9--16, June 1992.

{SP92-26} A. Biem and S. Katagiri,
``Cepstrum liftering based on minimum classification error,''
IEICE Technical Report, SP92-26, pp.17--24, June 1992.

{SP92-27} Y. Niimi, H. Fujiwara and Y. Kobayashi,
``Speech recognition based on the generalized sequential machine,''
IEICE Technical Report, SP92-27, pp.25--30, June 1992.

{SP92-28} Y. Kato and M. Sugiyama,
``Fuzzy partition models for continuous speech recognition,''
IEICE Technical Report, SP92-28, pp.31--38, June 1992.

{SP92-29} A. Ito and S. Makino,
``Word pre-selection using extended RHA method for continuous speech recognition,''
IEICE Technical Report, SP92-29, pp.39--46, June 1992.

{SP92-30} M. Zhou and S. Nakagawa,
``Spoken digit recognition using stochastic context-free grammar,''
IEICE Technical Report, SP92-30, pp.47--54, June 1992.

{SP92-31} N. Kanedera and T. Funada,
``A context free grammar with prosodic and semantic rules for continuous speech recognition,''
IEICE Technical Report, SP92-31, pp.55--62, June 1992.

{SP92-32} T. Yamada, S. Matsunaga and K. Shikano,
``Japanese dictation system using character source modeling,''
IEICE Technical Report, SP92-32, pp.63--68, June 1992.

{SP92-33} A. Nagai, J. Takami and S. Sagayama,
``The SSS-LR continuous speech recognition system: integrating SSS-derived allophone models and a phoneme-context-dependent LR parser,''
IEICE Technical Report, SP92-33, pp.69--76, June 1992.

{SP92-34} S. Matsunaga, T. Tsuboi, T. Yamada and K. Shikano,
``Continuous speech recognition for medical diagnoses,''
IEICE Technical Report, SP92-34, pp.77--82, June 1992.

{SP92-35} S. Furui,
``The present status of DARPA spoken language processing projects --- report of the fifth workshop in February 1992 ---,''
IEICE Technical Report, SP92-35, pp.1--6, July 1992.

{SP92-36} T. Kawahara,
``Search algorithms for speech recognition --- introducing A* search ---,''
IEICE Technical Report, SP92-36, pp.7--14, July 1992.

{SP92-37} Y. Takebayashi,
``Human-computer dialogue using multimedia understanding and synthesis functions,''
IEICE Technical Report, SP92-37, pp.15--22, July 1992.

{SP92-38} K. Itou,
``Speech dialog system,''
IEICE Technical Report, SP92-38, pp.23--30, July 1992.

{SP92-39} M. Hashimoto, A. Hayashi and H. Seki, 
``Intermodal timing relations and audio-visual speech perception,''
IEICE Technical Report, SP92-39, pp.1--6, July 1992.

{SP92-40} C. Lu, T. Nakai and H. Suzuki,
``An estimation of pressure and flow in a model of the larynx by FVM,''
IEICE Technical Report, SP92-40, pp.7--14, July 1992.

{SP92-41} S. Kajita and F. Itakura,
``Speech processing using subband-autocorrelation analysis,''
IEICE Technical Report, SP92-41, pp.15--22, July 1992.

{SP92-42} N. Kamiyama, N. Miki and N. Nagai,
``Simulation of mechanical vibration of the vocal tract wall forced by the vibration of the vocal cords,''
IEICE Technical Report, SP92-42, pp.23--29, July 1992.

{SP92-43} K. Aikawa, H. Kawahara and Y. Tohkura,
``Dynamic cepstrum reflecting time-frequency masking characteristics and its application to speech recognition,''
IEICE Technical Report, SP92-43, pp.31--38, July 1992.

{SP92-44} H. Shimodaira,
``A statistical model selection criterion for indirect observation models,''
IEICE Technical Report, SP92-44, pp.39--46, July 1992.

{SP92-45} H. Watanabe, J. Murakami and M. Sugiyama,
``Unknown-multiple signal source clustering problem,''
IEICE Technical Report, SP92-45, pp.47--54, July 1992.

{SP92-46} S. Taira,
``Improvement of noise robustness of neural network by generalization,''
IEICE Technical Report, SP92-46, pp.55--62, July 1992.

{SP92-47} T. Kaburagi,
``Observation techniques of articulatory motions and models of speech production,''
IEICE Technical Report, SP92-47, pp.1--8, Sept. 1992.

{SP92-48} T. Nomura,
``Basic techniques of synthesis-by-rule and their application in communication,''
IEICE Technical Report, SP92-48, pp.9--15, Sept. 1992.
{ speech synthesis, synthesis-by-rule, text to speech conversion }

{SP92-49} J. Takami,
``The hidden Markov model on speech recognition and approaches to improve the model accuracy,''
IEICE Technical Report, SP92-49, pp.17--24, Sept. 1992.
{ speech recognition, hidden Markov model, phoneme-context-dependent HMM, hidden Markov network, successive state splitting }

{SP92-50} S. Takahashi, Y. Minami, T. Matsuoka and K. Shikano,
``Phoneme HMMs using frame correlations,''
IEICE Technical Report, SP92-50, pp.1--8, Sept. 1992.

{SP92-51} N. Iwahashi and Y. Sagisaka,
``Speech segment network approach for an optimal synthesis unit set,''
IEICE Technical Report, SP92-51, pp.9--16, Sept. 1992.

{SP92-52} T. Kosaka, J. Takami and S. Sagayama,
``Speaker-independent speech recognition and speaker adaptation using speaker-mixture SSS algorithm,''
IEICE Technical Report, SP92-52, pp.17--24, Sept. 1992.

{SP92-53} K. Kita, T. Morimoto, K. Ohkura and S. Sagayama,
``Evaluation of the HMM-LR speech recognition system against continuous sentential utterances with the aid of stochastic linguistic knowledge,''
IEICE Technical Report, SP92-53, pp.25--32, Sept. 1992.

{SP92-54} Y. Nejime, H. Ikeda and Y. Kumagai,
``Development of a portable DSP system for speech processing to aid seniors' hearing,''
IEICE Technical Report, SP92-54, pp.33--40, Sept. 1992.

{SP92-55} A. Nakamura, N. Seiyama, R. Ikezawa, T. Takagi and E. Miyasaka,
``Real time voice speed converting system without impairment in quality,''
IEICE Technical Report, SP92-55, pp.41--48, Sept. 1992.

{SP92-56} R. Ikezawa, A. Nakamura, N. Seiyama, T. Takagi and E. Miyasaka,
``A method of absorbing temporal enlargement of speech lengths in the voice speed converting system for the elderly,''
IEICE Technical Report, SP92-56, pp.49--56, Sept. 1992.

{SP92-57} Y. Hirata, T. Ifukube, T. Izumi, M. Sakajiri and J. Matsushima,
``A speech signal processor using a DSP for an extra-cochlear prosthetic device for the profoundly hearing impaired,''
IEICE Technical Report, SP92-57, pp.57--62, Sept. 1992.

{SP92-58} T. Takeda and H. Oishi,
``Development of upper extremity orthosis by pneumatic control and its applications,''
IEICE Technical Report, SP92-58, pp.63--70, Sept. 1992.

{SP92-59} C. Wada, T. Ifukube and T. Izumi,
``A basic study of a tactile vocoder using touch sensations imagined by quality of materials,''
IEICE Technical Report, SP92-59, pp.71--78, Sept. 1992.

{SP92-60} F. Hosoe, S. Ino, S. Shimizu, T. Izumi, M. Takahashi and T. Ifukube, 
``A study on discrimination characteristics of texture for a tele-existence of tactile sensation,''
IEICE Technical Report, SP92-60, pp.79--86, Sept. 1992.

{SP92-61} H. Sagawa, H. Sakou and M. Abe, 
``Sign language translation system using continuous DP matching,''
IEICE Technical Report, SP92-61, pp.1--8, Sept. 1992.

{SP92-62} H. Kasamatu, K. Kamata and T. Tanokami,
``Study on the characterization ability of mouth shape in signing with speech,''
IEICE Technical Report, SP92-62, pp.9--16, Sept. 1992.

{SP92-63} Y. Nagashima, M. Terauchi, G. Ohwa and H. Nagashima,
``Investigation about basis design by Japanese sign language conversion concept term dictionaries,''
IEICE Technical Report, SP92-63, pp.17--22, Sept. 1992.

{SP92-64} Y. Nagashima, T. Onodera, H. Nagashima, M. Terauchi and G. Ohwa,  
``A study of recognition method of Japanese finger spelling,''
IEICE Technical Report, SP92-64, pp.23--30, Sept. 1992.

{SP92-65} K. Mitobe, M. Takahashi, M. Kato, N. Nagai, T. Izumi, J. Matsushima and Y. Yamamoto,
``Consideration of the indication movements to optic and acoustic stimulation and construction of spatial perception,''
IEICE Technical Report, SP92-65, pp.31--38, Sept. 1992.

{SP92-66} H. Masuda, H. Ohtani, S. Nakaoka, H. Kozuka and K. Miyazaki,
``MANU (Motif architect for nonexpert users): GUI builder for end users,''
IEICE Technical Report, SP92-66, pp.39--46, Sept. 1992.

{SP92-67} H. Nishiyama, K. Aiba, T. Yokoyama and Y. Matsushita,
``An interface for image retrieval depending on sketch,''
IEICE Technical Report, SP92-67, pp.47--54, Sept. 1992.

{SP92-68} M. Oda,
``An image retrieval system that uses human cognitive properties,''
IEICE Technical Report, SP92-68, pp.55--62, Sept. 1992.

{SP92-69} S. Sumino, T. Miyataka and H. Ueda,
``A user adaptation model for supporting motion picture authoring,''
IEICE Technical Report, SP92-69, pp.63--70, Sept. 1992.

{SP92-70} K. Okada, M. Uyama and H. Ueda, 
``User adaptation by a task intention identification,''
IEICE Technical Report, SP92-70, pp.71--78, Sept. 1992.

{SP92-71} T. Mori, M. Kitano and T. Kyoden,
``A module providing context sensitive guidance with application program,''
IEICE Technical Report, SP92-71, pp.79--86, Sept. 1992.

{SP92-72} T. Iwasawa, T. Yakoh and Y. Anzai, 
``MACS: response mechanism using multicast in a radio network,''
IEICE Technical Report, SP92-72, pp.87--94, Sept. 1992.

{SP92-73} K. Sumida and T. Kitamura,
``Speaker identification using feature-map of speech and prediction error,''
IEICE Technical Report, SP92-73, pp.1--6, Oct. 1992.
{ speaker identification, text-independent, vector quantization, feature-map, prediction network }

{SP92-74} J. Murakami, M. Sugiyama and H. Watanabe,
``Study of unknown-multiple signal source clustering problem using ergodic HMM,''
IEICE Technical Report, SP92-74, pp.7--14, Oct. 1992.
{ hidden Markov model, multiple signal source, speaker clustering, ergodic HMM }

{SP92-75} Y. Miyazawa, K. Ohkura and S. Sagayama,
``Unsupervised speaker adaptation using all-phoneme ergodic HMM,''
IEICE Technical Report, SP92-75, pp.15--20, Oct. 1992.
{ unsupervised speaker adaptation,  phoneme bigram, maximum likelihood method, all-phoneme ergodic HMM, transfer vector field smoothing }

{SP92-76} H. Singer and S. Sagayama,
``Matrix parser and its application to HMM-based speech recognition,''
IEICE Technical Report, SP92-76, pp.21--26, Oct. 1992.
{ speech recognition, hidden Markov model, matrix parser, CYK parser }

{SP92-77} H. Ohmura,
``Pitch extraction and segmentation by voice fundamental wave filtering,''
IEICE Technical Report, SP92-77, pp.27--33, Oct. 1992.
{ voice fundamental, filtering, pitch extraction, pitch filter, segmentation, fine pitch contour }

{SP92-78} H. Wang,
``Acoustic signal processing of reverberant signals,''
IEICE Technical Report, SP92-78, pp.35--42, Oct. 1992.
{ reverberation, dereverberation, acoustic inverse filter, microphone array }

{SP92-79} T. Hirahara,
``Auditory system as a speech analyzer,''
IEICE Technical Report, SP92-79, pp.43--50, Oct. 1992.
{ auditory system, speech analysis }

{SP92-80} K. Funahashi,
``On the recurrent neural networks,''
IEICE Technical Report, SP92-80, pp.51--58, Oct. 1992.

{SP92-81} T. Irino,
``Speech signal processing using wavelet transform,''
IEICE Technical Report, SP92-81, pp.59--66, Oct. 1992.
{ wavelet transform, STFT, QMF, auditory model, signal analysis/synthesis, signal reconstruction }

{SP92-82} R. Kondo and F. Itakura,
``Segment quantization using multiple speaker codebook,''
IEICE Technical Report, SP92-82, pp.1--8, Oct. 1992.
{ very low bit rate, segment, multiple speaker, PNN algorithm, spectral distortion, DP matching }

{SP92-83} R. Wang, Y. Wu, M. Tsukamoto, K. Kamei and K. Inoue,
``Signal processing for Chinese conversation CAI system --- methods for varying the four intonations in the Chinese language ---,''
IEICE Technical Report, SP92-83, pp.9--16, Oct. 1992.
{ conversation CAI, tone language, learning of tones, mixing a sound, tonal change, formant frequency }

{SP92-84} T. Yamamoto, T. Funada and T. Makino,
``Compression of LSP parameters using layered neural networks,''
IEICE Technical Report, SP92-84, pp.17--24, Oct. 1992.
{ layered neural networks, LSP parameter, dimensional compression, vector quantization }

{SP92-85} M. Fukumoto, H. Kubota and S. Tsuji,
``New application of block adaptive algorithm using conjugate-gradient method in noisy environment,''
IEICE Technical Report, SP92-85, pp.25--31, Oct. 1992.
{ block adaptive algorithm, conjugate-gradient method, noise, number of repetition }

{SP92-86} H. Sawatari, H. Wang and F. Itakura,
``Recovering of reverberated speech by sub-band deconvolution processing,''
IEICE Technical Report, SP92-86, pp.33--40, Oct. 1992.
{ recovering of reverberated speech, blind deconvolution, sub-band processing, inverse filter estimation, inverse filter length controlling, sub-band selection }

{SP92-87} Y. Oguro, K. Tsuchiya and T. Ishidate,
``A method of pruned FFT and its applications,''
IEICE Technical Report, SP92-87, pp.41--48, Oct. 1992.
{ fast Fourier transform, decimation-in-time, decimation-in-frequency, pruning }

{SP92-88} T. Yoshikawa and H. Maruyama,
``Output roundoff error of state-space digital filters using digital signal processors,''
IEICE Technical Report, SP92-88, pp.49--54, Oct. 1992.
{ digital signal processors, state-space digital filters, output roundoff error, word length }

{SP92-89} T. Yoshikawa and M. Yamazaki,
``Analysis of the roundoff noise for cascade form state-space digital filters,''
IEICE Technical Report, SP92-89, pp.55--62, Oct. 1992.
{ cascade form, roundoff noise, optimum scaling, state-space model }

{SP92-90} H. Honma, H. Yamazaki and M. Sagawa,
``A real filter bank with alias-free characteristic at equally spaced frequency points,''
IEICE Technical Report, SP92-90, pp.63--68, Oct. 1992.
{ filter bank, parallel processing, polyphase structure }

{SP92-91} T. Jitsuhiro and F. Itakura,
``Improvement of speech quality on 8kb/s LD-CELP speech coding system,''
IEICE Technical Report, SP92-91, pp.69--76, Oct. 1992.
{ low delay, LD-CELP, 8kb/s, high frequency envelope, sinusoidal excitation, ΔLSP coefficients }

{SP92-92} K. Satoh, H. Nagabuti and N. Kitawaki,
``Artificial conversational speech signal generation method for measuring characteristics of devices operated by speech signals,''
IEICE Technical Report, SP92-92, pp.1--8, Nov. 1992.
{ speech quality assessment, artificial speech signal, conversational speech,  speech detector }

{SP92-93} N. Asanuma and H. Nagabuchi,
``A method of producing reference signals for evaluation of coded speech quality,''
IEICE Technical Report, SP92-93, pp.9--16, Nov. 1992.
{ speech quality, reference signal, low-bit-rate, modulated noise reference }

{SP92-94} T. Yamazaki and H. Irii,
``Proposal of objective assessment method for telecommunication speech quality using pattern recognition technique,''
IEICE Technical Report, SP92-94, pp.17--24, Nov. 1992.
{ speech quality, quality estimation, code error, pattern recognition, clustering, cepstrum }

{SP92-95} S. Taira,
``Recognition of rhythmic temporal patterns in the use of neural network model,''
IEICE Technical Report, SP92-95, pp.1--8, Dec. 1992.
{ audition, neural network }

{SP92-96} F. Martin, K. Shikano, Y. Minami and Y. Okabe,
``Recognition of noisy speech by composition of hidden Markov models,''
IEICE Technical Report, SP92-96, pp.9--16, Dec. 1992.
{ noisy speech, HMM composition, LPC cepstrum }

{SP92-97} H. Imose and H. Matsumoto,
``Improvement of a continuous density HMM on noise robustness by frequency weighting technique,''
IEICE Technical Report, SP92-97, pp.17--24, Dec. 1992.
{ speech recognition, frequency-weighting, noise robustness, hidden Markov model }

{SP92-98} M. Katoh and M. Kohda, 
``A study on cost estimates for word spotting with dynamic programming best-first search,''
IEICE Technical Report, SP92-98, pp.25--32, Dec. 1992.
{ speech recognition, DTW, word spotting, graph search, best-first search, A* search }

{SP92-99} M. Kohda and S. Monzen,
``A study on Viterbi best-first search for phrase recognition,''
IEICE Technical Report, SP92-99, pp.33--40, Dec. 1992.
{ speech recognition, hidden Markov model, Viterbi algorithm, graph search, best-first search, A* search }

{SP92-100} T. Kawahara, S. Matsumoto, N. Nukaga and S. Doshita, 
``Evaluation of A* search using word-pair constraint as heuristics,''
IEICE Technical Report, SP92-100, pp.41--47, Dec. 1992.
{ continuous speech recognition, HMM, LR parser, A* search, word-pair grammar }

{SP92-101} T. Koshikawa and S. Nakagawa,
``Speaker adaptation for continuous parameter HMM by maximum a posteriori probability estimation,''
IEICE Technical Report, SP92-101, pp.49--56, Dec. 1992.
{ continuous parameter HMM, speaker adaptation, MAP estimation, covariance matrix training }

{SP92-102} E. Willems, T. Kosaka, J. Takami and S. Sagayama,
``A dynamic approach to speaker adaptation of hidden Markov networks for speech recognition,''
IEICE Technical Report, SP92-102, pp.57--64, Dec. 1992.
{ speech recognition, speaker adaptation, hidden Markov network }

{SP92-103} K. Aikawa, H. Kawahara and Y. Tohkura,
``Speech recognition using dynamic cepstrum with continuous mixture HMMs,''
IEICE Technical Report, SP92-103, pp.1--8, Dec. 1992.
{ auditory model, speech recognition, hidden Markov model, dynamic feature }

{SP92-104} T. Yoshimura, S. Hayamizu and K. Tanaka,
``Identification of word accent patterns by concatenation of mora hidden Markov models,''
IEICE Technical Report, SP92-104, pp.9--14, Dec. 1992.
{ mora HMMs, fundamental frequency, accent patterns }

{SP92-105} S. Nakagawa and H. Suzuki,
``VQ-distortion based HMM and speaker-independent spoken digit recognition,''
IEICE Technical Report, SP92-105, pp.15--22, Dec. 1992.
{ HMM, VQ-distortion based HMM, speaker-independent, speech recognition, spoken digit recognition }

{SP92-106} S. Okawa, T. Kobayashi and K. Shirai,
``A study on word spotting using phonemic likelihood based on mutual information,''
IEICE Technical Report, SP92-106, pp.23--30, Dec. 1992.
{ continuous speech recognition, word spotting, DP matching, island driven search, information criteria }

{SP92-107} K. Fukuzawa, Y. Kato and M. Sugiyama,
``Speaker-independent continuous speech recognition using FPM-LRs,''
IEICE Technical Report, SP92-107, pp.31--38, Dec. 1992.
{ continuous speech recognition, speaker independent, neural network, FPM, FPM-LR }

{SP92-108} Y. Minami, T. Yamada and K. Shikano,
``Large vocabulary continuous speech recognition algorithm for telephone directory assistance task,''
IEICE Technical Report, SP92-108, pp.39--46, Dec. 1992.
{ HMM, continuous speech recognition, large vocabulary speech recognition, telephone directory assistance, LR parser }

{SP92-109} S. Seto, Y. Nagata, H. Shinchi, H. Hashimoto and Y. Takebayashi,
``Development of spontaneous speech dialogue system TOSBURG II,''
IEICE Technical Report, SP92-109, pp.47--54, Dec. 1992.
{ spontaneous interaction, speech dialogue, multimodal, adaptive filter }

{SP92-110} T. Hiramatsu, H. Yoshida, Y. Nomura, Y. Yamashita and R. Mizoguchi, 
``Using knowledge of topics for understanding of spoken dialog,''
IEICE Technical Report, SP92-110, pp.55--62, Dec. 1992.
{ utterance prediction, topic, dialog model, spoken dialog, speech understanding }

{SP92-111} K. Takagi, N. Houra and S. Itahashi,
``Characteristics of various segment durations in simulated dialog speech,''
IEICE Technical Report, SP92-111, pp.63--70, Dec. 1992.
{ dialog speech, prosody, pause, speech segment, ASJ Continuous Speech Corpus for Research }

{SP92-112} K. Yoshimoto,
``Recognizing and synthesizing prosodic information by means of typed unification grammar,''
IEICE Technical Report, SP92-112, pp.71--78, Dec. 1992.
{ typed unification grammar, prosody, recognition, synthesis, accent, intonation }

{SP92-113} H. Lucke,
``A method for inferring stochastic context-free grammars using the theory of Bayesian causal trees,''
IEICE Technical Report, SP92-113, pp.79--86, Dec. 1992.
{ language modeling, grammar inference, Bayesian inference }

{SP92-114} K. Tanaka and S. Itahashi,
``Discrimination among Japanese dialects using fundamental frequency pattern,''
IEICE Technical Report, SP92-114, pp.87--94, Dec. 1992.
{ fundamental frequency, AMDF method, DP procedure }

{SP92-115} T. Komori and S. Katagiri,
``An optimal spotter design method to minimize false alarms and mis-detection,''
IEICE Technical Report, SP92-115, pp.1--6, Jan. 1993.
{ speech recognition, word spotting, generalized probabilistic descent method, dynamic time warping }

{SP92-116} T. Munetsugu, T. Kawahara, M. Araki and S. Doshita,
``Keyword spotting for spontaneous speech understanding,''
IEICE Technical Report, SP92-116, pp.7--14, Jan. 1993.
{ spontaneous speech understanding, HMM, keyword spotting, phone-conjunct model, word-pair model }

{SP92-117} I. Donescu, Y. Kato and M. Sugiyama,
``Speaker-independent features extracted from a neural network and their evaluation in speech recognition,''
IEICE Technical Report, SP92-117, pp.15--22, Jan. 1993.
{ feature extraction, speaker-independent, speech recognition, neural networks, HMMs, speaker normalization }

{SP92-118} T. Kawabata,
``Speaker-independent speech recognition using nonlinear predictor codebooks,''
IEICE Technical Report, SP92-118, pp.23--30, Jan. 1993.
{ speech recognition, phoneme recognition, neural network }

{SP92-119} T. Yamada, S. Matsunaga and K. Shikano,
``Japanese character and part-of-speech source modeling,''
IEICE Technical Report, SP92-119, pp.31--36, Jan. 1993.
{ speech recognition, language processing, stochastic language model, dictation, part of speech }

{SP92-120} T. Nitta, Y. Masai, J. Iwasaki, S. Tanaka, H. Kamio, Y. Hara and H. Matsuura,
``Applying a spontaneous speech input device and a designation device to multimodal dialogue systems,''
IEICE Technical Report, SP92-120, pp.37--42, Jan. 1993.
{ user interface, multimodal dialogue, speech recognition, spontaneous speech, speech synthesis }

{SP92-121} Y. Moriya, T. Abeno, M. Yamamoto and S. Nakagawa,
``A spoken dialog system with prediction of the next user utterance,''
IEICE Technical Report, SP92-121, pp.43--50, Jan. 1993.
{ spoken dialog system, continuous speech recognition, prediction of next user utterance, syntax and word prediction, perplexity }

{SP92-122} A. Nagai, K. Yamaguchi, J. Takami, K. Ohkura, T. Kosaka, K. Fukuzawa, Y. Kato, H. Singer, J. Murakami, M. Sugiyama, S. Sagayama, J. Hosaka, T. Morimoto, K. Kita, H. Hattori, Y. Komori, H. Sawai, T. Hanazawa, S. Nakamura, A. Kai, Y. Minami, T. Kawabata, K. Shikano and A. Kurematsu,
``ATREUS: performance of continuous speech recognition systems at ATR Interpreting Telephony Research Laboratories,''
IEICE Technical Report, SP92-122, pp.51--58, Jan. 1993.
{ continuous speech recognition, HMM, Neural Network, LR parser, speaker adaptation, speaker independent speech recognition }

{SP92-123} K. Yamaguchi and S. Sagayama,
``A trainable search strategy for continuous speech recognition using a neural network,''
IEICE Technical Report, SP92-123, pp.1--7, Jan. 1993.
{ continuous speech recognition, neural network, search method, HMM-LR }

{SP92-124} S. Kon and J. Miwa,
``Phoneme recognition using time pattern of a posteriori probability of phoneme categories,''
IEICE Technical Report, SP92-124, pp.9--16, Jan. 1993.
{ phoneme recognition, a posteriori probability, time pattern, hierarchical network units, non-linear transformation }

{SP92-125} M. Inazumi and H. Hasegawa,
``Continuous spoken digit recognition by recurrent neural networks,''
IEICE Technical Report, SP92-125, pp.17--24, Jan. 1993.
{ recurrent neural networks, speech recognition, selective attention, attractor, degeneration, digit, syllable }

{SP92-126} K. Iso,
``Speech recognition using dynamical model of speech production,''
IEICE Technical Report, SP92-126, pp.25--32, Jan. 1993.
{ speech recognition, speech production, dynamical model, coarticulation, maximum a posteriori }

{SP92-127} K. Takeda and T. Konuma,
``On the usage of garbage HMMs in understanding spontaneous speech,''
IEICE Technical Report, SP92-127, pp.33--40, Jan. 1993.
{ continuous speech recognition, spontaneous speech, HMM }

{SP92-128} T. Matsui and S. Furui,
``Speaker recognition using concatenated phoneme models,''
IEICE Technical Report, SP92-128, pp.41--47, Jan. 1993.
{ speaker recognition, text-prompted, speaker-specific phoneme model, hidden Markov model, likelihood normalization }

{SP92-129} T. Seino and S. Nakagawa,
``Spoken language identification by ergodic hidden Markov model,''
IEICE Technical Report, SP92-129, pp.49--56, Jan. 1993.
{ ergodic HMM, text independent, speaker independent, language identification }

{SP92-130} T. Yamaguchi, M. Nagano and M. Nakagawa,
``A study of fractal properties of vocal sounds --- fractal dimensions of vowels ---,''
IEICE Technical Report, SP92-130, pp.57--64, Jan. 1993.
{ Vocal Sounds, Fractal Dimensions, Lyapunov Exponents }

{SP92-131} H. Uwakoto, Y. Kobayashi and Y. Niimi,
``Acoustic analysis and modeling of emotional expressions in speech,''
IEICE Technical Report, SP92-131, pp.65--72, Jan. 1993.
{ acoustic analysis of emotional expressions, modeling of emotional expressions }

{SP92-132} S. Fujiwara and M. Sugiyama,
``A phoneme labelling workbench using HMMs and expert system techniques,''
IEICE Technical Report, SP92-132, pp.73--80, Jan. 1993.
{ labelling, segmentation, speech database, spectrogram reading knowledge, workbench }

{SP92-133} K. Mano, T. Morita, S. Miki and H. Ohmuro,
``Studies on a half-rate speech codec for mobile telephones,''
IEICE Technical Report, SP92-133, pp.1--8, Feb. 1993.
{ mobile telephone, speech codec, low bitrate, CELP, pitch synchronization, random excitation }

{SP92-134} S. Ono,
``A study on 3.6kb/s speech coding based on structured vector quantization,''
IEICE Technical Report, SP92-134, pp.9--16, Feb. 1993.
{ speech coding, vector-scalar quantization, differential coding, lattice quantization }

{SP92-135} T. Morohashi, T. Shimamura and J. Suzuki,
``Improvement in quality of voice picked up from outer skin of larynx by high frequency emphasis,''
IEICE Technical Report, SP92-135, pp.17--24, Feb. 1993.
{ noisy environment, throat microphone, voice from outer skin of larynx, speech quality, long term spectrum }

{SP92-136} N. Miki, M. Suzuki, K. Takemura and N. Nagai, 
``Speech analysis with MSLP method,''
IEICE Technical Report, SP92-136, pp.25--30, Feb. 1993.
{ speech analysis, short-time analysis, error analysis, formant estimation, Lp approximation }

{SP92-137} J. Dang, K. Honda and H. Suzuki,
``Morphological measurement of the nasal cavity and analysis of nasal resonance characteristics,''
IEICE Technical Report, SP92-137, pp.1--8, March 1993.
{ the nasal cavity, morphological measurement of the nasal cavity, nasal resonance characteristics, analysis of magnetic resonance image, measurement of the phonatory organ, the paranasal cavities }

{SP92-138} A. Usui, S. Satoh, M. Ogata, H. Itoh, H. Nakahara and R. Iijima,
``The influence on lingual articulation and mandibular movement of glossectomy patients wearing special artificial palates; simultaneous observation of lingual articulation, the mandibular movement and nature of speech sound,''
IEICE Technical Report, SP92-138, pp.9--16, March 1993.
{ glossectomy, artificial palate, lingual articulation, mandibular movement, simultaneous }

{SP92-139} K. Mori, M. Haranishi and Y. Sonoda,
``Parametric description of lip articulation,''
IEICE Technical Report, SP92-139, pp.17--24, March 1993.
{ lip articulation, lip frontal contour, lip side view contour, image processing }

{SP92-140} K. Ogata and Y. Sonoda,
``Articulatory measuring system using magnetometer sensors ---studies of sensor arrangements for reducing errors caused by tilting and lateral movements of a magnet---,''
IEICE Technical Report, SP92-140, pp.25--32, March 1993.
{ speech production process, articulatory dynamics, measuring articulatory movements, application of magnetometer sensor }

{SP92-141} T. Kaburagi and M. Honda,
``Prediction of the tongue shape by using the magnetic multiple-point position sensing,''
IEICE Technical Report, SP92-141, pp.33--40, March 1993.
{ articulatory movement of tongue, magnetic position sensing, tongue shape }

{SP92-142} E. Vatikiotis-Bateson, M. Hirayama, Y. Wada and M. Kawato,
``Computational modeling of speech production: concept meets phenomenon,''
IEICE Technical Report, SP92-142, pp.41--47, March 1993.
{ speech production, articulatory targets, neural networks, motor control, physiology }

{SP92-143} H. Yehia and F. Itakura,
``Dynamic vocal tract shape determination from formant frequencies using two-dimensional Fourier analysis,''
IEICE Technical Report, SP92-143, pp.49--56, March 1993.
{ two-dimensional Fourier series, formants, cross-sectional vocal-tract area }

{SP92-144} M. Honda and T. Kaburagi,
``Estimation of articulatory-to-acoustic mapping using artificial neural network model,''
IEICE Technical Report, SP92-144, pp.57--64, March 1993.
{ articulatory-to-acoustic mapping, neural network, articulatory measurements }

{SP92-145} H. Kato, M. Tsuzaki and Y. Sagisaka,
``Acceptability for durational modification of segments in words,''
IEICE Technical Report, SP92-145, pp.65--72, March 1993.
{ speech perception, speech synthesis, perception of duration, control rules for segmental duration }

{SP92-146} Y. Nakajima and T. Sasaki,
``Perceptual transfer of onsets and offsets between crossing glide tone components,''
IEICE Technical Report, SP92-146, pp.73--80, March 1993.
{ auditory perception, perceptual organization, gliding tones, temporal gap }

{SP92-147} J. Kawamata, K. Tamaribuchi, K. Miyoshi and S. Saito,
``Speaker recognition based on time-varying characteristics of speech spectrum,''
IEICE Technical Report, SP92-147, pp.1--7, March 1993.
{ speaker recognition, time-varying characteristic, LPC cepstrum, similarity measure, time-warping procedure, vector quantization }

{SP92-148} S. Kajita and F. Itakura,
``Speech processing using auditory subband-autocorrelation analysis,''
IEICE Technical Report, SP92-148, pp.9--16, March 1993.
{ filterbank, autocorrelation, cochlear filter, bark scale, DP matching, smoothed group delay spectrum }

{SP92-149} Y. Yoshizumi, T. Mekata, Y. Yamada, R. Suzuki, Y. Tanaka, A. Kawano and S. Funasaka,
``Speech enhancement algorithms for hearing impaired subjects---an evaluation for hearing impaired subjects---,''
IEICE Technical Report, SP92-149, pp.17--24, March 1993.
{ consonant enhancement, temporal masking, hearing impaired, hearing aid, intelligibility }

{SP92-150} Y. Nejime, H. Ikeda, T. Imamura, T. Izumi, T. Ifukube and J. Matsushima,
``Evaluation of speech-rate conversion method by hearing-impaired listeners,''
IEICE Technical Report, SP92-150, pp.25--31, March 1993.
{ elders, hearing-impaired, speech-rate conversion, intelligibility }