SP Subject Index 1993

{SP93-1} M. Hanawa and T. Hasegawa,
``On the noisy channel speech coding method --- transition constrained vector quantization ---,''
IEICE Technical Report, SP93-1, pp.1--6, May 1993.
{ vector quantization, Viterbi decoding, error correcting codes, robust coding, hidden Markov models }

{SP93-2} T. Sato and I. Akahori and K. Kobata and H. Sato,
``The fast vector quantization using a k-dimensional tree,''
IEICE Technical Report, SP93-2, pp.7--12, May 1993.
{ vector quantization, k-dimensional tree, fine k-dimensional tree, generalized FBF, ∞ norm }

{SP93-3} N. Urabe and S. Hangai and K. Miyauchi,  
``Speaker identification using multipulse codebooks,''
IEICE Technical Report, SP93-3, pp.13--18, May 1993.
{ speaker identification, multipulse excitation model, vector quantization }

{SP93-4} N. Kunieda, T. Shimamura, J. Suzuki and H. Yashima, 
``Improvement in signal-to-noise ratio by SPAD (speech processing system by use of auto-difference function),''
IEICE Technical Report, SP93-4, pp.19--24, May 1993.
{ SPAD, SPAC, difference function, AMDF, noise reduction, SNR improvement }

{SP93-5} J. Suzuki, E. Sano, T. Shimamura, and H. Yashima,  
``Precise extraction of fundamental frequency by use of harmonic structure of voiced speech,''
IEICE Technical Report, SP93-5, pp.25--30, May 1993.
{ pitch extraction, fundamental frequency, pitch period, harmonics, spectrum }

{SP93-6} T. Takagi, N. Seiyama and E. Miyasaka,
``A method of automatic pitch segmentation for pitch-synchronous speech processing,''
IEICE Technical Report, SP93-6, pp.31--36, May 1993.
{ pitch period, autocorrelation, primary pitch extraction, low-pass filtering, automatic pitch segmentation } 

{SP93-7} M. Abe and H. Sato,
``The acoustic and prosodic characteristics of different speaking styles,''
IEICE Technical Report, SP93-7, pp.37--42, May 1993.
{ speech synthesis-by-rule, Speaking styles, prosody, formant frequency }

{SP93-8} 
``Telephone dialogue collection using a voice activated extension telephone switching system,''
IEICE Technical Report, SP93-8, pp.43--48, May 1993.
{ Continuous speech recognition, Telephone, Dialogue }

{SP93-9} H. Kawai and N. Higuchi and T. Simizu and S. Yamamoto,
``A study of a text-to-speech system based on waveform splicing,''
IEICE Technical Report, SP93-9, pp.49--54, May 1993.
{ speech synthesis, waveform splicing, PSOLA }

{SP93-10} S. Takahashi,
``Speech recognition using HMMs --- the ability of representation and the robustness in recognition ---,''
IEICE Technical Report, SP93-10, pp.1--6, May 1993.
{ speech recognition, discrete HMM, tied-mixture HMM, continuous HMM, bigram-constrained HMM }

{SP93-11} K. Kita,
``LR parser applications to spoken language processing,''
IEICE Technical Report, SP93-11, pp.7--14, May 1993.
{ LR parser, predictive LR parser, gap-filling LR parser, bidirectional LR parser, LR-CRT parser, probabilistic LR parser, HMM-LR speech recognition }

{SP93-12}
``,''
IEICE Technical Report, SP93-12, pp.15--20, May 1993.
{  }

{SP93-13} N. Sugamura,
``Review of telephone network applications of speech recognition,''
IEICE Technical Report, SP93-13, pp.21--26, May 1993.
{ Speech Recognition, Telephone Network, Speech Recognition Applications, Robustness, Dialogue Design }

{SP93-14} C. Griffin and T. Matsui and S. Furui,
``Distance measures for text-independent speaker recognition based on MAR model,''
IEICE Technical Report, SP93-14, pp.1--8, June 1993.
{ speaker identification, speaker verification, distance measure, text-independent, Itakura, Symmetrized Itakura, Log Likelihood ratio, Symmetrized Likelihood Ratio, Multivariate AutoRegression }

{SP93-15} S. Hayakawa and F. Itakura,
``Speaker recognition using individual information in high frequency band,''
IEICE Technical Report, SP93-15, pp.9--16, June 1993.
{ speaker identification, speaker verification, wideband, high frequency spectral region, linear combination }

{SP93-16} K. Aikawa and H. Kawahara and M. Masukata, 
``Effect of backward masking on speech recognition using dynamic cepstra,''
IEICE Technical Report, SP93-16, pp.17--24, June 1993.
{ auditory model, masking, speech recognition, hidden Markov model, dynamical feature }

{SP93-17} T. Murakami and F. Itakura,
``Investigation of effect of frequency warped parameters on speech recognition,''
IEICE Technical Report, SP93-17, pp.25--32, June 1993.
{ speech recognition, characteristic of hearing, frequency warp, cepstrum, DP matching, HMM }

{SP93-18} J. Yi and K. Miki and H. Kamanaka and T. Yazu,
``Analysis of users' characteristics in a speech dialogue system,''
IEICE Technical Report, SP93-18, pp.33--40, June 1993.
{ speech dialogue, users' characteristics, cooperativeness, eagerness, attitude }

{SP93-19} N. Kitaoka and T. Kawahara and S. Doshita,
``Continuous speech recognition based on right-to-left A* search with case structure,''
IEICE Technical Report, SP93-19, pp.41--48, June 1993.
{ continuous speech recognition, case structure, A* search, right-to-left parsing, prediction of words }

{SP93-20} A. Kai and S. Nakagawa,
``Improvements of the Japanese continuous speech recognition system --- SPOJUS-SYNO --- and its evaluation,''
IEICE Technical Report, SP93-20, pp.49--56, June 1993.
{ continuous speech recognition, parsing method, spontaneous speech, unknown word processing }

{SP93-21} M. Yamada and F. Itoh and K. Sakai and Y. Komori and Y. Ohora and M. Fujita,
``Information retrieval from CD-ROMs using speech conversation --- re-evaluation algorithm of unknown word in conversation ---,''
IEICE Technical Report, SP93-21, pp.57--64, June 1993.
{ spoken dialogue system, unknown word processing, syntax/word prediction and generation, full-text retrieval }

{SP93-22} I. Arima and M. Cohen,
``Continuous speech input conference room reservation demo system,''
IEICE Technical Report, SP93-22, pp.65--71, June 1993.
{  }

{SP93-23} F. Yato and T. Takezawa and S. Sagayama and J. Takami and H. Singer and N. Uratani and T. Morimoto and A. Kurematsu,
``International joint experiment toward interpreting telephony,''
IEICE Technical Report, SP93-23, pp.73--80, June 1993.
{ Interpreting Telephony, Speech Translation system, Dialogue system, User Interface }

{SP93-24} E. Geoffrois,
``Estimation of prosodic events from Japanese f0 contours,''
IEICE Technical Report, SP93-24, pp.1--8, June 1993.
{ Prosody, Pitch, Speech Recognition }

{SP93-25} S. Okawa and T. Kobayashi and K. Shirai,
``A study on automatic speech training based on mutual information,''
IEICE Technical Report, SP93-25, pp.9--16, June 1993.
{ phoneme recognition, automatic training, automatic labeling, phoneme duration, information criteria }

{SP93-26} S. Ikeda,
``Construction of phone HMM using model search method,''
IEICE Technical Report, SP93-26, pp.17--24, June 1993.
{ HMM, phone models, model search, AIC }

{SP93-27} E. Tsuboka and J. Nakahashi,
``On the multiplication type FVQ/HMM,''
IEICE Technical Report, SP93-27, pp.25--32, June 1993.
{ Fuzzy vector quantization, HMM, FVQ/HMM, Kullback-Leibler divergence }

{SP93-28} H. Nishi and M. Kitai,
``A new confirmation method using the statistical relationship between likelihood and accuracy,''
IEICE Technical Report, SP93-28, pp.33--40, June 1993.
{ speech dialogue, dialogue system, confirmation, utterance, accuracy }

{SP93-29} H. Matsuura and Y. Masai and J. Iwasaki and T. Nitta and A. Nakayama,
``Speaker independent large vocabulary word recognition based on SMQ/HMM,''
IEICE Technical Report, SP93-29, pp.41--48, June 1993.
{ speaker independent large vocabulary word recognition, equally-counted K-best learning }

{SP93-30} J. Miwa and S. Kon,
``Large-vocabulary spoken word recognition using different pattern of a posteriori probability,''
IEICE Technical Report, SP93-30, pp.49--56, June 1993.
{ speech recognition, large vocabulary, phoneme, top-down processing }

{SP93-31} A. Imamura and M. Kitai,
``A frame synchronous word spotting method using a-posteriori probabilities,''
IEICE Technical Report, SP93-31, pp.57--64, June 1993.
{ Word-Spotting, Telephone speech recognition, HMM, a-posteriori probabilities }

{SP93-32} Y. Itoh and J. Kiyama and R. Oka,
``Partial sentence recognition by sentence spotting,''
IEICE Technical Report, SP93-32, pp.65--72, June 1993.
{ spontaneous speech, sentence spotting, partial sentence spotting, vector continuous DP, frame-wise }

{SP93-33} R. Isotani and S. Sagayama,
``Phrase disambiguation for speech recognition using content-word n-grams and particle n-grams,''
IEICE Technical Report, SP93-33, pp.73--78, June 1993.
{ speech recognition, stochastic language model, content words, particles, disambiguation }

{SP93-34} M. Katoh and M. Kohda,
``A study on Viterbi best-first search for isolated word recognition using discrete HMMs with bounded state durations,''
IEICE Technical Report, SP93-34, pp.79--86, June 1993.
{ speech recognition, hidden Markov model, state duration, graph search, best-first search, A* search }

{SP93-35} T. Mekata and Y. Yamada and R. Suzuki and Y. Tanaka and A. Kawano and S. Funasaka,
``Formant enhancement algorithm for a hearing aid and its evaluation,''
IEICE Technical Report, SP93-35, pp.1--8, July 1993.
{ formant enhancement, auditory filter, hearing aid, intelligibility }

{SP93-36} M. Bacchiani and K. Aikawa,
``Neural network optimization of time-frequency masking filter,''
IEICE Technical Report, SP93-36, pp.9--16, July 1993.
{ speech recognition, optimization, neural networks, hidden Markov model }

{SP93-37} H. Hashimoto and H. Sato and K. Inoue and Y. Yamashita,
``A study on the discrimination for tone bursts with changing frequency,''
IEICE Technical Report, SP93-37, pp.17--24, July 1993.
{ difference limen, relative difference limen, point of subjective equality }

{SP93-38} H. Kawahara and J. C. Williams,
``Analysis of pitch perturbation effects by transformed auditory feedback,''
IEICE Technical Report, SP93-38, pp.25--32, July 1993.
{ Speech Perception, Pitch, Speech Production, Auditory Feedback }

{SP93-39} H. Kawahara and T. Hirai and K. Honda,
``Laryngeal muscular control under transformed auditory feedback with pitch perturbation,''
IEICE Technical Report, SP93-39, pp.33--40, July 1993.
{ Speech Perception, Pitch, Speech Production, Auditory Feedback, EMG }

{SP93-40} H. Okamoto and Y. Kakita,
``Characterization of voice fluctuation by the fractal dimension and the pseudo phase portraits analysis:  normal and pathological cases,''
IEICE Technical Report, SP93-40, pp.41--47, July 1993.
{ normal voice, pathological voice, voice fluctuation, fractal dimension, pseudo phase portrait }

{SP93-41} S. Amano,
``Reaction times to phonemic restoration of an intervocalic consonant of vowel-consonant-vowel syllables,''
IEICE Technical Report, SP93-41, pp.49--56, July 1993.
{ phoneme, perception, phonemic restoration, reaction time }

{SP93-42} K. Kato and K. Kakehi,
``Perception of phonemes and number of talkers for mixed talker utterance,''
IEICE Technical Report, SP93-42, pp.57--63, July 1993.
{ speech perception, number of talkers, phoneme feature, talker source information, perceptual integration }

{SP93-43} T. Kondo and K. Kakehi,
``Effects of simultaneously presented character on auditory phoneme perception in a syllable,''
IEICE Technical Report, SP93-43, pp.65--72, July 1993.
{ audio-visual integration, syllable perception, McGurk effect, phonological coding }

{SP93-44} K. Miseki and M. Akamine and M. Oshikiri,
``3.75 kb/s ADP-CELP speech coder,''
IEICE Technical Report, SP93-44, pp.1--8, July 1993.
{ speech coder, CELP, low bit rate, adaptive density pulse, excitation }

{SP93-45} M. Oshikiri and M. Akamine and K. Miseki,
``LPC coefficients quantization method using hybrid PARCOR-LSP vector quantization,''
IEICE Technical Report, SP93-45, pp.9--14, July 1993.
{ speech coding, vector quantization, LPC coefficients, PARCOR coefficients, LSP coefficients, mobile communication }

{SP93-46} K. Honda,
``Morphology of speech organs and its origin,''
IEICE Technical Report, SP93-46, pp.15--22, July 1993.
{ Speech Production, Evolutionary Morphology, Larynx, Articulators, Brain Function }

{SP93-47} M. Akagi,
``Speech perception and hearing model,''
IEICE Technical Report, SP93-47, pp.23--30, July 1993.
{ Speech Chain, Auditory System, Psychoacoustics, Speech Perception, Hearing Model }

{SP93-48} Y. Yamada,
``Sensory aids for the hearing impaired,''
IEICE Technical Report, SP93-48, pp.31--38, July 1993.
{  }

{SP93-49} K. Saito and T. Kobayashi and K. Shirai,
``Speaker individuality extraction for neural network based speaker adaptation,''
IEICE Technical Report, SP93-49, pp.1--7, Aug. 1993.
{ Speech recognition, Speaker adaptation, Neural networks, HMM }

{SP93-50} J. Takami and S. Sagayama,
``A speaker adaptation technique for Hidden Markov Networks,''
IEICE Technical Report, SP93-50, pp.9--16, Aug. 1993.
{ speech recognition, speaker adaptation, phoneme-context-dependent HMM, hidden Markov network, vector field smoothing, standard speaker pre-selection }

{SP93-51} S. Homma,
``Study of training section for concatenated training of phoneme HMM,''
IEICE Technical Report, SP93-51, pp.17--24, Aug. 1993.
{ restriction of training section, phoneme HMM, concatenated training }

{SP93-52} T. Takara and S. Urasaki,
``Connected spoken word recognition using the discrete-state Markov model for the feature vector,''
IEICE Technical Report, SP93-52, pp.25--32, Aug. 1993.
{ Speech Recognition, Markov Model, State Transition Probability, Discrete State, Continuous State, Vector Quantization }

{SP93-53} Y. Ohsaka and S. Makino and T. Sone,
``A spoken word recognition system taking account of speaking rate of input speech,''
IEICE Technical Report, SP93-53, pp.33--40, Aug. 1993.
{ spoken word recognition, speaking rate, duration of phoneme, gradient of phoneme similarity vector }

{SP93-54} T. Otsuki and A. Ito and S. Makino and T. Otomo,
``The performance evaluation on sentence recognition system using a finite state automaton --- the relationship between word recognition score and sentence recognition score ---,''
IEICE Technical Report, SP93-54, pp.41--48, Aug. 1993.
{ sentence recognition, word recognition, performance prediction, finite state automaton }

{SP93-55} H. Sakoe and Y. Katayama and K. Mitsunaga,
``Efficiency improvements of CYK parsing algorithm for continuous speech recognition,''
IEICE Technical Report, SP93-55, pp.49--55, Aug. 1993.
{ Continuous speech recognition, Syntactic parsing, CYK method, Beam search, Beam driven parsing }

{SP93-56} H. Yu and Y. H. Oh and Y. Yamashita and R. Mizoguchi,
``Expert system for continuous speech recognition with non-uniform recognition unit,''
IEICE Technical Report, SP93-56, pp.57--??, Aug. 1993.
{ continuous speech recognition, expert system, non-uniform unit, recognition unit, redundancy of speech }

{SP93-57} S. Nakagawa and T. Seino,
``Spoken language identification by ergodic HMM state sequence,''
IEICE Technical Report, SP93-57, pp.1--8, Aug. 1993.
{ language identification, ergodic HMM, phonotactics, optimal state sequence }

{SP93-58} K. Aizawa and C. Furuichi and S. Imai,
``Automatic phoneme segmentation and broad classification of phonetically balanced English speech sentences,''
IEICE Technical Report, SP93-58, pp.9--14, Aug. 1993.
{ reading-rate English speech utterance, phonemic segmentation, broad category labeling, unbiased estimator of log spectrum }

{SP93-59} M. Tanaka and Y. Nomura and Y. Yamashita and R. Mizoguchi,
``Automatic generation of speech synthesis rules for accent components based on decision tree,''
IEICE Technical Report, SP93-59, pp.15--22, Aug. 1993.
{ speech synthesis, decision tree, prosody control, accent component, automatic rule generation }

{SP93-60} M. Katoh and M. Komura,
``The rhythm rules in Japanese based on the CEGV --- conclusion and problems ---,''
IEICE Technical Report, SP93-60, pp.23--30, Aug. 1993.
{ Speech synthesis by rule, Rhythm, Duration, Isochrony, Center of Energy gravity of vowels (CEGV), DG }

{SP93-61} Y. Yoshida and M. Abe,
``Reconstruction of wideband speech from narrowband speech by codebook mapping,''
IEICE Technical Report, SP93-61, pp.31--38, Aug. 1993.
{ speech enhancement, code book, waveform, LPC analysis/synthesis }

{SP93-62} K. Tokuda and H. Matsumura and T. Kobayashi and S. Imai,
``Speech coding system based on adaptive mel-cepstral analysis and its evaluation,''
IEICE Technical Report, SP93-62, pp.39--47, Aug. 1993.
{ mel-cepstrum, speech coding, adaptive mel-cepstral analysis, characteristics of auditory sensation }

{SP93-63} S. Imaizumi and H. Abdoerrachman and H. Hirose and S. Niimi and Y. Shimura and H. Saita,
``Acoustic evaluation of vocal controllability --- characteristics of slow fluctuations ---,''
IEICE Technical Report, SP93-63, pp.47--52, Aug. 1993.
{ Vocal variability, Vocal controllability, Dysphonia, Vibrato, Tremor }

{SP93-65} K. Ando and T. Kitamura,
``Synthesis speech of mel-cepstrum by using dynamic feature of speech,''
IEICE Technical Report, SP93-65, pp.9--16, Oct. 1993.
{ speech synthesis, dynamic feature, dynamic mel-cepstrum, two-dimensional mel-cepstrum, masking }

{SP93-66} H. Sekiya and M. Kohata and T. Takagi,
``A coding method for time variant patterns of speech signals with recurrent neural networks,''
IEICE Technical Report, SP93-66, pp.17--23, Oct. 1993.
{ Very low bit rate coding, Recurrent neural network, Segment quantization, Time variant pattern }

{SP93-67} S. Imaizumi and H. Saita and H. Abdoerrachman and H. Hirose and S. Niimi and Y. Shimura,
``Acoustic evaluation of vocal controllability --- characteristics of vocal registers and vibrato ---,''
IEICE Technical Report, SP93-67, pp.25--29, Oct. 1993.
{ Vocal controllability, Vibrato, Register, Chest Register, Mid register }

{SP93-68} S. Kumada and F. Itakura,
``Vector quantization of transform coefficients with multi-codebook in audio transform coding,''
IEICE Technical Report, SP93-68, pp.31--38, Oct. 1993.
{ audio adaptive transform coding, vector quantization, multi-codebook, bit allocation }

{SP93-69} S. Hotani and F. Itakura,
``Tree searched multi-stage vector quantization of LSP parameters,''
IEICE Technical Report, SP93-69, pp.39--46, Oct. 1993.
{ Low Bit Rate Speech Coding, Tree search, Multi-Stage VQ, LSP parameter, Computational Complexity }

{SP93-70} T. Sasaki and T. Kitamura and A. Iwata,
``Speaker-independent 212 word recognition methods using two-dimensional mel-cepstrum,''
IEICE Technical Report, SP93-70, pp.47--54, Oct. 1993.
{ word recognition, dynamic spectral features of speech, neural network, CombNET-II }

{SP93-71} S. Hayakawa and F. Itakura,
``Speaker recognition using information in the higher frequency band --- analysis by selective linear prediction ---,''
IEICE Technical Report, SP93-71, pp.55--61, Oct. 1993.
{ speaker recognition, selective LP, wideband, high frequency spectral region, time averaged speech spectrum }

{SP93-72} H. Matsumoto and H. Oda and T. Furukawa and Y. Sato,
``A proposal of new algorithm for blind deconvolution,''
IEICE Technical Report, SP93-72, pp.63--72, Oct. 1993.
{ Adaptive Filter, Blind Deconvolution, Convergence Rate, Waveform Distortion }

{SP93-73} H. Minami and M. Tanimoto,
``Orthogonal and linear-phase subband filters,''
IEICE Technical Report, SP93-73, pp.73--78, Oct. 1993.
{ picture coding, subband coding, subband filters, orthogonality, linear-phase, coding gain }

{SP93-74} T. Yamaguchi and M. Nakagawa,
``Fractal property of vocal sounds and its evaluation method,''
IEICE Technical Report, SP93-74, pp.79--86, Oct. 1993.
{ Vocal Sounds, Fractal Dimensions, Kolmogorov Dimension, Moment Exponent }

{SP93-75} Y. Lin and T. Chiang and K. Su,
``An integrated character-synchronous score function used in a Chinese phonetic typewriter,''
IEICE Technical Report, SP93-75, pp.1--6, Nov. 1993.
{ Phonetic Typewriter, Dictation Machine, Speech Recognition, Language Model }

{SP93-76} N. Ward,
``On the role of syntax in speech understanding,''
IEICE Technical Report, SP93-76, pp.7--12, Nov. 1993.
{ speech understanding, syntax, semantics, parsing, parallel processing, grammatical constructions, numeric, evidential, integrated, feedback }

{SP93-77} H. Wang and C. Hwu and K. Chu,
``A Mandarin speech recognition system based on two-dimensional mel-cepstrum and language models,''
IEICE Technical Report, SP93-77, pp.13--18, Nov. 1993.
{ Mandarin speech, two-dimensional mel-cepstrum, speech recognition, language models }

{SP93-78} Y. Kobayashi and Y. Niimi,
``Factors having influence on user's utterances in spoken dialog,''
IEICE Technical Report, SP93-78, pp.19--22, Nov. 1993.
{  }

{SP93-79} S. Chang and V. Hong and S. Chen,
``Segment-based speech recognition for continuous Mandarin speech,''
IEICE Technical Report, SP93-79, pp.23--28, Nov. 1993.
{  }

{SP93-80} S. Kajita and F. Itakura,
``Speech recognition using subband-autocorrelation spectrum,''
IEICE Technical Report, SP93-80, pp.29--34, Nov. 1993.
{ recognition, Subband-Autocorrelation Spectrum }

{SP93-81} A. Biem and S. Katagiri,
``Designing filter bank by the discriminative feature extraction method,''
IEICE Technical Report, SP93-81, pp.35--40, Nov. 1993.
{  }

{SP93-82} S. Kitazawa and D. Uemura,
``Detection of voiceless stops using features in speech waveform,''
IEICE Technical Report, SP93-82, pp.41--46, Nov. 1993.
{ waveform parameter, feature explosion, discriminant analysis }

{SP93-83} M. Kobayashi and M. Sakamoto,
``Wavelet analysis of speech events,''
IEICE Technical Report, SP93-83, pp.47--52, Nov. 1993.
{ wavelets, speech events, stop consonants, bursts, fricatives, formant frequency shift }

{SP93-84} N. Miki and K. Takemura and N. Nagai,
``Spectrum analysis for vowels as non-stationary production,''
IEICE Technical Report, SP93-84, pp.53--58, Nov. 1993.
{  }

{SP93-85} 
``Neuromuscular specification of the feature 'tensity' with reference to English and Korean,''
IEICE Technical Report, SP93-85, pp.59--64, Nov. 1993.
{ tense-lax distinction, EMG }

{SP93-86} T. Morohashi and T. Shimamura and J. Suzuki,
``Improvement in quality of voice picked up from outer skin of larynx --- effects of high frequency emphasis ---,''
IEICE Technical Report, SP93-86, pp.65--70, Nov. 1993.
{ throat microphone, voice picked up from outer skin of larynx, speech intelligibility, speech naturalness }

{SP93-87} M. Nakagawa and T. Yamaguchi,
``A study of fractal properties of vocal sounds,''
IEICE Technical Report, SP93-87, pp.71--76, Nov. 1993.
{ Fractals, Vocal Sounds, Critical Exponent Method }

{SP93-88} N. Kunieda and T. Shimamura and J. Suzuki and H. Yashima,
``Evaluation of SPAD (speech processing system by use of auto-difference function) for noise reduction,''
IEICE Technical Report, SP93-88, pp.77--82, Nov. 1993.
{ SPAD, SPAC, AMDF, difference function, noise reduction, improvement of SNR }

{SP93-89} S. Kim and M. Zhi and U. Choi and H. Hahn,
``Application of TD-PSOLA technique to Korean T-t-S conversion,''
IEICE Technical Report, SP93-89, pp.1--6, Nov. 1993.
{ T-t-S, TD-PSOLA, synthesis units }

{SP93-90} S. Iwama and T. Shimamura and J. Suzuki,
``Transmission of pitch information by delta modulation in analysis synthesis telephony,''
IEICE Technical Report, SP93-90, pp.7--12, Nov. 1993.
{ pitch transmission, pitch extraction, delta modulation, pitch excited vocoder }

{SP93-91} T. Mochida and T. Kobayashi and K. Shirai,
``Speech synthesis of Japanese sentences using large waveform data-base,''
IEICE Technical Report, SP93-91, pp.13--18, Nov. 1993.
{ sentence speech synthesis, sentence data-base, phrase dependency structure }

{SP93-92} S. Makino and Y. Osaka,
``A spoken word recognition system with adaptability to speaking rate of input speech,''
IEICE Technical Report, SP93-92, pp.19--24, Nov. 1993.
{ spoken word recognition, speaking rate, duration of phoneme }

{SP93-93} A. Ito and S. Makino,
``A fast word pre-selection based on speech fragments for continuous speech recognition,''
IEICE Technical Report, SP93-93, pp.25--30, Nov. 1993.
{ word pre-selection, word spotting, confusion matrix }

{SP93-94} E. McDermott and S. Katagiri,
``Prototype-based minimum error training for continuous speech recognition,''
IEICE Technical Report, SP93-94, pp.31--36, Nov. 1993.
{  }

{SP93-95} S. Takahashi and Y. Minami and K. Shikano,
``Improving phoneme HMMs for large-vocabulary spontaneous speech recognition,''
IEICE Technical Report, SP93-95, pp.37--42, Nov. 1993.
{ continuous HMM, context-independent model, context-dependent model, spontaneous speech recognition, large-vocabulary recognition }

{SP93-96} S. Hayakawa and F. Itakura,
``Text-dependent speaker recognition using the higher frequency band,''
IEICE Technical Report, SP93-96, pp.43--48, Nov. 1993.
{ speaker recognition, wideband, time averaged spectrum, linear combination }

{SP93-97} T. Nakatani and T. Kawabata and H. G. Okuno,
``Speech stream segregation by multi-agent system,''
IEICE Technical Report, SP93-97, pp.49--54, Nov. 1993.
{ auditory stream segregation, multi-agent system, emergent computation, harmonics }

{SP93-98} K. Tanaka and S. Itahashi,
``Feature extraction of spoken Japanese dialects utilizing fundamental frequency contour,''
IEICE Technical Report, SP93-98, pp.55--60, Nov. 1993.
{ fundamental frequency, principal component analysis, discriminant analysis }

{SP93-99} J. Zhou and S. Itahashi,
``Feature extraction for spoken language discrimination using the speech fundamental frequency,''
IEICE Technical Report, SP93-99, pp.61--66, Nov. 1993.
{ fundamental frequency, polygonal line, feature extraction }

{SP93-100} K. Arai and M. Kitai and S. Nakajima and H. Nishi,
``Effects of question style on speech dialog,''
IEICE Technical Report, SP93-100, pp.67--72, Nov. 1993.
{ Speech Dialog System, Prompt, the Wizard Method, Word Restriction, Message Completion and Acoustic Change }

{SP93-101} H. Nishi and M. Kitai,
``A new confirmation method using the statistical relationship between likelihood and accuracy,''
IEICE Technical Report, SP93-101, pp.73--78, Nov. 1993.
{ dialog, strategy, confirmation, re-utter, accuracy }

{SP93-102} S. Nakajima and H. Tsukada,
``Utterance pattern characteristics in task-oriented dialogues,''
IEICE Technical Report, SP93-102, pp.79--84, Nov. 1993.
{  }

{SP93-103} M. Zhou and S. Nakagawa,
``Comparison of SCFG estimation by the inside-outside and error corrective algorithm,''
IEICE Technical Report, SP93-103, pp.85--90, Nov. 1993.
{ stochastic context-free grammar, inside-outside algorithm, error corrective training, speech recognition }

{SP93-104} Y. Tsurumi and S. Nakagawa,
``Unsupervised speaker adaptation for continuous parameter HMM by maximum a posteriori probability estimation,''
IEICE Technical Report, SP93-104, pp.1--8, Dec. 1993.
{ MAP estimation, sequential concatenation training, HMM, unsupervised speaker adaptation }

{SP93-105} N. Minematsu and K. Hirose,
``Speech recognition based on the dynamic control of acoustic features,''
IEICE Technical Report, SP93-105, pp.9--16, Dec. 1993.
{ acoustic features, continuous HMM, phoneme recognition, feature sub-space, indirect representation of speech }

{SP93-106} J. Takahashi and M. Tobita and H. Nagashima and N. Sugamura,
``Performance evaluation and analysis of telephone speech recognition using simulated transmission characteristics,''
IEICE Technical Report, SP93-106, pp.17--24, Dec. 1993.
{ telephone, speech recognition, transmission, frequency, simulated, HMM }

{SP93-107} H. Imose and H. Matsumoto,
``Improved frequency-weighted HMM for noisy speech recognition,''
IEICE Technical Report, SP93-107, pp.25--32, Dec. 1993.
{ Speech recognition, Frequency-Weighting, Noise Robustness, Hidden Markov model }

{SP93-108} M. Katoh and M. Kohda,
``A study on Viterbi best-first search for isolated word recognition using duration controlled HMMs,''
IEICE Technical Report, SP93-108, pp.33--40, Dec. 1993.
{ speech recognition, hidden Markov model, duration control, graph search, best-first search, A* search }

{SP93-109} S. Monzen and M. Kohda,
``A study on Viterbi best-first search for phrase recognition using HMM-LR method,''
IEICE Technical Report, SP93-109, pp.41--48, Dec. 1993.
{ speech recognition, hidden Markov model, Viterbi algorithm, generalized LR parsing, graph search, best-first search }

{SP93-110} T. Kosaka and S. Matsunaga and S. Sagayama,
``Tree-structured speaker clustering for speaker adaptation,''
IEICE Technical Report, SP93-110, pp.49--54, Dec. 1993.
{ speech recognition, speaker adaptation, speaker clustering, hidden Markov network }

{SP93-111} N. Yanagiya and H. Takahashi and E. Tomita,
``Continuous speech recognition using recurrent neural networks,''
IEICE Technical Report, SP93-111, pp.55--62, Dec. 1993.
{ Recurrent Neural Network, Continuous Speech Recognition, Phoneme Spotting, FM, AM Neuron }

{SP93-112} M. Pundsack and T. Nitta,
``Comparison of context dependent sub-word HMMs for Japanese,''
IEICE Technical Report, SP93-112, pp.63--70, Dec. 1993.
{ Speech recognition, HMM, subword-model }

{SP93-113} H. Ohwaki and H. Singer and J. Takami and A. Kurematsu,
``Phonetic typewriter using phonotactic constraints,''
IEICE Technical Report, SP93-113, pp.71--78, Dec. 1993.
{ speech recognition, phonetic typewriter, phonotactics, speaking-style adaptation }

{SP93-114} H. Kanazawa and S. Seto and H. Shinchi and Y. Takebayashi,
``Dialogue data collection and evaluation environment for spontaneous speech dialogue system TOSBURG II,''
IEICE Technical Report, SP93-114, pp.1--6, Dec. 1993.
{ Spontaneous speech dialogue, Speech response cancellation, Data collection, Evaluation }

{SP93-115} T. Kawabata,
``Topic focusing mechanism for speech recognition, based on probabilistic grammar and topic Markov Model,''
IEICE Technical Report, SP93-115, pp.7--13, Dec. 1993.
{ Natural language, Topic focusing, Speech understanding, JUNO }

{SP93-116} N. Kitaoka and T. Kawahara and S. Doshita,
``Phrase spotting for spontaneous speech understanding,''
IEICE Technical Report, SP93-116, pp.15--22, Dec. 1993.
{ spontaneous speech understanding, heuristic model, phrase spotting, semantic-driven search }

{SP93-117} T. Hitaka,
``Maximum likelihood estimate of parameters for stochastic grammars,''
IEICE Technical Report, SP93-117, pp.23--28, Dec. 1993.
{ Parameter Estimate, Maximum Likelihood Estimate, Stochastic Grammar }

{SP93-118} T. Watanabe and S. Hayashi,
``Speech quality assessment using objective measure based on loudness model,''
IEICE Technical Report, SP93-118, pp.1--8, Jan. 1994.
{ speech coding, speech quality assessment, loudness, critical band filter, masking }

{SP93-119} T. Asano and H. Kamata and Y. Ishida,
``A method for estimation of pitch frequency based on detection of nonstationarity in voiced speech,''
IEICE Technical Report, SP93-119, pp.9--16, Jan. 1994.
{  }

{SP93-120} H. Oka and H. Kamata and Y. Ishida,
``A proposition of the transfer function for the voiced speech,''
IEICE Technical Report, SP93-120, pp.17--24, Jan. 1994.
{  }

{SP93-121} K. Itoh and S. Nakajima and T. Hirokawa,
``Waveform synthesis unit generation based on clustering technique for speech synthesis,''
IEICE Technical Report, SP93-121, pp.25--30, Jan. 1994.
{ Text-to-speech, Waveform synthesis, Synthesis units, COC clustering }

{SP93-122} H. Konno and K. Hirose,
``Detection of syntactic boundaries using prosodic information,''
IEICE Technical Report, SP93-122, pp.31--38, Jan. 1994.
{ prosodic information, syntactic boundaries, continuous speech recognition, macroscopic and microscopic features, F0, contour model, partial AbS }

{SP93-123} X. Hu and K. Hirose,
``Lexical tone recognition of Chinese using HMM,''
IEICE Technical Report, SP93-123, pp.39--45, Jan. 1994.
{ tone recognition, Chinese monosyllables, fundamental frequency, macroscopic and microscopic features, speaker adaptation, HMM }

{SP93-124} M. Matsumura and S. Okawa and T. Kobayashi and K. Shirai,
``Phoneme recognition with lip-reading,''
IEICE Technical Report, SP93-124, pp.47--54, Jan. 1994.
{ information of human's faces, lip-reading, phoneme recognition, mutual information, hierarchical vector quantization }

{SP93-125} M. Tamoto and T. Kawabata,
``Aspects of clustering based on bigram occurrence,''
IEICE Technical Report, SP93-125, pp.55--62, Jan. 1994.
{ Speech Understanding, Natural Language Processing, Self Organization, JUNO }

{SP93-126} N. Nukaga and M. Araki and T. Kawahara and S. Doshita,
``Incremental speech understanding using hierarchically structured semantic network,''
IEICE Technical Report, SP93-126, pp.63--70, Jan. 1994.
{ Incremental speech understanding, Spontaneous speech, Semantic driven parsing, Hierarchically structured semantic network }

{SP93-127} J. Murakami and S. Matsunaga,
``A spontaneous speech recognition algorithm with pause and filled pause procedure,''
IEICE Technical Report, SP93-127, pp.71--78, Jan. 1994.
{ Word trigram model, one-pass DP, spontaneous speech recognition, filled pauses }

{SP93-128} O. Yoshioka and Y. Minami and K. Shikano,
``Development and evaluation of a multi-modal dialogue system for telephone directory assistance,''
IEICE Technical Report, SP93-128, pp.1--8, Jan. 1994.
{ Multi-modal, Dialogue system, System evaluation }

{SP93-129} M. Ohta and Y. Yamashita and R. Mizoguchi,
``Decision of topic for understanding of spoken dialog,''
IEICE Technical Report, SP93-129, pp.9--16, Jan. 1994.
{ Topic, Linguistic expression, Spoken dialog, Utterance prediction, Topic transition model }

{SP93-130} M. Yamamoto and M. Takagi and S. Nakagawa,
``A menu guided spoken dialog system and its evaluation,''
IEICE Technical Report, SP93-130, pp.17--24, Jan. 1994.
{ Menu-based natural language understanding, Dialog system, Speech recognition, Natural language processing }

{SP93-131} S. Okawa and C. Windheuser and F. Bimbot and K. Shirai,
``Phonetic feature recognition with time delay neural network and the evaluation by mutual information,''
IEICE Technical Report, SP93-131, pp.25--32, Jan. 1994.
{ word recognition, time delay neural network, phonetic features, mutual information, hybrid systems }

{SP93-132} K. Doi and Y. Ariki,
``Phoneme recognition using concatenated HMM training by restricting training section,''
IEICE Technical Report, SP93-132, pp.33--38, Jan. 1994.
{ concatenated HMM training, restricting training section, phoneme recognition, word recognition, evaluation without hand-labels, evaluation using hand-labels }

{SP93-133} T. Matsuoka and C. Lee,
``On-line speaker adaptation using maximum a posteriori estimation,''
IEICE Technical Report, SP93-133, pp.39--46, Jan. 1994.
{ speaker adaptation, hidden Markov model, maximum a posteriori estimation (MAP) }

{SP93-134} Y. Miyazawa and J. Takami and S. Sagayama and S. Matsunaga,
``Unsupervised speaker adaptation method using all-phoneme ergodic Hidden Markov network,''
IEICE Technical Report, SP93-134, pp.47--52, Jan. 1994.
{ speech recognition, unsupervised speaker adaptation, all-phoneme ergodic hidden Markov network, context-dependent phoneme bigram }

{SP93-135} M. Dantsuji,
``On phonetic transcriptions of the Japanese language,''
IEICE Technical Report, SP93-135, pp.53--60, Jan. 1994.
{ IPA, phoneme, broad transcription, narrow transcription, phonetics }

{SP93-136} S. Kinsui,
``A discourse management analysis of spoken dialogue,''
IEICE Technical Report, SP93-136, pp.61--68, Jan. 1994.
{ discourse markers, interjections, sentence-final particles, mutual knowledge }

{SP93-137} M. Abe,
``Speech modification methods for fundamental frequency, duration and speaker individuality,''
IEICE Technical Report, SP93-137, pp.69--75, Jan. 1994.
{ speech analysis-synthesis, F0, duration, voice conversion }

{SP93-138} K. Tanaka and K. Nakata,
``Recursive and adaptive coding of speech,''
IEICE Technical Report, SP93-138, pp.1--8, Feb. 1994.
{ speech coding, recursive and adaptive prediction, Kalman filter }

{SP93-139} H. Ohmuro and K. Mano and T. Moriya,
``A study of variable bit-rate PSI-CELP speech coding,''
IEICE Technical Report, SP93-139, pp.9--16, Feb. 1994.
{ speech coding, variable bit-rate, PSI-CELP, multi-mode, low bit-rate }

{SP93-140} S. Sasaki and A. Kataoka and T. Moriya,
``Studies on 7 kHz wideband speech coding --- 16 kbit/s 10 ms-frame CELP ---,''
IEICE Technical Report, SP93-140, pp.17--22, Feb. 1994.
{ wideband speech, G.722, CELP, conjugate structure, pre-selection, MA prediction }

{SP93-141} Y. Arai and T. Minowa and H. Yoshida and H. Nishimura and H. Kamata and T. Honda,
``A study on the high quality voice editing system,''
IEICE Technical Report, SP93-141, pp.23--30, Feb. 1994.
{ Speech editing, Waveform superposing method, Waveform extraction window, Waveform concatenation, Prosody modification, Quality evaluation }

{SP93-142} H. Mizuno and M. Abe,
``Voice conversion based on piecewise linear conversion rules of formant frequency and spectrum tilt,''
IEICE Technical Report, SP93-142, pp.31--36, Feb. 1994.
{ Voice Conversion, Formant Frequency, Spectrum Tilt, Listening Test }

{SP93-143} T. Yoshimura and S. Hayamizu and K. Tanaka,
``Evaluation and study of vocabulary-independent mora Hidden Markov Models on word accent pattern identification,''
IEICE Technical Report, SP93-143, pp.37--44, Feb. 1994.
{ prosodic information, accent patterns, mora HMMs }

{SP93-144} S. Imaizumi and S. Hamaguchi and T. Deguchi,
``Vowel devoicing characteristics of Japanese dialogue directed to hearing-impaired/normal-hearing children by teachers,''
IEICE Technical Report, SP93-144, pp.1--8, March 1994.
{ Hearing-impaired, Speech perception, Devoicing, Reorganization, Articulation }

{SP93-145} Y. Qun and S. Niimi,
``Tonal coarticulation in quadrisyllabic words of standard Chinese,''
IEICE Technical Report, SP93-145, pp.9--16, March 1994.
{ Tonal language, Coarticulation, Electromyographic, Laryngeal adjustment }

{SP93-146} T. Kitamura and M. Akagi,
``Speaker individualities in speech spectral envelopes,''
IEICE Technical Report, SP93-146, pp.17--24, March 1994.
{ spectral envelope, speaker individuality, vowel characteristic, frequency band }

{SP93-147} Y. Fukuda,
``An electronic dictionary of Japanese sign language:  design of system and organization of database,''
IEICE Technical Report, SP93-147, pp.25--30, March 1994.
{ sign language, Japanese Sign Language, electronic dictionary, database, laser-disc }

{SP93-148} K. Mori and S. Imaizumi and K. Miyagishima and K. Yoneda and S. Kiritani,
``Auditory evoked magnetic fields during pitch and phoneme judgement of synthetic CV moras,''
IEICE Technical Report, SP93-148, pp.31--38, March 1994.
{ Broca area, M100, M200, Synthetic speech, Auditory evoked magnetic encephalography }

{SP93-149} T. Kaburagi and M. Honda,
``A trajectory formation model of articulatory movements based on the motor tasks of vocal tract shapes,''
IEICE Technical Report, SP93-149, pp.39--46, March 1994.
{ articulatory movement, vocal tract shape, motor task, coordination, cost function }

{SP93-150} J. Takita and T. Iijima and M. Akagi,
``Quasi-auditory filter model by natural observation method --- normal natural observation filter ---,''
IEICE Technical Report, SP93-150, pp.47--54, March 1994.
{ Natural Observation Method, Normal Natural Observation Filter, auditory filter }

{SP93-151} R. Nakagawa and M. Akagi,
``A study on modeling of contextual effects for vowel perception,''
IEICE Technical Report, SP93-151, pp.55--62, March 1994.
{ vowel perception, contextual effects, Target, Anchor, Overshoot, Undershoot }

{SP93-152} A. Kuwahara and M. Akagi,
``Speech coding using spectrum interpolation in time domain,''
IEICE Technical Report, SP93-152, pp.63--70, March 1994.
{ cepstrum analysis and synthesis of speech, log magnitude approximation filter, spectral interpolation, inverse function of integrated spectrum }

{SP93-153} I. Mikuni and M. Tateno and K. Makino and I. Yamada,
``Hearing aid, which is designed to compensate the internal spectrum of the sensory neural hearing impaired subject, and its evaluation,''
IEICE Technical Report, SP93-153, pp.71--78, March 1994.
{ hearing aid, sensory neural hearing impairment, internal spectrum, hearing test, spectrum peak enhancement, adaptive filter }