SP Subject Index 2000

{SP2000-1} K. Imoto, M. Dantsuji, T. Kawahara,
``Modeling of perception of English sentence stress for CALL system,''
IEICE, SP2000-1, pp.1--8, May 2000.
{ sentence stress, CALL, prosodic pattern, fundamental frequency, power, vowel duration }

{SP2000-2} N. Minematsu and S. Nakagawa,
``Automatic Estimation of Pronunciation Habits Using a Single Word Utterance Based upon a Stressed Syllable Detection Technique,''
IEICE, SP2000-2, pp.9--16, May 2000.
{ Word stress, Pronunciation habit, Stressed syllable detection, HMM, English CAI, Triangular representation }

{SP2000-3} C. Ishi, K. Fujimoto and K. Hirose,
``Identification of Japanese "tokushuhaku" regarding the influence of speaking rate,''
IEICE, SP2000-3, pp.17--24, May 2000.
{ Speaking rate, Japanese "tokushuhaku", Segmental duration, CALL systems }

{SP2000-4} T. Minowa, R. Mochizuki, H. Nishimura and T. Kamai,
``Prosody Control Based on A Vector of Pitch Interval and Power for Waveform Concatenation Synthesis,''
IEICE, SP2000-4, pp.25--31, May 2000.
{ Waveform concatenation synthesis, Prosody, Naturalness, Pitch control, Power control }

{SP2000-5} T. Shimizu, H. Yoshimura, M. Kimoto, T. Namiki, N. Isu and H. Sugata,
``Relation between the phonemic environmental resemblance score and the LSP distance on LSP VCV Method,''
IEICE, SP2000-5, pp.33--40, May 2000.
{ Speech synthesis by rule, Vector quantization, LSP analysis, VCV unit }

{SP2000-6} T. Koyama, J. Takahashi and T. Nakamura,
``Waveform dictionary for waveform synthesis based on VECV synthesis unit,''
IEICE, SP2000-6, pp.41--48, May 2000.
{ Speech synthesis, Waveform synthesis, Synthesis unit, PSOLA, Quality evaluation, Word intelligibility }

{SP2000-7} T. Toda, J. Lu, S. Nakamura and K. Shikano,
``Voice conversion algorithm based on Gaussian mixture model applied to STRAIGHT,''
IEICE, SP2000-7, pp.49--54, May 2000.
{ voice conversion, Gaussian mixture model, codebook mapping, STRAIGHT }

{SP2000-8} T. Marumoto and N. Campbell,
``Labelling Voice-quality in a Speech Database for Synthesis,''
IEICE, SP2000-8, pp.55--61, May 2000.
{ speech synthesis, voice quality, auto-labelling, speaking style, HMM modeling }

{SP2000-9} Y. Shiraki,
``Differential operator and multiple structure of speech spectra,''
IEICE, SP2000-9, pp.63--68, May 2000.
{ rational function, pole, complex analysis in several variables, the problem of P. Cousin, global analysis, going over to larger spaces, multiple structure of speech spectra, pseudo-convex, L2 space }

{SP2000-10} T. Kato, S. Kuroiwa, T. Shimizu and N. Higuchi,
``Speaker Clustering using Telephone Speech Database of a Large Number of Speakers,''
IEICE, SP2000-10, pp.1--8, June 2000.
{ speech recognition, telephone speech, acoustic modeling, speaker adaptation, clustering }

{SP2000-11} S. Sato, H. Segi, K. Onoe, T. Imai, H. Tanaka and A. Ando,
``Selective training of speaker-clustered HMMs,''
IEICE, SP2000-11, pp.9--15, June 2000.
{ Broadcast news, subtitling, speech recognition, acoustic model, HMM, clustering, GMM }

{SP2000-12} M. Nishida and Y. Ariki,
``Speaker Recognition by Projection to Speaker Complementary Space,''
IEICE, SP2000-12, pp.17--22, June 2000.
{ speaker verification, subspace method, speaker eigenspace, speaker complementary space, discriminant analysis }

{SP2000-13} M. Suzuki and S. Makino,
``Speaker Independent Phoneme Recognition Based on A Speaker Cluster Estimated From Input Speech,''
IEICE, SP2000-13, pp.23--30, June 2000.
{ Speaker-independent speech recognition, Distance between speakers, SSS-free, HMnet }

{SP2000-14} N. Murai and T. Kobayashi,
``Dictation of Multiparty Conversation Using MLLR Speaker Adaptation and Statistical Turn Taking Model,''
IEICE, SP2000-14, pp.31--38, June 2000.
{ multiparty conversation, stochastic turn taking model, speaker individuality, GMM, MLLR }

{SP2000-15} Y. Kato, T. Akae, M. Nakai, H. Shimodaira and S. Sagayama,
``Enhancing the Jacobian Adaptation of HMM for Noisy Speech Recognition,''
IEICE, SP2000-15, pp.39--46, June 2000.
{ robust speech recognition, noisy speech, adaptation, HMM, channel distortion }

{SP2000-16} T. Narita and M. Sugiyama,
``Algorithms for Fast Retrieval of Music,''
IEICE, SP2000-16, pp.1--8, June 2000.
{ Music retrieval, Vector quantization, Music database, Active search }

{SP2000-17} T. Uchida, M. Yamashita and M. Sugiyama,
``Voice/Music Segmentation using Cepstrum Flux,''
IEICE, SP2000-17, pp.9--16, June 2000.
{ Segmentation, Acoustic segment, Voice and Music, Searching }

{SP2000-18} K. Mori, K. Yamamoto and S. Nakagawa,
``Online Speaker Change Detection and Speaker Clustering Using VQ Distortion Measure,''
IEICE, SP2000-18, pp.17--24, June 2000.
{ Speaker Change Detection, Speaker Clustering, News Speech, Speech Recognition }

{SP2000-19} M. Katoh, J. Kanou, A. Ito and M. Kohda,
``A Study on MLLR-Based Speaker Models for Speaker Verification,''
IEICE, SP2000-19, pp.25--32, June 2000.
{ speaker verification, text prompted, HMM, maximum likelihood linear regression, MDL criterion }

{SP2000-20} D. Ishioka, M. Suzuki and S. Makino,
``A Study on the Effectiveness of Line Spectrum Pairs for Phoneme Recognition,''
IEICE, SP2000-20, pp.33--38, June 2000.
{ LSP, Feature vector, Phoneme recognition }

{SP2000-21} F. Sugaya, T. Takezawa, A. Yokoo and S. Yamamoto,
``Evaluation on ATR-MATRIX speech translation system,''
IEICE, SP2000-21, pp.39--45, June 2000.
{ speech translation system, ATR-MATRIX, evaluation, dialog tests, method of paired comparisons, TOEIC }

{SP2000-22} Y. Tone, A. Ogihara and H. Shibata,
``HMM Based Emotion Discrimination for Speech Dialogue System,''
IEICE, SP2000-22, pp.47--53, June 2000.
{ emotion discrimination, speech dialogue system, HMM, human interface }

{SP2000-23} S. Lee and K. Hirose,
``Efficient control of LVCSR search space using prosodic information with considerations on the interaction of language model,''
IEICE, SP2000-23, pp.55--60, June 2000.
{ Large Vocabulary Continuous Speech Recognition(LVCSR), Efficient search, prosodic information }

{SP2000-24} T. Otsuki and T. Ohtomo,
``Language Model Evaluation Method Based on Hamming Distance between Sentences,''
IEICE, SP2000-24, pp.61--66, June 2000.
{ speech recognition, language model, evaluation metrics, Hamming distance, perplexity }

{SP2000-25} A. Ito, H. Saito, M. Katoh and M. Kohda,
``Language Modeling by an Ergodic HMM based on an N-gram,''
IEICE, SP2000-25, pp.67--74, June 2000.
{ N-gram, Ergodic HMM, Language Model }

{SP2000-26} T. Saiin, M. Katoh, A. Ito and M. Kohda,
``Optimization of language model weight and insertion penalty for word graph generation,''
IEICE, SP2000-26, pp.75--82, June 2000.
{ LVCSR, word graph, rescoring, language model weight, insertion penalty }

{SP2000-27} A. Hosoi, K. Matumoto and H. Morikawa,
``A Tree Searched Vector Quantization in the Speech Analysis Synthesis System Based on the Pole-Zero Model,''
IEICE, SP2000-27, pp.1--8, July 2000.
{ tree searched vector quantization, B tree, speech analysis synthesis system }

{SP2000-28} P. Boonpramuk and T. Funada,
``A new method for generating driving source for speech conversion by using sequential processing,''
IEICE, SP2000-28, pp.9--16, July 2000.
{ Sequential Processing, AR Model, ARUI Model, Mutual Information Quantity }

{SP2000-29} H. Morikawa, N. Tubokawa and Y. Yanagi,
``Modeling and Analysis of Pitch Pattern of Speech Based on a Smoothing Spline Function,''
IEICE, SP2000-29, pp.17--24, July 2000.
{ pitch pattern, smoothing spline function, accent, prosodic control }

{SP2000-30} H. Miyabayashi and T. Funada,
``A Study on Performance of Pitch Extraction by BPFP-NN Method under Pitch Variation and Noise,''
IEICE, SP2000-30, pp.25--32, July 2000.
{ pitch extraction, U/V detection, bank of bandpass filter pairs, NN }

{SP2000-31} N. Nishizawa, N. Minematsu and K. Hirose,
``Development of a formant-based analysis-synthesis system and liquid sound synthesis,''
IEICE, SP2000-31, pp.33--40, July 2000.
{ terminal analogue synthesis, source model, ARX model, pitch synchronous processing, liquid sound }

{SP2000-32} M. Mori, S. Taniguchi, H. Sakamoto and T. Koizumi,
``Noise Robustness of a Speaker-Adapted Recognition System,''
IEICE, SP2000-32, pp.1--6, July 2000.
{ isolated word recognition, speaker adaptation, noise robustness, subword, LVQ, HMM }

{SP2000-33} Y. Morita and T. Funada,
``Noise Suppression of Speech Signal Using Neural Network Vector Quantization (NNVQ),''
IEICE, SP2000-33, pp.7--14, July 2000.
{ speech coding, neural network, vector quantization, LSP, noise suppression }

{SP2000-34} N. Kanedera, T. Arai, M. Takahashi and T. Funada,
``Investigations on information of speech recognition and speaker identification in modulation spectrum,''
IEICE, SP2000-34, pp.15--22, July 2000.
{ modulation spectrum, modulation frequency, speech recognition, speaker identification }

{SP2000-35} M. Kitahara and S. Amano,
``Functional load of pitch accent for identifying words in Japanese,''
IEICE, SP2000-35, pp.23--30, July 2000.
{ homophones, pitch accent, distinction, word familiarity, functional load }

{SP2000-36} M. Miura, Y. Arako and M. Yanagida,
``The Precedence Effect by Harmonic Structured Sounds,''
IEICE, SP2000-36, pp.31--37, July 2000.
{ Precedence Effect, Missing Fundamental, Virtual Pitch }

{SP2000-37} K. Kondo and K. Nakagawa,
``Towards a Novel Japanese Intelligibility Test,''
IEICE, SP2000-37, pp.39--46, July 2000.
{ Subjective speech evaluation, intelligibility, phonetic features, minimal-pair rhyme test }

{SP2000-38} H. Nito, A. Hayashi and Y. Minami,
``The Development of Melody Perception in Infancy ---By the Use of the Head-turn Preference Procedure---,''
IEICE, SP2000-38, pp.47--52, July 2000.
{ Infant, melody preference, children's song, original melody, transformed melody }

{SP2000-39} K. Sakuraba, S. Imaizumi and K. Kakehi,
``Developmental Study of linguistic and para-linguistic characteristics in vocal expression of basic emotions ---Vocal Affect Expression in /pikachu/---,''
IEICE, SP2000-39, pp.53--59, July 2000.
{ Preschoolers, School Age Children, Vocal Expression, Emotion, Acoustic Analysis }

{SP2000-40} H. Morikawa and T. Senda,
``Analysis of the Development of Consonant Articulation for Preschool and School Age Children ---Acoustical and Perceptual Study of Articulatory Distortion---,''
IEICE, SP2000-40, pp.61--68, July 2000.
{ consonant articulation, misarticulation, duration, perceptual test }

{SP2000-41} H. Morikawa, S. Imaizumi, A. Hayashi, M. Yanagida, K. Matsuki and H. Fujisaki,
``Methodological Aspects in the Study of the Development of Spoken Language Acquisition for Preschool and School Age Children with Normal and Disordered Speech,''
IEICE, SP2000-41, pp.69--77, July 2000.
{ spoken language acquisition, misarticulation, speech perception, disorder }

{SP2000-42} A. Takagi, H. Takahashi, N. Uemi and T. Ifukube,
``Efficacy of Auditory Feedback used for Rehabilitation of Aphasic Patients with verbal apraxia,''
IEICE, SP2000-42, pp.1--7, Aug. 2000.
{ aphasia, verbal apraxia, auditory feedback, mora, foot }

{SP2000-43} T. Suzuki, M. Fujimori and N. Tamura,
``On The Depression State and The Acoustic Parameters of Sentence Reading,''
IEICE, SP2000-43, pp.9--16, Aug. 2000.
{ speech analysis, depression, psychoacoustics, nonparametric test, BDI }

{SP2000-44} N. Uemi, Y. Sugai, M. Hashiba and T. Ifukube,
``Some problems and an improvement method of an electrolarynx with a pitch control function,''
IEICE, SP2000-44, pp.17--22, Aug. 2000.
{ Substitute speech method, Electrolarynx, Expiration pressure, Pitch frequency, Speech }

{SP2000-45} H. Honda, T. Kamae, T. Watanabe, T. Koide, S. Uno and T. Kurihara,
``Utilization of Windows applications by Voiced Scripting Host System,''
IEICE, SP2000-45, pp.23--28, Aug. 2000.
{ Windows, Visually Disabled, Text to Speech, WSH, VBScript, Editor }

{SP2000-46} T. Watanabe, K. Inoue, M. Sakamoto, H. Honda and T. Kamae,
``Bilingual Emacspeak for Windows; A speaking Emacs with bilingual rich expressive power,''
IEICE, SP2000-46, pp.29--36, Aug. 2000.
{ visually impaired, Windows, text-to-speech, Emacs, Japanese and English, audio formatting }

{SP2000-47} T. Watanabe, M. Sakajiri, C. Sashida and S. Okada,
``A Survey of Windows Accessibility by Visually-Impaired PC Users,''
IEICE, SP2000-47, pp.37--42, Aug. 2000.
{ Visually-Impaired People, Windows, PC, User Survey, Evaluation of Text-to-Speech Synthesizer }

{SP2000-48} F. Amano,
``Technology Trend of Speech Signal Processing and Expectation for the Future,''
IEICE, SP2000-48, pp.1--7, Sep. 2000.
{ Speech CODEC, Noise Canceler, Speech Recognition/Synthesizer, Sub-band Filter, DSP }

{SP2000-49} N. Ogura and I. Yamada,
``A Relaxation of the Hybrid Steepest Descent Method for Wider Class of Inverse Problems,''
IEICE, SP2000-49, pp.9--12, Sep. 2000.
{ Hybrid steepest descent method, asymptotically shrinking nonexpansive mapping, fixed point set, convexly constrained inverse problem, convex projection method }

{SP2000-50} P. Zeng and T. Hirata,
``Fast Defect Detection in Cloths with B-splines,''
IEICE, SP2000-50, pp.13--20, Sep. 2000.
{ B-spline transform, wavelet transform, edge detection, defect detection }

{SP2000-51} S. Kamimura, X. Zhang, T. Yoshikawa and Y. Takei,
``Wavelet-based Image Coding Using Complex Allpass Filters,''
IEICE, SP2000-51, pp.21--28, Sep. 2000.
{ Image Coding, Wavelet, Complex Allpass Filter, IIR Filter }

{SP2000-52} M. Shino, Y. Choi and K. Aizawa,
``Wavelet Domain Digital Watermarking Based on Threshold-variable Decision,''
IEICE, SP2000-52, pp.29--34, Sep. 2000.
{ Watermark, Wavelet coefficients, probability, BER }

{SP2000-53} Y. Choi and K. Aizawa,
``Watermark Estimation Based on Error Probabilities,''
IEICE, SP2000-53, pp.35--42, Sep. 2000.
{ Watermark, probability, BER, Correlation }

{SP2000-54} K. Nagato, T. Yoshikawa, X. Zhang and Y. Takei,
``A Design Method for Linear-Phase FIR Digital Filters Using Extended Waveform Moments,''
IEICE, SP2000-54, pp.43--50, Sep. 2000.
{ waveform moment, extended waveform moment, linear-phase FIR filter, differential coefficients }

{SP2000-55} Y. Sugita and N. Aikawa,
``Design method of stable IIR filters with an arbitrary magnitude response and phase response,''
IEICE, SP2000-55, pp.51--57, Sep. 2000.
{ Successive Projections Method, IIR Filters, Minimax approximation, Rouché's Theorem }

{SP2000-56} N. Kudoh and Y. Tadokoro,
``Performance Analysis of a Fourier Coefficient Estimation Method Using IIR-BPFs and an LMS Algorithm and Improvement of its performance,''
IEICE, SP2000-56, pp.1--8, Sep. 2000.
{ IIR Notch Filter, LMS Algorithm, NFT, Non-harmonic Signal, Down sampling }

{SP2000-57} T. Fujii and T. Shimamura,
``IIR type equalizer using autoregressive type adaptive prefilter,''
IEICE, SP2000-57, pp.9--16, Sep. 2000.
{ IIR type adaptive equalizer, FIR type adaptive equalizer, Wiener filter, prefilter, stability }

{SP2000-58} K. Noguchi, Y. Tadokoro and N. Kudoh,
``Study on Adaptive Frequency Estimation for Unknown Frequencies Using Delay Time Control with Notch Filters,''
IEICE, SP2000-58, pp.17--24, Sep. 2000.
{ delay time control, notch filter, unknown frequencies, zero-crossing, adaptive frequency estimation }

{SP2000-59} T. Miwa, Y. Tadokoro and T. Saito,
``The Problems of Transcription using Comb Filters for Musical Instrument Sounds and Their Solutions,''
IEICE, SP2000-59, pp.25--32, Sep. 2000.
{ transcription, pitch estimation, musical instrument discrimination, comb filter }

{SP2000-60} M. Yamaguchi, T. Miwa and Y. Tadokoro,
``A Study of Transcription of Musical Voice Sound using Parallel Connected Comb Filters,''
IEICE, SP2000-60, pp.33--38, Sep. 2000.
{ musical voice sound, comb filter, transcription }

{SP2000-61} D. Yamada, N. Minematsu and S. Nakagawa,
``Comparison of Speech Recognition Performance between STRAIGHT-Cepstrum and FFT-Cepstrum,''
IEICE, SP2000-61, pp.39--46, Sep. 2000.
{ STRAIGHT, cepstrum, fundamental frequency, speech recognition }

{SP2000-62} M. Moroto and H. Matsumoto,
``Evaluation of Mel-LPC Analysis by a Large Vocabulary Continuous Speech Recognition,''
IEICE, SP2000-62, pp.47--54, Sep. 2000.
{ Mel-LPC, mel-cepstrum, dictation system, monosyllable HMM }

{SP2000-63} K. Yoshida, K. Takagi and K. Ozeki,
``Speaker Recognition Using Multi-SNR Subband GMM under Noisy Environments,''
IEICE, SP2000-63, pp.55--61, Sep. 2000.
{ multi-SNR, subband GMM, speaker identification, text-independence, likelihood recombination weight }

{SP2000-64} K. Kashino, T. Kurozumi and H. Murase,
``A Quick Search Method Coping With Multiple Kinds of Feature Distortions,''
IEICE, SP2000-64, pp.63--69, Sep. 2000.
{ time-series search, time-series active search, feature distortion, OR search }

{SP2000-65} K. Onishi, M. Kobayakawa, M. Hoshi and T. Ohmori,
``A Feature independent of bit rate for Twin VQ Audio Retrieval,''
IEICE, SP2000-65, pp.71--77, Sep. 2000.
{ audio database, contents based retrieval, Twin VQ audio coding, audio features independent of bit rate }

{SP2000-66} S. Ogata and T. Shimamura,
``Wiener Filter Using Improved LPC,''
IEICE, SP2000-66, pp.79--84, Sep. 2000.
{ LPC, Wiener filter, spectrum estimation, noise reduction }

{SP2000-67} T. Takahashi, K. Tokuda, T. Kobayashi and T. Kitamura,
``Vector Quantization of Mel-cepstral Coefficients Using Distortion Measure for Spectral Analysis,''
IEICE, SP2000-67, pp.85--90, Sep. 2000.
{ multi-stage vector quantization, vector quantization, statistical measure, mel-cepstral analysis }

{SP2000-68} T. Ishida and  Y. Yamashita,
``F0 Contour Generation Using a Stochastic Model of Segmented Patterns,''
IEICE, SP2000-68, pp.1--8, Oct. 2000.
{ segmented F0 pattern, stochastic model, clustering, speech synthesis, F0 generation network  }

{SP2000-69} M. Kawamata, M. Yamamoto, S. Itahashi, H. Ohmura and K. Tanaka,
``Classification of Nonlinear Parameter Patterns due to Vocal Folds Vibration,''
IEICE, SP2000-69, pp.9--14, Oct. 2000.
{ formant type speech synthesis, formant energy damping, vibration of vocal folds, nonlinear term }

{SP2000-70} S. Hiroya, S. Mashimo, T. Masuko and T. Kobayashi,
``A Very Low Bit-Rate Speech Coder Using Mixed Excitation,''
IEICE, SP2000-70, pp.15--20, Oct. 2000.
{ mel-generalized cepstrum, mixed excitation, vector quantization using statistics of static and dynamic features, instantaneous frequency, speech coding }

{SP2000-71} T. Masuko, K. Tokuda and T. Kobayashi,
``Imposture Using Synthetic Speech against Text-prompted Speaker Verification Based on Spectrum and Pitch,''
IEICE, SP2000-71, pp.21--26, Oct. 2000.
{ text-prompted speaker verification, pitch, imposture, speech synthesis }

{SP2000-72} Y. Ishikawa,
``Prosodic Control for Japanese Text-to-Speech Synthesis,''
IEICE, SP2000-72, pp.27--34, Oct. 2000.
{ Speech Synthesis, Text-to-Speech Systems, Prosody, Prosodic Control, Intonation, rhythm }

{SP2000-73} M. Abe,
``An introduction to speech synthesis units,''
IEICE, SP2000-73, pp.35--42, Oct. 2000.
{ Speech synthesis, Text-to-Speech, Speech synthesis units }

{SP2000-74} K. Tokuda,
``FUNDAMENTALS OF SPEECH SYNTHESIS BASED ON HMM,''
IEICE, SP2000-74, pp.43--50, Oct. 2000.
{ speech synthesis, Text-to-Speech Translation, hidden Markov model, HMM, corpus }

{SP2000-75} K. Takeda,
``[Survey] Robust Speech Recognition through Multiple Observations and their Integration,''
IEICE, SP2000-75, pp.1--6, Dec. 2000.
{ Speech recognition, Multi-Stream }

{SP2000-76} T. Fukuda, M. Takigawa and T. Nitta,
``Peripheral Features for Speech Recognition,''
IEICE, SP2000-76, pp.7--12, Dec. 2000.
{ Speech Recognition, Feature Extraction, Cepstrum Analysis, Orthogonal Basis, Mapping Operator, Peripheral Feature }

{SP2000-77} J. Chen, K. K. Paliwal, T. Matsui, K. Yao, K. P. Markov and S. Nakamura,
``LONG-TERM EFFECT REMOVAL FOR NOISY SPEECH RECOGNITION,''
IEICE, SP2000-77, pp.13--18, Dec. 2000.
{ speech recognition, noise subtraction, long-term power spectrum, noise estimation }

{SP2000-78} M. Fujimoto and Y. Ariki,
``Speech Recognition under Non-stationary Noisy Environments Using Signal Estimation Method Based on Speech State Transition Model,''
IEICE, SP2000-78, pp.19--24, Dec. 2000.
{ noisy speech, recognition, non-stationary noise, speech state transition model, Kalman filter }

{SP2000-79} K. Yao, T. Matsui and S. Nakamura,
``Extended Kalman Particle filters applied to model-based noise compensation for noisy speech recognition,''
IEICE, SP2000-79, pp.25--30, Dec. 2000.
{ Speech recognition, Noise compensation, State space model, Kalman filter, Monte-Carlo method, Particle filter }

{SP2000-80} T. Nishiura, S. Nakamura and K. Shikano,
``Evaluation of Sound Source Discrimination Based on HMMs Using a Microphone Array,''
IEICE, SP2000-80, pp.31--36, Dec. 2000.
{ Microphone array, Sound source discrimination, HMM, Talker localization, Speech recognition, RWCP-DB }

{SP2000-81} H. Tanaka,
``Understanding Natural Language and Controlling Robot Actions ---From Speech Recognition to Speech Understanding---,''
IEICE, SP2000-81, pp.37--42, Dec. 2000.
{ natural language understanding, action control, software robot, dialogue, speech understanding }

{SP2000-82} N. Kadotani, H. Aso, M. Suzuki and S. Makino,
``An investigation on discrimination among emotion expressions contained in speech,''
IEICE, SP2000-82, pp.43--48, Dec. 2000.
{ fundamental frequency, emotion discrimination, discriminant analysis }

{SP2000-83} K. Fujinaga, M. Nakai, H. Shimodaira and S. Sagayama,
``Multiple-Regression HMM for Speech Variability,''
IEICE, SP2000-83, pp.49--54, Dec. 2000.
{ speech recognition, hidden Markov model, multiple-regression model, F0, adaptation }

{SP2000-84} N. Watanabe, T. Yamada, F. Asano and N. Kitawaki,
``Voice Activity Detection Using Non-speech Models and HMM Composition,''
IEICE, SP2000-84, pp.55--60, Dec. 2000.
{ Voice activity detection, Viterbi alignment, non-speech models, HMM composition }

{SP2000-85} Y. Akita and T. Kawahara,
``Automatic Archiving System for Meeting Speech,''
IEICE, SP2000-85, pp.61--66, Dec. 2000.
{ speech recognition, meeting speech, speaker identification, key-phrase detection, archive, minutes }

{SP2000-86} K. Kumatani, S. Nakamura and K. Shikano,
``An Adaptive Integration Method Based on Product HMM for Bi-modal Speech Recognition,''
IEICE, SP2000-86, pp.67--72, Dec. 2000.
{ Bi-modal speech recognition, HMM, Stream weight, MCE, GPD algorithm }

{SP2000-87} K. Murai, K. Noma, K. Kumatani, T. Matsui and S. Nakamura,
``A Robust End Point Detection by Speaker's Facial Image,''
IEICE, SP2000-87, pp.73--78, Dec. 2000.
{ Speech Recognition, Speaking Section, Facial Image, Skin Color, End Point Detection }

{SP2000-88} T. Imai,
``[Survey] Search in Speech Recognition,''
IEICE, SP2000-88, pp.79--82, Dec. 2000.
{ speech recognition, search }

{SP2000-89} S. Yoshizawa, A. Baba, K. Matsunami, Y. Mera, M. Yamada and K. Shikano,
``Unsupervised training based on the sufficient HMM statistics from selected speakers,''
IEICE, SP2000-89, pp.83--88, Dec. 2000.
{ acoustic model, speaker adaptation, sufficient statistics, unsupervised }

{SP2000-90} M. Nishida and Y. Ariki,
``Phoneme Recognition by Speaker Normalization based on Projection to Phonetic Space,''
IEICE, SP2000-90, pp.89--94, Dec. 2000.
{ speaker independent speech recognition, speaker normalization, speaker recognition, subspace method, phonetic space, speaker space }

{SP2000-91} A. Lee, T. Kawahara and K. Shikano,
``State Selection using Context-Independent HMM for Fast Likelihood Calculation,''
IEICE, SP2000-91, pp.95--100, Dec. 2000.
{ LVCSR, Gaussian selection, state selection, PTM }

{SP2000-92} O. Segawa, K. Takeda and F. Itakura,
``Continuous Speech Recognition without End-point Detection,''
IEICE, SP2000-92, pp.101--106, Dec. 2000.
{ Continuous speech recognition, End-point detection, Search algorithm, Speech monitoring }

{SP2000-93} M. Katoh, T. Saiin, A. Ito and M. Kohda,
``Optimization of the Parameter Set for Word Graph Generation,''
IEICE, SP2000-93, pp.107--112, Dec. 2000.
{ LVCSR, Hidden Markov Network, word graph, rescoring, optimize parameters }

{SP2000-94} J. Ogata and Y. Ariki,
``A Comparison of Confidence Measures for Improved Speech Recognition,''
IEICE, SP2000-94, pp.113--118, Dec. 2000.
{ LVCSR, confidence measure, word graph, word posterior probability }

{SP2000-95} T. Kawahara,
``Toward Spontaneous and Conversational Speech Recognition,''
IEICE, SP2000-95, pp.1--6, Dec. 2000.
{ speech recognition, spontaneous speech, conversational speech, acoustic model, language model }

{SP2000-96} T. Shinozaki, Y. Saito, C. Hori and S. Furui,
``Toward Spontaneous Speech Recognition,''
IEICE, SP2000-96, pp.7--12, Dec. 2000.
{ spontaneous speech recognition, national project, lectures, interviews, discussion, OOV words, acoustic backing-off }

{SP2000-97} K. Kato, H. Nanjo and T. Kawahara,
``Acoustic and Language Models for Lecture Speech Recognition,''
IEICE, SP2000-97, pp.13--18, Dec. 2000.
{ automatic speech recognition, lecture speech, spontaneous speech, statistical acoustic model, statistical language model }

{SP2000-98} K. Okuda, T. Matsui and S. Nakamura,
``Robust speech recognition for stressed Japanese speech,''
IEICE, SP2000-98, pp.19--24, Dec. 2000.
{ speech recognition, speaking style, stressed speech, error recovery, multi-model }

{SP2000-99} S. Homma, A. Kobayashi, S. Sato, T. Imai and A. Ando,
``An Examination of Speech Recognition for News Commentary,''
IEICE, SP2000-99, pp.25--30, Dec. 2000.
{ broadcast news, news commentary, speech recognition, language model, acoustic model }

{SP2000-100} M. Watanabe, F. Masui, A. Kawai and T. Shiino,
``Conversational ellipsis and its complement,''
IEICE, SP2000-100, pp.31--36, Dec. 2000.
{ conversation, ellipsis, complement, utterance pair }

{SP2000-101} Y. Nishi and Y. Shirai,
``The semantics of the aspectual marker -teiru ---The analysis of a conversational corpus---,''
IEICE, SP2000-101, pp.37--42, Dec. 2000.
{ semantics of -teiru, situation aspect, viewpoint aspect, temporal constraints, conversational corpus }

{SP2000-102} A. Ando,
``A Simultaneous Subtitling System for Broadcast News Programs,''
IEICE, SP2000-102, pp.43--48, Dec. 2000.
{ Broadcast news, Live captioning system, Speech recognition, Recognition error correction }

{SP2000-103} S. Yamamoto,
``Present state and future works of spoken language translation technologies,''
IEICE, SP2000-103, pp.49--54, Dec. 2000.
{ Spoken language translation, Speech recognition, Machine translation }

{SP2000-104} H. Koiso, N. Tsuchiya, Y. Mabuchi, M. Saito, T. Kagomiya, H. Kikuchi and K. Maekawa,
``Transcription Criteria for the Corpus of Spontaneous Japanese,''
IEICE, SP2000-104, pp.55--60, Dec. 2000.
{ spoken corpus, spontaneous speech, monologue, transcription criteria }

{SP2000-105} N. Kawaguchi, S. Matsubara, Y. Wakamatsu, M. Kajita, K. Takeda, F. Itakura and Y. Inagaki,
``Design and Characterization of In-Car Speech Corpus,''
IEICE, SP2000-105, pp.61--66, Dec. 2000.
{ speech corpus, spoken dialogue, in-car speech, moving car environment, driver's speech }

{SP2000-106} A. Ito and M. Kohda,
``Statistical Language Model Toolkit for Word and Class N-gram,''
IEICE, SP2000-106, pp.67--72, Dec. 2000.
{ word n-gram, class n-gram, statistical language model toolkit, perplexity }

{SP2000-107} M. Araki, T. Ono, K. Ueda, T. Nishimoto and Y. Niimi,
``Development of XML-VoiceXML Converter,''
IEICE, SP2000-107, pp.73--78, Dec. 2000.
{ XML, VoiceXML, spoken dialogue system, dialogue library }

{SP2000-108} H. Murao, N. Kawaguchi, S. Matsubara and Y. Inagaki,
``Example-based Spoken Dialogue System,''
IEICE, SP2000-108, pp.79--84, Dec. 2000.
{ Spoken dialogue, Speech recognition, Language understanding, Car environment }

{SP2000-109} Sung-Ill Kim, K. Nakajima, K. Nakamura, M. Nambu, Y. Higashi, T. Fujimoto and T. Tamura,
``Interactive System for Encouraging Utterance of the Elderly,''
IEICE, SP2000-109, pp.85--90, Dec. 2000.
{ Interactive System, Speech Recognition, Elderly, Welfare, Day Care }

{SP2000-110} N. Inoue,
``Language Model and Spoken Dialogue,''
IEICE, SP2000-110, pp.91--96, Dec. 2000.
{ Language Model, Spoken Dialogue System, Dialogue Model }

{SP2000-111} K. Yasuda, F. Sugaya, T. Takezawa, S. Yamamoto and M. Yanagida,
``An automatic evaluation method of translation capability by DP matching using similar expressions queried from a parallel corpus,''
IEICE, SP2000-111, pp.97--102, Dec. 2000.
{ Speech translation system, Translation system, Automatic evaluation of translation, Parallel corpus, DP matching }

{SP2000-112} S. Isogai, K. Shirai, H. Yamamoto and Y. Sagisaka,
``The efficient method of automatic clustering for Multi-Class Trigrams,''
IEICE, SP2000-112, pp.103--108, Dec. 2000.
{ Class N-gram, Automatic Clustering, Multi-Class N-gram, DUAME Language Modeling }

{SP2000-113} O. Iwase, T. Morimoto and K. Shudo,
``Incorporating Collocation Data into a Statistical Language Model,''
IEICE, SP2000-113, pp.109--114, Dec. 2000.
{ collocation data, statistical language model, test-set perplexity }

{SP2000-114} H. Kashima and T. Kawahara,
``Speech Understanding Based on Key-Phrase Spotting and Combined Language Models,''
IEICE, SP2000-114, pp.115--120, Dec. 2000.
{ dialogue system, speech understanding, key-phrase spotting, descriptive grammar, word N-gram }

{SP2000-115} J. Hirasawa, N. Miyazaki and K. Aikawa,
``Detection of Misunderstandings in Spoken Dialogue System using System-User Utterance Sequence,''
IEICE, SP2000-115, pp.121--126, Dec. 2000.
{  }

{SP2000-116} C. Hori and S. Furui,
``Improvements in Automatic Speech Summarization Based on Stochastic Dependency Context Free Grammar,''
IEICE, SP2000-116, pp.127--, Dec. 2000.
{ speech summarization, word significance measure, linguistic likelihood, confidence measure, stochastic dependency context free grammar, dynamic programming }

{SP2000-117} Y. Zhang and M. Yanagida,
``Study in Reformation Phenomena of Tone Patterns and Their Rules in Chinese Trisyllabic Words,''
IEICE, SP2000-117, pp.1--8, Jan. 2001.
{ Trisyllabic words, Tone, Co-articulation }

{SP2000-118} H. Yanagase and Y. Yamashita,
``Automatic Scoring of Prosody in English Learner's Speech Based on Utterance Comparison,''
IEICE, SP2000-118, pp.9--16, Jan. 2001.
{ CALL, Prosody, Automatic labeling, Private lesson, F0 contour }

{SP2000-119} K. Tanno and J. Miwa,
``A Scoring System for Pronunciation of Japanese Special Mora on the Internet,''
IEICE, SP2000-119, pp.17--24, Jan. 2001.
{ Japanese learning, Evaluation of speaking skills, Special mora speech, CALL, Internet, Java servlet }

{SP2000-120} H. Sasaki and J. Miwa,
``A Computer Assisted Learning System for Pronunciation of Japanese Word Accent on the Internet,''
IEICE, SP2000-120, pp.25--32, Jan. 2001.
{ Japanese pronunciation education, Word accent type, Fundamental frequency, CALL, Internet, Java servlet }

{SP2000-121} C. T. Ishi, R. Nishide, N. Minematsu and K. Hirose,
``On the construction of a CALL system to train Japanese accent and intonation,''
IEICE, SP2000-121, pp.33--40, Jan. 2001.
{ Accent types, Intonation, F0 target, Pronunciation teaching systems }

{SP2000-122} N. Yoshioka, D. Nagahata, M. Yanagida and I. Nakayama,
``Differences among Vowels used in Noh, Kyogen and European Classical Singing,''
IEICE, SP2000-122, pp.1--8, Jan. 2001.
{ Noh, Kyogen, European Classical Singing, vowel, LPC cepstrum, Bel Canto }

{SP2000-123} Y. Yahata and K. Yamaguchi,
``Speaker Independent Speech Recognition Using Speaker Clustering Based on Vocal Tract Length,''
IEICE, SP2000-123, pp.9--16, Jan. 2001.
{ speech recognition, HMM, vocal tract length, maximum-likelihood estimation, speaker clustering }

{SP2000-124} T. Shimizu, H. Yoshimura, T. Namiki, T. Yamaguchi, N. Isu and H. Sugata,
``An improvement on the clearness of explosive consonants for LSP-VCV speech synthesis,''
IEICE, SP2000-124, pp.17--24, Jan. 2001.
{ Speech synthesis by rule, LSP analysis, VCV unit, Residual signal, Explosive consonant }

{SP2000-125} Y. Tsubota, M. Dantsuji and T. Kawahara,
``English Pronunciation Instruction System using Pair-wise Classifiers between Japanese Error Patterns,''
IEICE, SP2000-125, pp.25--32, Jan. 2001.
{ CALL, speech recognition, Pairwise discriminant analysis, error patterns of Japanese learners, articulatory instruction }

{SP2000-126} S. Kobayashi, T. Tanaka, K. Mori and S. Nakagawa,
``Automatic Construction of CALL materials from TV News Programs with Captions,''
IEICE, SP2000-126, pp.33--40, Jan. 2001.
{ CALL, Language learning material, Listening, TV-News, Caption }

{SP2000-127} N. Nakamura, N. Minematsu and S. Nakagawa,
``Automatic Estimation of Pronunciation Habits Using a Single Word Utterance with HMM,''
IEICE, SP2000-127, pp.41--48, Jan. 2001.
{ Word stress, Pronunciation habit, Stressed syllable detection, HMM, English CAI, Triangular representation }

{SP2000-128} F. Sakaguchi, J. Ogata and Y. Ariki,
``Pronunciation Evaluation and Mistake Detection of Spoken Word in English Learning,''
IEICE, SP2000-128, pp.49--56, Jan. 2001.
{ CALL, pronunciation evaluation, word mistakes, forced alignment }

{SP2000-129} D. Honda, Y. Tomiyama, A. Motoki, M. Shimizu and M. Dantsuji,
``Research on an English Composition CALL System for Junior High School Students with Vocal Output,''
IEICE, SP2000-129, pp.57--64, Jan. 2001.
{ CALL, English composition, interactive learning assistance, case frame, semantic feature, vocal output }

{SP2000-130} I. Nakayama,
``An introduction to vocal samples sung in Japanese traditional and western classical-style singing, using a common verse,''
IEICE, SP2000-130, pp.1--4, Feb. 2001.
{ singing in Japanese, common verse, Japanese traditional singing, western classical-style singing, vocal expressions, vocal samples }

{SP2000-131} K. Arimoto and S. Yoshikawa,
``Correlation Between Velocity Profile and Disturbance-Wave Amplification Factor of the Jet in Air-Reed Instruments,''
IEICE, SP2000-131, pp.5--11, Feb. 2001.
{ Air-Reed, Jet, Flue, Velocity Profile, Disturbance }

{SP2000-132} S. Yoshikawa and Y. Muto,
``WAVE PROPAGATION ON THE HORN PLAYER'S UPPER LIP AND ITS MODELING,''
IEICE, SP2000-132, pp.13--18, Feb. 2001.
{ brass instruments, lip wave, lip opening, surface wave, shear modulus }

{SP2000-133} K. Hirata and R. Hiraga,
``Ha-Hi-Hun: Incremental performance synthesis system based on 2-stage performance rendering method,''
IEICE, SP2000-133, pp.19--26, Feb. 2001.
{ Music, Performance Rendering, Performance Synthesis, Knowledge Representation, Music Theory, Case-Based Reasoning }

{SP2000-134} M. Muto, I. Handa, H. Hibi, S. Sakai and H. Tanaka,
``Musical Morphing by Parallel Processing of Musical Components,''
IEICE, SP2000-134, pp.27--34, Feb. 2001.
{ Musical Morphing, Psychological Similarity, Multidimensional Scaling, Neural Networks, Song Synthesis }

{SP2000-135} T. Hikichi, N. Osaka and F. Itakura,
``Physical Modeling of the Sho,''
IEICE, SP2000-135, pp.35--42, Feb. 2001.
{ sho, physical model, free-reeds, simulation, outward-striking oscillation }

{SP2000-136} M. Kato, A. Nishimura and Y. Ando,
``Frequency measurements of flute tones by analytic signal,''
IEICE, SP2000-136, pp.43--50, Feb. 2001.
{ analytic signal, musical tone analysis, frequency modulation, amplitude modulation, noise component }

{SP2000-137} A. Sato, J. Ogawa, Y. Horino and H. Kitakami,
``A Discussion about the Realization of Impression-based Retrieval System for Music Collection,''
IEICE, SP2000-137, pp.51--56, Feb. 2001.
{ Database, Impression-based retrieval system, Music information }

{SP2000-138} H. Hashiguchi, T. Nishimura, H. Yabe, T. Akasaka and R. Oka,
``A Study for integrating melody and phone retrieval functions,''
IEICE, SP2000-138, pp.57--62, Feb. 2001.
{ music retrieval, extraction of music scale, phoneme recognition, Continuous Dynamic Programming }

{SP2000-139} T. Nishimura, J. Takita, M. Goto and R. Oka,
``Speedup of a Time-sequence Retrieval Method For Musical Audio Signals by Integrating Similar Melodic Intervals,''
IEICE, SP2000-139, pp.63--70, Feb. 2001.
{ Time-Sequence Pattern Retrieval, Music Retrieval, Similar Interval Retrieval }

{SP2000-140} H. Imagawa, K. Sakakibara, T. Konishi, E. Z. Murano and S. Niimi,
``Throat Singing Synthesis by a Laryngeal Voice Model Based on Vocal Fold and False Vocal Fold Vibrations,''
IEICE, SP2000-140, pp.71--78, Feb. 2001.
{ Throat singing, Vocal fold vibration, False vocal fold, Inverse filtering, Laryngeal voice model, Self-oscillating model }

{SP2000-141} K. Kawata and S. Iwamiya,
``Survey on the sound environment of supermarket,''
IEICE, SP2000-141, pp.79--86, Feb. 2001.
{ Supermarket, Sound Environment, Questionnaire Survey, Muzak, Public Address, Cluster Analysis }

{SP2000-142} I. Handa, M. Muto, H. Hibi, S. Sakai and H. Tanaka,
``Required components for man-machine music transcription system,''
IEICE, SP2000-142, pp.1--6, Feb. 2001.
{ music transcription, man-machine system, interface }

{SP2000-143} T. Matsushima, K. Tsuboi and S. Simura,
``A Support System for Shakuhachi Tablature Production and Publication,''
IEICE, SP2000-143, pp.7--14, Feb. 2001.
{ Shakuhachi Tablature, Desktop Music System, Character Code, Music Description Language, Japanese Traditional Music }

{SP2000-144} K. Joe, K. Horio and K. Matunaga,
``The control of time base information using BUI (Breathed User Interface),''
IEICE, SP2000-144, pp.15--18, Feb. 2001.
{ BUI (Breathed User Interface), time base information, vane wheel, exhalation, MAX/msp, interaction }

{SP2000-145} W. Nakagawa, K. Kurakawa and K. Nakakoji,
``CAPADY: An Interactive System based on a Cognitive Model of Musical Composition,''
IEICE, SP2000-145, pp.19--26, Feb. 2001.
{ human computer interaction, music data processing, composition support, expressive feature production }

{SP2000-146} K. Fukui, Y. Horiuchi and A. Ichikawa,
``The Accompaniment System with Feedback,''
IEICE, SP2000-146, pp.27--34, Feb. 2001.
{  }

{SP2000-147} J. Murase and M. Nakanishi,
``Sound Source Identification Process of Polyphony using Neural Network,''
IEICE, SP2000-147, pp.35--42, Feb. 2001.
{ Music Information Processing, Sound Source Identification, Neural Network, Harmonic Structure }

{SP2000-148} H. Kawahara and H. Katayose,
``Scat generation system based on STRAIGHT: a versatile speech analysis, modification and synthesis system,''
IEICE, SP2000-148, pp.43--50, Feb. 2001.
{ fundamental frequency, analysis and resynthesis, voicing, hearing, music }

{SP2000-149} K. Nishigaki, J. Dang and K. Honda,
``Application of the inverse estimation method with a physiological articulatory model to analyze speech style,''
IEICE, SP2000-149, pp.51--58, Feb. 2001.
{ inverse estimation, Physiological articulatory model, Speech style, Speech production, Speech synthesis }

{SP2000-150} S. Takano and K. Honda,
``Muscle length measurement during vowel production based on Magnetic Resonance Images,''
IEICE, SP2000-150, pp.59--66, Feb. 2001.
{ vowel production, muscle length measurement, tongue, MRI }

{SP2000-151} K. Ishizuka and K. Aikawa,
``Effect of Spectral Structure of Noises on Noisy Vowel Perception,''
IEICE, SP2000-151, pp.67--72, Feb. 2001.
{ noisy environment, phoneme perception, acoustical feature, spectral structure, hearing experiment }

{SP2000-152} H. Yasuki and S. Iwamiya,
``Feasibility of Detecting Deception through Voice Analysis,''
IEICE, SP2000-152, pp.73--77, Feb. 2001.
{ Criminal Investigation, Detecting Deception, Voice Pitch, Response Time }

{SP2000-153} T. Kitamura, N. Suzuki, H. Saito, K. Michi, T. Takahashi, M. Akagi and M. Wakumoto,
``Three-dimensional analysis of vocal tract using MRI: Cases with tongue and mouth floor resection,''
IEICE, SP2000-153, pp.1--6, Mar. 2001.
{ MRI, 3D vocal tract shape, tongue and mouth floor resection, asymmetry }

{SP2000-154} H. Nishimoto, M. Akagi, T. Kitamura, N. Suzuki, H. Saito, K. Michi and T. Takahashi,
``FEM analyses of Three-Dimensional Vocal Tract Models After Tongue and Mouth Floor Resection,''
IEICE, SP2000-154, pp.7--14, Mar. 2001.
{ Disease of Oral Cavity, MRI, 3-D Vocal Tract Model, FEM, Transfer Function of Vocal Tract }

{SP2000-155} I. Furuya, K. Mori, Y. Minagawa-Kawai and R. Hayashi,
``Cerebral Lateralization of Speech Processing in Infants Measured by Near-Infrared Spectroscopy,''
IEICE, SP2000-155, pp.15--20, Mar. 2001.
{ Near-infrared spectroscopy(NIRS), Non-invasive functional brain mapping, Phoneme contrast, Pitch contrast, Infants, Auditory cortex }

{SP2000-156} R. Hayashi, S. Imaizumi, S. Niimi, K. Mori and S. Ueno,
``Neural processes for pitch-accent and intonation,''
IEICE, SP2000-156, pp.21--28, Mar. 2001.
{ prosody, pitch-accent, intonation, MEG }

{SP2000-157} T. Ito, S. Ise and H. Ishida,
``Pinnae's contribution to the head-related transfer function (HRTF),''
IEICE, SP2000-157, pp.29--36, Mar. 2001.
{ head related transfer function, spatial perception, sound localization, localization cue, first notch }

{SP2000-158} H. Sawada and K. Kakehi,
``Adaptation of Utterance Behavior to Delayed Auditory Feedback,''
IEICE, SP2000-158, pp.37--42, Mar. 2001.
{ delayed auditory feedback, speech, stuttering, utterance latency }

{SP2000-159} Y. Sato and K. Kakehi,
``Characteristics of Eye Movement in Oral Reading under the Delayed Auditory Feedback (DAF) Condition,''
IEICE, SP2000-159, pp.43--49, Mar. 2001.
{ eye movements, DAF (delayed auditory feedback), artificial stuttering, oral reading }

{SP2000-160} T. Tanaka, T. Masuko and T. Kobayashi,
``A Study on Pitch Extraction Technique Based on Instantaneous Frequency Amplitude Spectrum,''
IEICE, SP2000-160, pp.1--8, Mar. 2001.
{ instantaneous frequency, harmonic components, harmonic structure index, band selection, variable window length, fundamental frequency }

{SP2000-161} N. Minematsu, K. Tsuda and K. Hirose,
``Quantitative analysis of correlation between spectral envelopes of Japanese speech and their fundamental frequencies,''
IEICE, SP2000-161, pp.9--16, Mar. 2001.
{ spectral envelope, fundamental frequency, articulation effects, voiced consonant, unvoiced consonant, multivariate regression analysis }

{SP2000-162} T. Shimizu, H. Yoshimura, T. Namiki, N. Isu and H. Sugata,
``Analysis of Japanese vowel by using sandglass type neural network,''
IEICE, SP2000-162, pp.17--24, Mar. 2001.
{ sandglass type neural network, LSP analysis, vowel, formant }

{SP2000-163} K. Kondo, R. Izumi and K. Nakagawa,
``Initial Evaluation of a Novel Japanese Intelligibility Test,''
IEICE, SP2000-163, pp.25--32, Mar. 2001.
{ subjective speech evaluation, intelligibility, phonetic features, minimal-pair rhyme test, familiarity }

{SP2000-164} S. Takeda, Y. Nishizawa and Ghen Ohyama,
``Some Considerations of Prosodic Features of "Anger" Expressions,''
IEICE, SP2000-164, pp.33--40, Mar. 2001.
{ Speech, Emotion, Prosody, Anger, Speech Rate, Fundamental Frequency }

{SP2000-165} M. Kasamatsu, T. Nishimoto, M. Araki and Y. Niimi,
``Synthesizing an emotional voice using Prosodic-Balanced VCV Database,''
IEICE, SP2000-165, pp.41--46, Mar. 2001.
{ Emotional voice, Speech synthesis, TD-PSOLA, Prosodic-balanced database }

{SP2000-166} K. Sakuraba, S. Imaizumi, K. Kakehi and D. Erickson,
``Phonetic Constraints of Japanese and English Emotional Expressions in Children: Acoustic Analysis of /pikachu/ in Japanese and English,''
IEICE, SP2000-166, pp.47--54, Mar. 2001.
{ Cultural difference, Vocal Expression, Emotion, Acoustic Analysis, Phonetic Constraints }

{SP2000-167} K. Kanamori and N. Suto,
``The Cross-Dimensional Interference Effect in same or different judgement of Complex Tones ---The Dependency on the Harmonic Components in Rating the Similarity of Timbre---,''
IEICE, SP2000-167, pp.55--61, Mar. 2001.
{ Timbre, Pitch, Similarity, Harmonic Components, Interference effect }

{SP2000-168} A. Tanaka,
``Interaction between segmental and pitch information in short-term memory of auditory stimuli,''
IEICE, SP2000-168, pp.63--70, Mar. 2001.
{ pitch, working memory, rehearsal, prosody, melody }