SP Subject Index 1999

{SP99-1} R. Mochizuki, H. Nishimura, T. Minowa and Y. Arai,
``A Construction Method of VCV-Segment Database for Waveform Concatenation Synthesis,''
IEICE Technical Report, SP99-1, pp.1--8, May 1999.
{ Waveform concatenation synthesis, VCV-segment database, Target pitch pattern, Pitch modification }

{SP99-2} N. Mizusawa, J. Murakami and M. Higashida,
``Simple Word Synthesis by Concatenating Syllabic Components based on Positional Features with Mora Length,''
IEICE Technical Report, SP99-2, pp.9--16, May 1999.
{ word synthesis, slot filling method, syllable, prosodic features, mora position, mora length }

{SP99-3} J. Chen and N. Campbell,
``Speech Synthesis Evaluation by Objective Distance measures,''
IEICE Technical Report, SP99-3, pp.17--22, May 1999.
{ speech synthesis, concatenation, evaluation, objective measures, perceptual distances }

{SP99-4} Y. Meron and K. Hirose,
``Weight Training for Selection Based Synthesis,''
IEICE Technical Report, SP99-4, pp.23--30, May 1999.
{ Speech synthesis, selection synthesis, parameter estimation, regression training }

{SP99-5} S. Takano and M. Abe,
``A New Speech Synthesis Method based on Vocoder Preserving Fine Structure of Magnitude Spectrum,''
IEICE Technical Report, SP99-5, pp.31--38, May 1999.
{ text-to-speech synthesis, harmonics modification, fundamental frequency, vocoder }

{SP99-6} M. Matsuda and H. Kasuya,
``Acoustic Property and Synthesis of Whispery Voice,''
IEICE Technical Report, SP99-6, pp.39--46, May 1999.
{ whisper, formant frequency, laryngeal endoscope, MRI, electric circuit model, voice synthesis }

{SP99-7} M. Abe, H. Mizuno, O. Mizuno, Y. Noda and S. Nakajima,
``Sesign99: A speech design tool,''
IEICE Technical Report, SP99-7, pp.47--52, May 1999.
{ Speech synthesis, prosodic parameters, Multimedia }

{SP99-8} Y. Shiraki,
``The Yang-Mills equations and Speaker adaptation,''
IEICE Technical Report, SP99-8, pp.53--60, May 1999.
{ Interpolation, the Yang-Mills equation, moduli, Dirac operator, flat connection, dynamic measure, locally-linear interpolation, Laplacian, IFIS }

{SP99-9} H. Kawanami and K. Hirose,
``Analysis and Synthesis of Speech Rate Control in Japanese Dialogue speech based on Prosodic Structure,''
IEICE Technical Report, SP99-9, pp.1--8, May 1999.
{ dialogue speech, speech rate, prosodic structure, prosodic rule, speech synthesis }

{SP99-10} K. Maekawa and N. Kitagawa,
``Production and Perception of the Paralinguistic Information in Speech - A Multidimensional Scaling Analysis -,''
IEICE Technical Report, SP99-10, pp.9--16, May 1999.
{ Paralinguistic information, Identification experiment, Similarity, Multidimensional scaling, Multiple regression analysis }

{SP99-11} S. Kitagawa and N. Campbell,
``The relation of prosodic characteristics to focal prominence in Japanese read speech,''
IEICE Technical Report, SP99-11, pp.17--22, May 1999.
{ pitch, duration, focus extraction, auto extraction }

{SP99-12} N. Minematsu, Y. Fujisawa and S. Nakagawa,
``Prosodic Evaluation of English words Based upon Estimating Learners' Pronunciation Habits,''
IEICE Technical Report, SP99-12, pp.23--30, May 1999.
{ pronunciation habit, prosodic evaluation, stressed/unstressed syllable HMM, optimal weight, CAI for language learning }

{SP99-13} K. Nomura, T. Kawahara and S. Doshita,
``Segmentation of Lecture Speech using Prosodic Information,''
IEICE Technical Report, SP99-13, pp.31--38, May 1999.
{ segmentation, prosodic information, F0, sentence boundaries, filled pause }

{SP99-14} M. Kamiya and H. Ariizumi,
``Deriving of the fundamental frequency contour generation process model which uses acceleration change control,''
IEICE Technical Report, SP99-14, pp.39--46, May 1999.
{ Rule synthesis, F0 pattern control, Mechanics model of larynx }

{SP99-15} A. Fujino, T. Kaburagi, M. Honda, E. Murano and S. Niimi,
``Analysis of timing between articulatory and glottal motions in voiceless consonant production,''
IEICE Technical Report, SP99-15, pp.47--54, May 1999.
{ Voiceless plosive, Voiceless fricative, Motion timing, Vocal-tract closure/constriction, Glottal opening }

{SP99-16} T. Kaburagi, M. Honda and T. Okadome,
``Trajectory formation of articulatory movements using multidimensional invariant-feature tasks,''
IEICE Technical Report, SP99-16, pp.55--62, May 1999.
{ Articulatory movement, Coarticulation, Invariant feature, Trajectory formation, Motor task }

{SP99-17} K. Ogata, M. Yamaguchi and Y. Sonoda,
``Articulatory modeling based on cascaded first-order systems and study of articulatory characteristics,''
IEICE Technical Report, SP99-17, pp.63--70, May 1999.
{ articulatory movements, articulatory model, cascaded first-order systems, timing, speaking rate, temporal properties of speech }

{SP99-18} M. Yamashita and M. Sugiyama,
``Evaluation of Computer Security System using Speaker Recognition,''
IEICE Technical Report, SP99-18, pp.1--8, June 1999.
{ Speaker recognition, Computer security }

{SP99-19} M. Nishida and Y. Ariki,
``Speaker Verification based on Subspace Method Obtained from Gaussian Distribution,''
IEICE Technical Report, SP99-19, pp.9--16, June 1999.
{ speaker verification, speaker subspace, subspace method, Bayes discriminant, weighted Mahalanobis distance }

{SP99-20} T. Sasaki,
``The emotion analysis algorithm with the vocal-analysis software `Truster','' 
IEICE Technical Report, SP99-20, pp.17--23, June 1999.
{ voice recognition, psychological analysis, emotional-analysis, vocal-analysis }

{SP99-21} M. Kumagai and J. Miwa,
``Public Analysis System for Speech Using Techniques of E-mail and Web,''
IEICE Technical Report, SP99-21, pp.25--32, June 1999.
{ Speech Analysis, Internet, Public Use, E-mail, Web, Language Learning }

{SP99-22} T. Uchida and M. Sugiyama,
``Automatic Sampling Frequency Determination for Speech Data,''
IEICE Technical Report, SP99-22, pp.33--40, June 1999.
{ Sampling frequency, LPC cepstrum, Voice retrieval, Pattern recognition }

{SP99-23} K. Kashino and H. Murase,
``Quick AND/OR Search of Audio Signals Using Time-Series Active Search,''
IEICE Technical Report, SP99-23, pp.41--48, June 1999.
{ audio search, audio retrieval, pruning, time-series, active search }

{SP99-24} S. Makino,
``Mechanism of automatic speech recognition and understanding system,''
IEICE Technical Report, SP99-24, pp.49--56, June 1999.
{ speech recognition, speech understanding, current status, problems, future trends }

{SP99-25} M. Sugiyama,
``Retrieval of Acoustic Information,''
IEICE Technical Report, SP99-25, pp.57--64, June 1999.
{ Acoustic information, Multimedia processing, Acoustic segment, Searching methods }

{SP99-26} Y. Imai, N. Inoue, K. Hashimoto and M. Yoneyama,
``Detection Method of Repeated Speech for Unknown Words Processing,''
IEICE Technical Report, SP99-26, pp.1--6, June 1999.
{ speech input, information retrieval, unknown words detection, repeated speech } 

{SP99-27} K. Watanabe and M. Sugiyama,
``Time Alignment between Caption and Acoustic Signal for Automatic Caption Generation,''
IEICE Technical Report, SP99-27, pp.7--14, June 1999.
{ Video data, Caption, Time alignment, DP matching }

{SP99-28} M. Kondo, K. Takeda and F. Itakura,
``A waveform measure for predicting accuracy deterioration in noisy speech recognition,''
IEICE Technical Report, SP99-28, pp.15--20, June 1999.
{ speech recognition, speech quality, dynamic range }

{SP99-29} H. Singer,
``Unified Framework for Acoustic Topology Modelling: ML-SSS and Question-Based Decision Trees,''
IEICE Technical Report, SP99-29, pp.21--26, June 1999.
{ speech recognition, acoustic model, context-dependency, decision tree }

{SP99-30} S. Sato, T. Imai and A. Ando,
``Automatic Selection of Training Utterances to Improve an Acoustic Model,''
IEICE Technical Report, SP99-30, pp.27--32, June 1999.
{ Broadcast news, caption, speech recognition, acoustic model, database, HMM, training }

{SP99-31} K. Yamamoto, N. Iwai and S. Nakagawa,
``A Study on Differences of Speaking Style and Speech Recognition Performance,''
IEICE Technical Report, SP99-31, pp.33--40, June 1999.
{ speech recognition, dialogue speech, speaking style, speech rate, acoustic distance among phonemes }

{SP99-32} R. Nisimura, S. Kajita, K. Takeda, F. Itakura and K. Shikano,
``Development of Speech Input System for Web-based applications,''
IEICE Technical Report, SP99-32, pp.41--48, June 1999.
{ internet, WWW, Speech input system, Speech recognition and analyzing system }

{SP99-33} S. Nakagawa, N. Iwai and K. Yamamoto,
``Speech Recognition of News Speech Using Speaker Identification /Verification Techniques,''
IEICE Technical Report, SP99-33, pp.49--56, June 1999.
{ Speech Recognition, News speech, Speaker Adaptation, Speaker Recognition, Identification of speaker change }

{SP99-34} J. Ogata, S. Takao, K. Hasegawa and Y. Ariki,
``A Study on English-Japanese Mutual Retrieval for Broadcast News Articles,''
IEICE Technical Report, SP99-34, pp.57--64, June 1999.
{ LVCSR, article retrieval, word importance }

{SP99-35} K. Aoyama, M. Kawamori, M. Tamoto and K. Aikawa,
``Effect of Visual Information on Multimodal Dialogue,''
IEICE Technical Report, SP99-35, pp.65--71, June 1999.
{ Multimodal Dialogue, Visual Information, System's Response, Human Interface }

{SP99-36} H. Yamamoto and Y. Sagisaka,
``PART-OF-SPEECH N-GRAM AND WORD N-GRAM FUSED LANGUAGE MODEL,''
IEICE Technical Report, SP99-36, pp.73--78, June 1999.
{ MAP Estimation, Class N-gram, Automatic Clustering }

{SP99-37} C. Hori, M. Katoh, A. Ito and M. Kohda,
``Construction and Evaluation of Language Models Based on Stochastic Context Free Grammar for Speech Recognition,''
IEICE Technical Report, SP99-37, pp.79--86, June 1999.
{ speech recognition, language model, Stochastic Context Free Grammar, Dependency grammar, Inside-Outside algorithm }

{SP99-38} M. Utiyama and H. Matsumoto,
``Statistical Language Model Incorporating Kana-Characters and Phrases,''
IEICE Technical Report, SP99-38, pp.87--94, June 1999.
{ statistical language model, speech recognition, out-of-vocabulary word }

{SP99-39} A. Ito, M. Kohda and M. Ostendorf,
``A metric based on likelihood difference for n-gram language model evaluation,''
IEICE Technical Report, SP99-39, pp.95--102, June 1999.
{ stochastic language model, language model evaluation, perplexity, mixture language model }

{SP99-40} H. Kawahara, P. Zolfaghari, A. de Cheveigne and R. Patterson,
``Source Information Extraction using Fixed Points of a Frequency to Instantaneous Frequency Map,''
IEICE Technical Report, SP99-40, pp.1--8, July 1999.
{ instantaneous frequency, fundamental frequency, fixed points, carrier to noise ratio } 

{SP99-41} N. Ono and S. Ando,
``Harmonics Extraction Using Differences of Instantaneous Frequency Between Neighboring Subbands,''
IEICE Technical Report, SP99-41, pp.9--16, July 1999.
{ instantaneous frequency, subbands, harmonic signals }

{SP99-42} K. Matsumoto and T. Funada,
``Word recognition using a filter for extraction of power spectrum derivative,''
IEICE Technical Report, SP99-42, pp.17--22, July 1999.
{ spectrum derivative, FTTSS, threshold operation, robustness }

{SP99-43} K. Yamamoto and S. Nakagawa,
``A Study on Speech Recognition Unit Based on Speech Perceptual Experiments,''
IEICE Technical Report, SP99-43, pp.23--30, July 1999.
{ speech recognition, speech perception, recognition unit, syllable, triphone }

{SP99-44} Y. Hayashi and K. Sekiyama,
``Development of Phonetic Categories: A test with degraded speech,''
IEICE Technical Report, SP99-44, pp.31--38, July 1999.
{ phonetic category, speech perception, development, intelligibility }

{SP99-45} R. Komaki, R. Yamada and Y. Choi,
``Effects of native language on the perception of American English /r/ and /l/: Cross-language comparison between Korean and Japanese,''
IEICE Technical Report, SP99-45, pp.39--46, July 1999.
{ speech perception, second language, speech learning, perceptual assimilation, cross-language comparison }

{SP99-46} M. Kitahara,
``Vowel Devoicing and the Function of Pitch Accent,''
IEICE Technical Report, SP99-46, pp.47--51, July 1999.
{ pitch accent, vowel devoicing, pitch raising, accent shift, function of accent }

{SP99-47} H. Sawada,
``The Effects of the Length of Sentence and Articulation of Initial Mora on Latency for Reading Aloud from Memory - As Factors of Stuttering -,''
IEICE Technical Report, SP99-47, pp.53--60, July 1999.
{ stuttering, pronouncing latency, articulation }

{SP99-48} E. Moreton and S. Amano,
``The Effect of Lexical Stratum Phonotactics on the Perception of Japanese Vowel Length,''
IEICE Technical Report, SP99-48, pp.1--8, July 1999.
{ phonotactics, lexical stratum, PEST, vowel boundary, perception }

{SP99-49} K. Ueda,
``Short-term auditory memory interference: Dual task load and streaming,''
IEICE Technical Report, SP99-49, pp.9--14, July 1999.
{ pitch recognition, serial recall, STRAIGHT resynthesized speech, dual task load, streaming }

{SP99-50} A. Callan and S. Masaki,
``Effects of visual character and vowel sound presentation on vowel perception and production,''
IEICE Technical Report, SP99-50, pp.15--22, July 1999.
{ multi-modal information processing, speech perception/production, reaction time }

{SP99-51} K. Sekiyama,
``Audiovisual speech perception under various signal-to-noise ratios: Asymmetry for different places of articulation,''
IEICE Technical Report, SP99-51, pp.23--30, July 1999.
{ audiovisual speech perception, the McGurk effect, signal-to-noise ratio, place of articulation }

{SP99-52} S. Toriyama, K. Manabe and H. Riquimaroux,
``Discrimination of coo sounds by Japanese monkey,''
IEICE Technical Report, SP99-52, pp.31--38, July 1999.
{ coo sounds, pitch perception, categorical perception, Japanese monkey }

{SP99-53} H. Habibzadeh and S. Kitazawa,
``Meddis IHC model based multi-channel auditory models setup and inversion,''
IEICE Technical Report, SP99-53, pp.39--45, July 1999.
{ Meddis model, Inner hair cell, Auditory model, Filter bank, Half wave rectifier, Reverse auditory model } 

{SP99-54} K. Ito and M. Akagi,
``Effect of temporally fluctuated impulses on the detection of the interaural time difference,''
IEICE Technical Report, SP99-54, pp.47--52, July 1999.
{ ITD, coincidence detector circuit model, synaptic transmission, nervous impulse, temporal fluctuation }

{SP99-55} M. Sakamoto and T. Saitoh,
``An automatic pitch-marking method using a wavelet transform,''
IEICE Technical Report, SP99-55, pp.1--8, Aug. 1999.
{ pitch marking, wavelet transform, glottal closure instant, dynamic programming }

{SP99-56} M. Sakamoto and M. Yamada,
``A sound source segregation method using the harmonic structure of the human voice,''
IEICE Technical Report, SP99-56, pp.9--16, Aug. 1999.
{ sound segregation, Gabor wavelet, wavelet transform, auditory scene analysis }

{SP99-57} T. Fukabayashi and O. Nishimoto,
``Feature Extraction by Nonlinear Processing of Spectral Envelope,''
IEICE Technical Report, SP99-57, pp.17--24, Aug. 1999.
{ Nonlinear processing, Valley-cepstrum, Dynamic feature, Δ-coefficients }

{SP99-58} T. Takara, T. Kuniyoshi, M. Kinoshita, K. Izumi and I. Nagayama,
``Study on pitch pattern and spectrum data to synthesize singing voice using cepstrum method,''
IEICE Technical Report, SP99-58, pp.25--31, Aug. 1999.
{ Singing voice, Synthesis, Cepstrum, Pitch pattern }

{SP99-59} T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi and T. Kitamura,
``Simultaneous Modeling of Spectrum, Pitch and State Duration in HMM-based Speech Synthesis,''
IEICE Technical Report, SP99-59, pp.33--38, Aug. 1999.
{ HMM, text-to-speech synthesis, mel-cepstrum, prosodic model, context clustering }

{SP99-60} T. Takara,
``Speech Recognition and Selection of the Structure of the Hidden Markov Model Using the Genetic Algorithm,''
IEICE Technical Report, SP99-60, pp.39--46, Aug. 1999.
{ Genetic algorithm, Hidden Markov model, Automatic speech recognition, Structure, Optimization }

{SP99-61} K. Tokuda,
``SPEECH SYNTHESIS BASED ON HIDDEN MARKOV MODELS,''
IEICE Technical Report, SP99-61, pp.47--54, Aug. 1999.
{ speech synthesis, hidden Markov model, dynamic feature, parameter generation }

{SP99-62} C. Miyajima, H. Watanabe, K. Tokuda, T. Kitamura and S. Katagiri,
``Speaker Recognition Based on Discriminative Feature Extraction -- Optimization of Mel-Cepstral Features,''
IEICE Technical Report, SP99-62, pp.1--8, Aug. 1999.
{ speaker recognition, discriminative feature extraction, mel-cepstral estimation, 2nd-order all-pass function }

{SP99-63} T. Uchibe, S. Kuroiwa and N. Higuchi,
``The Methods to Determine Threshold for Speaker Verification using Gain in Likelihood during Training Speaker Models,''
IEICE Technical Report, SP99-63, pp.9--14, Aug. 1999.
{ speaker verification, threshold, likelihood normalization, hidden Markov model }

{SP99-64} J. Furuyama and T. Kobayashi,
``Word Recognition using Partly-Hidden Markov Model,''
IEICE Technical Report, SP99-64, pp.15--20, Aug. 1999.
{ acoustic model, HMM, PHMM, word recognition }

{SP99-65} M. Yamaniha, E. Nagaki, T. Takara and I. Nagayama,
``Evolutional Selection of Structure of Continuous HMM Using Crossover and Mutation with Transitions' Set of a State,''
IEICE Technical Report, SP99-65, pp.21--26, Aug. 1999.
{ speech recognition, Continuous HMM, Genetic algorithm, Crossover, Mutation }

{SP99-66} K. Miyai and Y. Yamashita,
``Language Model Based on Clustering Word Histories,''
IEICE Technical Report, SP99-66, pp.27--34, Aug. 1999.
{ language model, clustering, word history, smoothing, HMM }

{SP99-67} Y. Abe, H. Itsui, Y. Maruta and K. Nakajima,
``Large Vocabulary Continuous Speech Recognition Using a Two-Stage Search Method With Statistical Error Modeling,''
IEICE Technical Report, SP99-67, pp.35--42, Aug. 1999.
{ Large vocabulary continuous speech recognition, Multi-pass search, Statistical error model, One-stage DP, Best-first search, Statistical language model }

{SP99-68} N. Murai and T. Kobayashi,
``Dictation of multitalkers' conversation using statistical turn taking model and speaker model,''
IEICE Technical Report, SP99-68, pp.43--48, Aug. 1999.
{ Multitalker, Conversational speech recognition, Stochastic turn taking Model, Stochastic speaker Model, One-pass Algorithm }

{SP99-69} S. Takao, J. Ogata, K. Hasegawa and Y. Ariki,
``English-Japanese Mutual Retrieval for Broadcast News Articles Based on Word Relational Space,''
IEICE Technical Report, SP99-69, pp.49--56, Aug. 1999.
{ LVCSR, article retrieval, word relational space, cross-language information retrieval }

{SP99-70} M. Ida, H. Mori, S. Nakamura and K. Shikano,
``Hands-Free Word Recognition in Real Noise Environment,''
IEICE Technical Report, SP99-70, pp.57--62, Aug. 1999.
{ speech recognition, real environment, hands-free, noise reduction, microphone array, spectral subtraction }

{SP99-71} Y. Okada, T. Nishiura, S. Nakamura, K. Shikano and T. Yamada,
``Speech Recognition using an Adaptive Microphone Array with Weighting of an Average Speech Spectrum,''
IEICE Technical Report, SP99-71, pp.63--68, Aug. 1999.
{ microphone array, AMNOR, speech recognition }

{SP99-72} T. Moriya,
``Principles of Speech and Audio Coding and Standardization,''
IEICE Technical Report, SP99-72, pp.1--6, Sep. 1999.
{ MPEG-4, ITU-T, speech coding, audio coding, adaptive process }

{SP99-73} H. Banno, K. Takeda and F. Itakura,
``A Study on Timbre Perception of Stimuli with Different Short-Time Phase,''
IEICE Technical Report, SP99-73, pp.7--14, Sep. 1999.
{ short-time phase, auditory perception, group delay, zero phase, non-zero phase }

{SP99-74} H. Ehara, K. Yasunaga, K. Yoshida and T. Morii,
``An improved low-bit-rate speech coding based on small number pulse excited CELP,''
IEICE Technical Report, SP99-74, pp.15--21, Sep. 1999.
{ speech coding, CELP, low bit rate, Algebraic codebook, pulse }

{SP99-75} S. Kurita, H. Saruwatari, S. Kajita, K. Takeda and F. Itakura,
``Evaluation of Blind Signal Separation Using Directivity Pattern Under Reverberant Condition,''
IEICE Technical Report, SP99-75, pp.23--28, Sep. 1999.
{ Blind signal separation, Kullback-Leibler divergence, Microphone array, Directivity pattern, Reverberation }

{SP99-76} K. Nishi and S. Ando,
``IIR Comb Filters for Quasi-Periodic Signal Extraction with Robustness to Amplitude and Pitch Fluctuations,''
IEICE Technical Report, SP99-76, pp.29--34, Sep. 1999.
{ quasi-periodic signal, comb filter, Kalman filter, IIR digital filter, constant-Q }

{SP99-77} H. Saruwatari, S. Kajita, K. Takeda and F. Itakura,
``Speech Enhancement using Noise Adaptive Complementary Beamforming,''
IEICE Technical Report, SP99-77, pp.1--8, Sep. 1999.
{ speech enhancement, Microphone array, Complementary beamforming, Noise adaptation, Spectral subtraction }

{SP99-78} Z. Pan, K. Kotani and T. Ohmi,
``Extracting person's speech individually from original records of meeting by speaker identification technique,''
IEICE Technical Report, SP99-78, pp.9--13, Sep. 1999.
{ Extraction, individual's voice, Speech index, Speaker identification }

{SP99-79} Y. Shimizu, S. Kajita, K. Takeda and F. Itakura,
``Speech Recognition Based on Space Diversity Taking Room Acoustics into Account,''
IEICE Technical Report, SP99-79, pp.15--20, Sep. 1999.
{ Speech Recognition, HMM, Room Acoustics, Distributed Multi-Microphone, Space Diversity }

{SP99-80} N. Tokui, K. Nakayama and A. Hirano,
``Stabilization of LMS Algorithm with Preceding Lattice Predictor Based Orthogonal Transformation,''  
IEICE Technical Report, SP99-80, pp.21--28, Sep. 1999.
{ Adaptive filter, Lattice predictor, orthogonal transform, LMS, reflective coefficients, weight coefficients }

{SP99-81} N. Kobayashi, N. Aikawa and M. Sato,
``A Design Method of Low Delay Lowpass FIR Filters with Maximally Flat Characteristics in the Passband and Equiripple Characteristics in the Stopband,''
IEICE Technical Report, SP99-81, pp.29--35, Sep. 1999.
{ FIR filters, Low delay, Maximally flat characteristics in the passband, Equiripple characteristics in the stopband, Remez algorithm }

{SP99-82} H. Sawada, N. Aikawa and M. Sato,
``A Design Method of Complex Coefficients ALL-Pass Filters,''
IEICE Technical Report, SP99-82, pp.37--42, Sep. 1999.
{ ALL-pass filters, complex coefficients, successive projections method, equiripple characteristics }

{SP99-83} A. Kawakami,
``A Decoupling Method for Two-Dimensional Digital Systems,''
IEICE Technical Report, SP99-83, pp.43--48, Sep. 1999.
{ decoupling, multi-input multi-output systems, two-dimensional systems, dynamical feedback }

{SP99-84} S. Yamasaki, A. Asano and H. Tanaka,
``An Iterative Decoding Technique Improving Mobile Multimedia Communication Quality,''
IEICE Technical Report, SP99-84, pp.49--54, Sep. 1999.
{ Mobile multimedia communication, MPEG-4 video, multimedia multiplexing, turbo codes }

{SP99-85} Y. Miyanaga,
``Autonomous-Agent Speech Recognition System and its VLSI System Design,''
IEICE Technical Report, SP99-85, pp.55--60, Sep. 1999.
{ Parallel and Concurrent Speech Processing, Autonomous-agent processing, Robust speech analysis }

{SP99-86} K. Sakuraba, S. Imaizumi and K. Kakehi,
``Acoustic Analysis of Emotional Expression in Children - Vocal Affect Expression in /pikatsu/ -,''
IEICE Technical Report, SP99-86, pp.1--8, Oct. 1999.
{ Preschoolers, School Age Children, Vocal Expression, Emotion, Acoustic Analysis }

{SP99-87} T. Sakaguchi and T. Yoshida,
``Correlations between duration and fundamental frequency of syllables,''
IEICE Technical Report, SP99-87, pp.9--16, Oct. 1999.
{ text-to-speech synthesis, prosodic control, duration, F0 }

{SP99-88} T. Saito and M. Sakamoto,
``A Phonemic and Prosodic Labelling System Using a Text-to-Speech Synthesizer,''
IEICE Technical Report, SP99-88, pp.17--24, Oct. 1999.
{ text-to-speech, speech corpus, phonemic and prosodic labelling }

{SP99-89} H. Higashi and M. Kawamata,
``A Method of Pitch Modification by Speech Synthesis from Short-Time Fourier Transform Magnitude,''
IEICE Technical Report, SP99-89, pp.25--30, Oct. 1999.
{ pitch modification, LSEE-MSTFTM algorithm, sampling rate conversion, STFT magnitude, spectral envelope }

{SP99-90} K. Onoe, H. Segi, S. Sato, T. Imai and A. Ando,
``News speech recognition under various acoustic conditions including field reports,''
IEICE Technical Report, SP99-90, pp.31--36, Oct. 1999.
{ large-vocabulary continuous speech recognition, broadcast news, captioning, acoustic environment }

{SP99-91} N. Seiyama,
``Basis of Speech Rate Conversion Technology,''
IEICE Technical Report, SP99-91, pp.37--44, Oct. 1999.
{ Speech rate conversion, Elderly, Broadcasting service, Practical use, Second language }

{SP99-92} K. Tsutsui,
``An Introduction to Audio Codecs,''
IEICE Technical Report, SP99-92, pp.45--52, Oct. 1999.
{ Waveform Coding, Psychoacoustical Coding, Masking Effect, QMF, PQF, MDCT }

{SP99-93} S. Nakagawa,
``Some Problems on Automatic Speech Recognition,''
IEICE Technical Report, SP99-93, pp.1--6, Dec. 1999.
{ speech recognition, acoustic model, HMM, language model, N-gram }

{SP99-94} M. Nishimura,
``Japanese dictation system for now and the future,''
IEICE Technical Report, SP99-94, pp.7--12, Dec. 1999.
{ dictation, large-vocabulary continuous speech recognition, spontaneous speech, speech understanding }

{SP99-95} Y. Fujisawa, N. Minematsu and S. Nakagawa,
``Comparison between HMM and DP matching in terms of their performance of detecting stressed syllables in English words,''
IEICE Technical Report, SP99-95, pp.13--18, Dec. 1999.
{ HMM, DP matching, judgment of stress patterns' identity, spectrum, power, pitch, duration }

{SP99-96} J. Zhang and K. Hirose,
``CHINESE TONE NUCLEI DETECTION BASED ON SEQUENTIAL CLUSTERING AND LINEAR DISCRIMINANT ANALYSIS,''
IEICE Technical Report, SP99-96, pp.19--24, Dec. 1999.
{ Tone-nucleus, Sequential clustering, linear discriminant analysis, Hypothesis test, Tone recognition }

{SP99-97} T. Nakajima, S. Okawa and K. Shirai,
``Evaluation of Sub-Band Features Based on Information Criterion for Multi-Band Speech Recognition,''
IEICE Technical Report, SP99-97, pp.25--30, Dec. 1999.
{ Multi-Band Speech Recognition, Robust Speech Recognition, Sub-Band Feature, Conditional Entropy }

{SP99-98} S. Matsuda, M. Nakai, H. Shimodaira and S. Sagayama,
``Asynchronous Transition HMM with Sequential Constraints,''
IEICE Technical Report, SP99-98, pp.31--36, Dec. 1999.
{ Speech Recognition, Asynchronous Transition HMM, State Tying along Time, Feature-Wise PEC }

{SP99-99} T. Kato, S. Kuroiwa, T. Shimizu and N. Higuchi,
``A Study on Tree-Based Clustering for Speaker-Independent Gaussian Mixture HMMs,''
IEICE Technical Report, SP99-99, pp.37--42, Dec. 1999.
{ speech recognition, acoustic modeling, triphone, Gaussian mixture, clustering }

{SP99-100} A. Lee, T. Kawahara, K. Takeda and K. Shikano,
``Phonetic Tied-Mixture Model for LVCSR,''
IEICE Technical Report, SP99-100, pp.43--48, Dec. 1999.
{ triphone, tied-mixture, phonetic tied-mixture, Gaussian pruning }

{SP99-101} T. Emori and K. Shinoda,
``Vocal Tract Length Normalization using Rapid Maximum-Likelihood Estimation for Speech Recognition,''
IEICE Technical Report, SP99-101, pp.49--54, Dec. 1999.
{ speech recognition, Hidden Markov Model, speaker normalization, vocal tract length, maximum-likelihood estimation }

{SP99-102} J. Kanou, M. Katoh, A. Ito and M. Kohda,
``A study on MLLR adapted speaker model for speaker verification,''
IEICE Technical Report, SP99-102, pp.55--60, Dec. 1999.
{ speaker verification, MLLR adaptation, regression cluster, Minimum Description Length criterion }

{SP99-103} Z. Pan, K. Kotani and T. Ohmi,
``A nonlinear cepstral compensation method for noisy speech processing,''
IEICE Technical Report, SP99-103, pp.61--65, Dec. 1999.
{ nonlinear average, cepstrum compensation, noisy speech }

{SP99-104} B. Li and K. Hirose,
``Application of Phone Pair Model to Robust Speech Recognition,''
IEICE Technical Report, SP99-104, pp.67--71, Dec. 1999.
{ speech recognition, robust, correlation, speaker adaptation, pair model, recognition system }

{SP99-105} M. Fujimoto and Y. Ariki,
``Noisy Speech Recognition Using Noise Reduction Method Based on Kalman Filter,''
IEICE Technical Report, SP99-105, pp.73--78, Dec. 1999.
{ noisy speech recognition, noise reduction, fast Kalman Filter, real time processing }

{SP99-106} K. Miki, T. Nishiura, S. Nakamura and K. Shikano,
``Environmental Sound Discrimination Based on Hidden Markov Model,''
IEICE Technical Report, SP99-106, pp.79--84, Dec. 1999.
{ environmental sound, speech recognition, HMM }

{SP99-107} P. Heracleous, S. Nakamura and K. Shikano,
``An Improvement to 3-D N-best Search Using Path-Distance Based Clustering for Recognizing Multiple Sound Sources,''
IEICE Technical Report, SP99-107, pp.85--89, Dec. 1999.
{ speech recognition, distant-talking speech, multiple sound sources, microphone array }

{SP99-108} H. Nishizaki and S. Nakagawa,
``A Retrieval Method of Broadcast News Using Voice input Keywords,''
IEICE Technical Report, SP99-108, pp.91--96, Dec. 1999.
{ speech recognition, information retrieval, news speech, keywords, association degree }

{SP99-109} S. Takao, J. Ogata and Y. Ariki,
``Comparison of Retrieval Methods to News Speech,''
IEICE Technical Report, SP99-109, pp.97--102, Dec. 1999.
{ LVCSR, article retrieval, mutual information considering TF-IDF, word space model }

{SP99-110} C. Hori and S. Furui,
``Automatic Speech Summarization Based on Word Significance and Linguistic Likelihood,''
IEICE Technical Report, SP99-110, pp.103--108, Dec. 1999.
{ speech summarization, word significance score, linguistic likelihood, dynamic programming }

{SP99-111} H. Matsumoto,
``Robust Speech Recognition Techniques in Real Environments,''
IEICE Technical Report, SP99-111, pp.109--114, Dec. 1999.
{ Robust speech recognition, HMM, Adaptation, Convolutional noise, Spectral subtraction }

{SP99-112} S. Nakamura,
``Hands-Free Speech Recognition in Real Environments,''
IEICE Technical Report, SP99-112, pp.115--120, Dec. 1999.
{ speech recognition, real environment, hands-free, distant-talking speech, microphone array, model adaptation, HMM composition }

{SP99-113} T. Kobayashi and K. Shirai,
``Multi-Modal Conversational Interface for Humanoid Robot,''
IEICE Technical Report, SP99-113, pp.121--126, Dec. 1999.
{ Spoken dialogue, Conversation robot, Humanoid robot, Group conversation }

{SP99-114} S. Furui,
``Speech Recognition under Ubiquitous/Wearable Computing Environment,''
IEICE Technical Report, SP99-114, pp.127--132, Dec. 1999.
{ ubiquitous computing, wearable computing, speech recognition, speech understanding, human interface }

{SP99-115} M. Haruno,
``Machine Learning Approach to Natural Language Processing,''
IEICE Technical Report, SP99-115, pp.1--6, Dec. 1999.

{SP99-116} H. Iida,
``Linguistic Theory and Language for Communication,''
IEICE Technical Report, SP99-116, pp.7--11, Dec. 1999.
{ Communication, primitive conversation, under-developed utterance, one-word utterance, indirect speech act theory }

{SP99-117} M. Enomoto and S. Tutiya,
``Overlapping and its Statistical Analysis in the Japanese Map Task Dialogue Corpus,''
IEICE Technical Report, SP99-117, pp.13--18, Dec. 1999.
{ Overlap, Utterance function, Conversation, Communication, Dialogue Corpus }

{SP99-118} Y. Kawaguchi and S. Tutiya,
``On those aberrations from the turn-taking rules which can be accounted for by conversational implicature,''
IEICE Technical Report, SP99-118, pp.19--24, Dec. 1999.
{ spoken dialogue, turn, conversational implicature, overlap, co-operative principle, Japanese map task corpus }

{SP99-119} J. Hirasawa, N. Miyazaki, M. Nakano and K. Aikawa,
``Studies on How Users Respond to Misunderstandings of Spoken Dialogue Systems,''
IEICE Technical Report, SP99-119, pp.25--30, Dec. 1999.
{ spoken dialogue system, misunderstanding, error correction, verification utterance }

{SP99-120} T. Kosaka, T. Ueyama, A. Kushida, M. Yamada and Y. Komori,
``Client-server based speech recognition and its fast recognition algorithm using scalar quantization,''
IEICE Technical Report, SP99-120, pp.31--36, Dec. 1999.
{ speech recognition, speech coding, scalar quantization, client-server, fast recognition algorithm, HMM }

{SP99-121} H. Singer, R. Gruhn, M. Naito, H. Tsukada, A. Nishino, A. Nakamura and Y. Sagisaka,
``Speech Translation Anywhere: Client-Server Based ATR-MATRIX,''
IEICE Technical Report, SP99-121, pp.37--41, Dec. 1999.
{ speech translation, speech recognition, system }

{SP99-122} N. Kitaoka, I. Akahori and S. Nakagawa,
``Confidence Measure and Rejection based on Correct Probability of Recognition Candidate,''
IEICE Technical Report, SP99-122, pp.43--48, Dec. 1999.
{ correct probability, confidence measure, likelihood ratio, variance of syllable's durations, rejection }

{SP99-123} K. Tanigaki, H. Yamamoto and Y. Sagisaka,
``CLASS DEPENDENT SUBWORD-MODELS FOR OUT-OF-VOCABULARY WORDS RECOGNITION,''
IEICE Technical Report, SP99-123, pp.49--54, Dec. 1999.
{ out of vocabulary, continuous speech recognition, subword model, proper nouns }

{SP99-124} N. Katoh, N. Uratani, T. Ehara and A. Ando,
``A New Language Model by using (n  4)-gram for Broadcast News Speech Transcription,''
IEICE Technical Report, SP99-124, pp.55--60, Dec. 1999.
{ language model, speech recognition, n-gram, task adaptation, Broadcast news, perplexity }

{SP99-125} K. Hanazawa and S. Sakai,
``Continuous Speech Recognition with Parse Filtering,''
IEICE Technical Report, SP99-125, pp.61--66, Dec. 1999.
{ statistical language models, grammatical knowledge, parsing, rescoring }

{SP99-126} N. Oka, M. Katoh, A. Ito and M. Kohda,
``Study on Large Vocabulary Continuous Speech Recognition with a phoneme graph based hypothesis restriction,''
IEICE Technical Report, SP99-126, pp.67--72, Dec. 1999.
{ LVCSR, hidden Markov network, search strategy, phoneme graph, 1-phoneme look-ahead }

{SP99-127} K. Iwano and K. Hirose,
``Use of Prosodic Word Boundary Information for Unlimited-Vocabulary Speech Recognition,''
IEICE Technical Report, SP99-127, pp.73--78, Dec. 1999.
{ Continuous Speech Recognition, Unlimited-Vocabulary Speech Recognition, Prosodic Information, Prosodic Word Boundary Detection }

{SP99-128} A. Matsui, N. Kato, A. Kobayashi, T. Imai, H. Tanaka and A. Ando,
``Improvement Methods for Broadcast News Transcription System Using the Latest News Manuscripts,''
IEICE Technical Report, SP99-128, pp.79--84, Dec. 1999.
{ broadcast news transcription, news manuscripts }

{SP99-129} T. Imai, A. Kobayashi, S. Sato and A. Ando,
``Broadcast News Transcription System with a Progressive 2-Pass Decoder,''
IEICE Technical Report, SP99-129, pp.85--90, Dec. 1999.
{ speech recognition, broadcast news, search, early decision }

{SP99-130} S. Itahashi, H. Fujisaki, S. Yamamoto, F. Itakura, S. Furui, K. Hirose, H. Tanaka and A. Ichikawa,
``Present State and Future of Large Scale Projects on Speech and Language Processing,''
IEICE Technical Report, SP99-130, pp.91--105, Dec. 1999.
{ speech, language, dialogue, prosody, spoken language }

{SP99-131} Y. Den and S. Masaki,
``Effects of Japanese word accent patterns on speech production,''
IEICE Technical Report, SP99-131, pp.1--5, Jan. 2000.
{ accent pattern of Japanese word, speech production, reaction time, articulatory movement }

{SP99-132} D. Yoshioka, T. Yonezaki, J. Lu, S. Nakamura and K. Shikano,
``Low Bit Rate Speech Coding Based on STRAIGHT Speech Analysis/Synthesis Method,''
IEICE Technical Report, SP99-132, pp.7--12, Jan. 2000.
{ Speech Coding, STRAIGHT, LSP, Multi-Stage VQ }

{SP99-133} T. Nishiura, S. Nakamura and K. Shikano,
``Multiple Beam-forming utilizing Reflection signals in real environments,''
IEICE Technical Report, SP99-133, pp.13--20, Jan. 2000. 
{ real environment, microphone array, multiple beam-forming, reflection sounds }

{SP99-134} M. Shigenaga,
``Characteristic Features of Emotionally Uttered Speech Revealed by Discriminant Analysis (VII) on Open Discrimination,''
IEICE Technical Report, SP99-134, pp.21--28, Jan. 2000.
{ emotion, prosodic features, discrimination of emotions, discriminant analysis }

{SP99-135} K. Okumura, A. Narukawa, Y. Watanabe and M. Yanagida,
``Adaptations of Speech Recognition to Stuttered Utterances,''
IEICE Technical Report, SP99-135, pp.29--36, Jan. 2000.
{ Stuttering, DP matching, Word recognition, Continuous speech recognition }

{SP99-136} S. Dai, T. Hirohku and N. Toyota,
``The Lyapunov Spectrum and the chaotic property in speech sounds,''
IEICE Technical Report, SP99-136, pp.37--43, Jan. 2000.
{ Chaos, Speech signal, The analysis of False Nearest Neighbor, Lyapunov spectrum }

{SP99-137} M. Nishida and Y. Ariki,
``Speaker Verification by Integrating Dynamic and Static Features Using Subspace Method,''
IEICE Technical Report, SP99-137, pp.1--6, Jan. 2000.
{ speaker verification, subspace method, speaker eigenspace, dynamic and static features for speaker }

{SP99-138} S. Yamazaki, M. Tanimoto, H. Shibayama and K. Fukunaga,
``Generation method of study data by using pole in LPC analysis,''
IEICE Technical Report, SP99-138, pp.7--12, Jan. 2000.
{ Haar function, LPC-cepstrum, local peak, Neural network, Study data }

{SP99-139} M. Ihara, T. Akasaka and R. Oka,
``COMPARISON OF FEATURES OF MEL-CEPSTRUM AND SPECTRUM VECTOR FIELDS IN PHONEME RECOGNITION BASED ON THE BAYES DISCRIMINATION FUNCTION,''
IEICE Technical Report, SP99-139, pp.13--20, Jan. 2000.
{ Bayesian, Phoneme Recognition, Voice Recognition, Voice Retrieval }

{SP99-140} M. Mimura and T. Kawahara,
``Acoustic models for dialogue speech recognition,''
IEICE Technical Report, SP99-140, pp.21--28, Jan. 2000.
{ dialogue speech, context-dependent triphone HMM, decision tree clustering }

{SP99-141} M. Takano,
``Automatic Creation of Mixture Structure by Successive State Split,''
IEICE Technical Report, SP99-141, pp.29--34, Jan. 2000.
{ acoustic model, HMM, mixture distribution, SSS, topology }

{SP99-142} J. Ogata and Y. Ariki,
``A Study on Network Structure of Lexical Tree Search,''
IEICE Technical Report, SP99-142, pp.35--40, Jan. 2000.
{ LVCSR, Lexical tree search, back-off bigram, word graph }

{SP99-143} K. Miyai and Y. Yamashita,
``Clustering Word Histories for Language Model,''
IEICE Technical Report, SP99-143, pp.41--48, Jan. 2000.
{ language model, clustering, word history, smoothing }

{SP99-144} S. Zhang, H. Yamamoto and Y. Sagisaka,
``Linkgram Language Modeling,''
IEICE Technical Report, SP99-144, pp.49--54, Jan. 2000.
{ Linkgram, Link Features, Maximum Entropy, Language Modeling }

{SP99-145} Y. Hirose, K. Itou, K. Shikano and S. Nakamura,
``Reading-based Language Model in Japanese Dictation System,''
IEICE Technical Report, SP99-145, pp.55--63, Jan. 2000.
{ Japanese dictation, language model, word coverage, unknown word }

{SP99-146} K. Okada, Y. Horiuchi and A. Ichikawa,
``Metatopic foreseeing and the Synthesis Generation Corresponded with it in Voice Dialogue System,''
IEICE Technical Report, SP99-146, pp.65--72, Jan. 2000.
{ MetaTopic, SuperTopic, Status Transfer Probability Table }

{SP99-147} T. Hashimoto and Y. Umetani,
``Numerical Simulation of Piano Sounds - Toward the Combined Analysis of Strings, Bridge and Soundboard -,''
IEICE Technical Report, SP99-147, pp.1--6, Feb. 2000.

{SP99-148} T. Taguti and Y. Tohnai,
``Acoustical analysis on the sawari tone of Chikuzen biwa,''
IEICE Technical Report, SP99-148, pp.7--14, Feb. 2000.

{SP99-149} A. Miwa and S. Morita,
``Sound Source Separation for a Trio on Active Environment,''
IEICE Technical Report, SP99-149, pp.15--20, Feb. 2000.
{ Sound source separation, Active environment, Stereo music signal }

{SP99-150} I. Handa, T. Kinoshita, M. Muto, S. Sakai and H. Tanaka,
``Intelligent music transcription system with human assistance,''
IEICE Technical Report, SP99-150, pp.21--26, Feb. 2000.
{ music transcription, man-machine system }

{SP99-151} M. Hamanaka, M. Goto and N. Otsu,
``A Learning Session System: Statistical Modeling of Player's Reactions,''
IEICE Technical Report, SP99-151, pp.27--34, Feb. 2000.

{SP99-152} N. Iwakami, T. Moriya, A. Jin, T. Mori and K. Chikira,
``Segmental Intensity Expanding (Sinex) Audio Coding using a Signal Categorization Technique,''
IEICE Technical Report, SP99-152, pp.1--6, Feb. 2000.

{SP99-153} T. Moriya, T. Mori, N. Iwakami and A. Jin,
``An Error-Robust Scalable Coder Based on MPEG-4 Twin VQ,''
IEICE Technical Report, SP99-153, pp.7--12, Feb. 2000.

{SP99-154} T. Hirano and J. Miwa,
``Estimation of Vocal Tract Shape Using A-b-S Method by Articulatory-Acoustic Transformation for Spoken Language Learning,''
IEICE Technical Report, SP99-154, pp.13--18, Feb. 2000.
{ Spoken language learning, Estimation of vocal tract shape, Articulatory-acoustic transformation, Analysis by synthesis, 11 dimensional vocal tract model, Typical value }

{SP99-155} M. Urushibara, R. Hiraga and S. Igarashi,
``Performance Visualization and its Analysis based on Music Structure,''
IEICE Technical Report, SP99-155, pp.19--23, Feb. 2000.

{SP99-156} T. Kawakami, M. Nakai, H. Shimodaira and S. Sagayama,
``Hidden Markov Model Applied to Automatic Harmonization of Given Melodies,''
IEICE Technical Report, SP99-156, pp.25--32, Feb. 2000.
{ Hidden Markov Model, automatic harmonization, tonality estimation }

{SP99-157} H. Yamasaki, N. Babaguchi and T. Kitahashi,
``Synchronizing Closed Caption Stream with Speech Stream in Video Data,''
IEICE Technical Report, SP99-157, pp.33--38, Feb. 2000.
{ intermodal collaboration, crossmodal retrieval, closed caption, vowel detection, formant frequency }

{SP99-158} T. Terada, M. Tsukamoto and S. Nishio,
``Active Karaoke: A System for Generating Background Scenes of Karaoke Using an Active Database System,''
IEICE Technical Report, SP99-158, pp.39--44, Feb. 2000.

{SP99-159} K. Ito, K. Sakakibara, R. Aoki and N. Osaka,
``Analysis of Noh and composition of Noh-Opera using a sinusoidal model,''
IEICE Technical Report, SP99-159, pp.45--48, Feb. 2000.

{SP99-160} K. Niu, Y. Yamashita, M. Wakumoto, S. Imai, Y. Ishino, N. Suzuki and K. Michi,
``Simultaneous Analysis of Mandibular and Tongue Movement - Observation of Articulatory Movement in Normal Adults -,''
IEICE Technical Report, SP99-160, pp.1--8, March 2000.
{ lingual-palatal contact, Electropalatography, mandibular movement, Sirognathograph }

{SP99-161} G. Ohyama, K. Kitamura and H. Miura,
``A Study on the Perception of Accent of Whispered Speech Using Synthesized Vowel,''
IEICE Technical Report, SP99-161, pp.9--16, March 2000.
{ voice, whisper, pitch, perception, accent, formant frequency }

{SP99-162} E. Ogata, R. Hayashi, S. Imaizumi, N. Hirata and K. Mori,
``Neural Processing Mechanism for Rendaku and Accent Rules in Compound Word Recognition,''
IEICE Technical Report, SP99-162, pp.17--24, March 2000.
{ Compound word, Rendaku rule, Accent rule, N400, MEG }

{SP99-163} G. Ohyama, T. Kai, H. Matsumori, Y. Itou and H. Miura,
``A Basic Study on Binaural Effect Using Synthesized Vowel,''
IEICE Technical Report, SP99-163, pp.25--32, March 2000.
{ perception, synthesized vowel, binaural, hearing test, disorder, speech }

{SP99-164} S. Takano, M. Tsuzaki and H. Kato,
``Audio-Visual integration for perception of the temporal structure of speech - in the case of degraded auditory stimuli -,''
IEICE Technical Report, SP99-164, pp.33--40, March 2000.
{ speech perception, time perception, perceptual sensitivity, audio-visual interaction, degraded speech }

{SP99-165} J. Lu, N. Uemi and T. Ifukube,
``Tone Modifications Used for Improving the Discrimination of Mandarin Words for the hearing-impaired,''
IEICE Technical Report, SP99-165, pp.41--46, March 2000.
{ Digital hearing aid, tone modification, Pitch frequency, Four tones }

{SP99-166} G. Kawai,
``Acoustic models and phonological rules for automatically recognizing and training deaf children's speech,''
IEICE Technical Report, SP99-166, pp.47--53, March 2000.
{ deaf children, pronunciation training, phonological rules, speech recognition }

{SP99-167} T. Ishida and K. Kakehi,
``The mutual effects of two sound sources of different frequency bandwidths on each motion perception,''
IEICE Technical Report, SP99-167, pp.1--7, March 2000.
{ Sound source movements, Multiple sound sources, Direction of two moving sound sources, illusion }

{SP99-168} H. Kitakaze and M. Akagi,
``Discrimination of fine fluctuation in fundamental frequency contour,''
IEICE Technical Report, SP99-168, pp.9--16, March 2000.
{ fundamental frequency, fine fluctuation, modulation frequency, deviation, singing voice }

{SP99-169} Y. Ishimoto and M. Akagi,
``A fundamental frequency estimation method and noise reduction for noisy speech,''
IEICE Technical Report, SP99-169, pp.17--24, March 2000.
{ fundamental frequency estimation, noisy environment, comb filter, TEMPO2, noise reduction }

{SP99-170} Y. Atake, T. Irino, H. Kawahara, L. Jinlin, S. Nakamura and K. Shikano,
``A new pitch extraction method using instantaneous frequencies of harmonic components,''
IEICE Technical Report, SP99-170, pp.25--32, March 2000.
{ fundamental frequency, fixed point, harmonic components, STRAIGHT, bandwidth equation }

{SP99-171} H. Kawahara and Y. Atake,
``Vocal fold closure and speech event detection using group delay,''
IEICE Technical Report, SP99-171, pp.33--40, March 2000.
{ group delay, fundamental period, vocal fold closure, minimum phase, electroglottograph }

{SP99-172} S. Fujita, K. Okada, Y. Horiuchi and A. Ichikawa,
``Examination of the Performance of the Matching Part of the Voice Dialogue System which Merges the Prosody/Intonation Analysis and the Sentence Foreseeing,''
IEICE Technical Report, SP99-172, pp.41--48, March 2000.
{ Hidden Markov Model, Syntactical Tree, Intonation Tree }