SP Subject Index 1998

{SP98-1} S. Kobayashi and S. Kitazawa,
``Factors Concerning Paralinguistic Feature Identification in Natural Dialogue,''
IEICE Technical Report, SP98-1, pp.1--8, Apr. 1998.
{ Transcription, Paralinguistic Features, Paralinguistic, Acoustic Difference }

{SP98-2} Y. Horiuchi, M. Takahashi and A. Ichikawa,
``Prosodic Structure in Spontaneous Speech,''
IEICE Technical Report, SP98-2, pp.9--16, Apr. 1998.
{ Prosody, Fundamental Frequency, Pause, Spontaneous Speech, Dialogue Corpus }

{SP98-3} A. Lee, T. Kawahara and S. Doshita,
``JULIUS  - a Japanese LVCSR Engine using Word Trellis Index,''
IEICE Technical Report, SP98-3, pp.17--24, Apr. 1998.
{ LVCSR, Search Algorithm, Multi-Pass Search, Trellis, Word Graph }

{SP98-4} Y. Hirose, A. Kai and S. Nakagawa,
``Dealing with filled pauses and out-of-vocabulary words in a spontaneous speech recognition system based on bigram language model,''
IEICE Technical Report, SP98-4, pp.25--32, Apr. 1998.
{ Spontaneous Speech Recognition, Bigram Language Model, Unknown Word, Filled Pause }

{SP98-5} K. Ishizuka, T. Kawahara and S. Doshita,
``Voice-Controlled Projector using Utterance Verification Model,''
IEICE Technical Report, SP98-5, pp.33--38, Apr. 1998.
{ Speech Recognition, Word Spotting, Utterance Verification }

{SP98-6} K. Tanaka, T. Kawahara and S. Doshita,
``Virtual Fitting Room with Spoken Dialogue Interface,''
IEICE Technical Report, SP98-6, pp.39--46, Apr. 1998.
{ Spoken Language, Spoken Dialogue Interface, Virtual Space, Virtual Fitting Room }

{SP98-7} C. Jo, T. Kawahara, S. Doshita and M. Dantsuji,
``Computer-Aided Pronunciation Learning System based on Classification of the Place and Manner of Articulation,''
IEICE Technical Report, SP98-7, pp.47--54, Apr. 1998.
{ Speech Recognition, CALL, Articulation, HMM, Formant, Segmentation }

{SP98-8} T. Niikawa, M. Matsumura, A. Okuno, T. Tachimura and T. Wada, 
``Estimation of Sound Pressure Distribution and Transfer Characteristics of Three-dimensional Vocal Tract measured by MRI,''
IEICE Technical Report, SP98-8, pp.1--4, Apr. 1998.
{ Finite Element Method, MRI, 3D Vocal Tract Shape, Sound Pressure Distribution, Transfer Characteristics }

{SP98-9} K. Funaki, Y. Miyanaga and K. Tochinai,
``A Time-Varying Complex Analysis for Analytic Speech Signal,''
IEICE Technical Report, SP98-9, pp.5--12, Apr. 1998.
{ Speech Analysis, Analytic Signal, Complex Signal Processing, Time-Varying Analysis, AR Model }

{SP98-10} K. Tooyama, J. Lu, S. Nakamura, K. Shikano and H. Kawahara,
``Speech Coding using STRAIGHT Speech Analysis/Synthesis Method,''
IEICE Technical Report, SP98-10, pp.13--18, Apr. 1998.
{ Speech Coding, STRAIGHT, LSP, Multi-Split VQ, MA Prediction }

{SP98-11} N. Miyazaki, K. Tokuda, T. Masuko and T. Kobayashi,
``An HMM Based on Multi-Space Probability Distributions and its Application to Pitch Pattern Modeling,''
IEICE Technical Report, SP98-11, pp.19--26, Apr. 1998.
{ Hidden Markov Model, Text-to-Speech, Synthesis, Pitch, Multi-Space Probability Distribution }

{SP98-12} N. Miyazaki, K. Tokuda, T. Masuko and T. Kobayashi,
``A Study on Pitch Pattern Generation using HMMs Based on Multi-Space Probability Distributions,''
IEICE Technical Report, SP98-12, pp.27--34, Apr. 1998.
{ Pitch Pattern Generation, Speech Synthesis, Hidden Markov Model, Multi-Space Probability Distribution }

{SP98-13} A. Ichikawa, O. Ashizawa, and Y. Horiuchi,
``Time Structure of a Finger Language Using a Braille Code for Real-time Communication of Deaf Blind People,''
IEICE Technical Report, SP98-13, pp.35--41, Apr. 1998.
{ Deaf-Blind, Finger Language, Prosody, Duration, Phrase, Prominence-Word }

{SP98-14} H. Itoh, T. Sasaki, and A. Kanuma,
``Relationship between the S-shaped Curve and Sound Production With Edentulous Prostheses,''
IEICE Technical Report, SP98-14, pp.1--8, June 1998.
{ Palatograms, Edentulous Subjects, S-Shaped Curve Shape, Individuality }

{SP98-15} H. Itoh, T. Taira, A. Kanuma and H. Imaizumi,
``Speech Sound Testing Word Choices in Evaluating Prostheses - Simultaneous Observation of Sound Production and Lingual and Mandibular Function -,''
IEICE Technical Report, SP98-15, pp.9--16, June 1998.
{ Electro-Palatograms, Test Words for Evaluating Speech, Lingual and Mandibular Simultaneous Observation }

{SP98-16} H. Itoh, S. Murayama, and H. Imaizumi,
``Effect of Maxillary Palate Thickness on Sound Production,''
IEICE Technical Report, SP98-16, pp.17--24, June 1998.
{ Palatograms, Thickness of Artificial Maxillary Palates, Diadochokinesis, Perceptual Auditory Impressions }

{SP98-17} H. Itoh, M. Ogata, and K. Chiba,
``Effect of the Width and Thickness of the Palatal Bar on Sound Production - Simultaneous Observation of Electro-Palatography and Speech Sound -,''
IEICE Technical Report, SP98-17, pp.25--32, June 1998.
{ Electro-Palatography, Palatal Bar Dimensions, Sound Production, Simultaneous Observation }

{SP98-18} M. Yamazaki, H. Itoh, and H. Nakahara,
``Speech Production in Varying Glossectomies utilizing Lingually Attached Prostheses,''
IEICE Technical Report, SP98-18, pp.33--40, June 1998.
{ Lingually Attached Prosthesis, Varying Glossectomies, Palatogram, Intelligibility }

{SP98-19} K. Tanahashi, M. Matszaki, T. Kuroda and K. Kido,
``Evaluation of speech quality improvement of skeletal class III patients by orthognathic surgery by use of formant locus,''
IEICE Technical Report, SP98-19, pp.41--48, June 1998.
{ Skeletal Class III Patient, Vowel, Spoken Words, Formants, Speech Quality }

{SP98-20} L. Zhou, H. Seki, and K. Kido,
``The Investigation of the Sound Spectrum of Chinese Retroflex Consonants and its Identification by Hearing,''
IEICE Technical Report, SP98-20, pp.49--56, June 1998.
{ Chinese Retroflex Consonant, Time-Frequency Analysis, Filter Bank, Identification Test by Hearing, Correct Answer Rate, Confusion Rate }

{SP98-21} T. Nitta, and T. Inoue,
``Feature-Extraction of Speech Based on Multiple Acoustic-Feature Planes (MAFP),''
IEICE Technical Report, SP98-21, pp.57--64, June 1998.
{ Speech Recognition, Feature Extraction, KL-Transform, Generalized Probabilistic Descent Method }

{SP98-22} Y. Nakatoh, and H. Matsumoto,
``An Evaluation of Mel-LPC Analysis Method in Speech Recognition,''
IEICE Technical Report, SP98-22, pp.65--72, June 1998.
{ Mel-LPC Analysis, Speech Recognition, Mel-Cepstrum, Frequency Warping }

{SP98-23} K. Iwano, and K. Hirose,
``Recognizing Accent Types and Detecting Prosodic Word Boundaries Using Statistical Models of Moraic Transition,''
IEICE Technical Report, SP98-23, pp.1--8, June 1998.
{ Statistical Models of Moraic Transition, Accent Type Recognition, Prosodic Word Boundary Detection, Prosodic Features }

{SP98-24} Y. Choi,
``Prosodic resolution of syntactic ambiguities: Observation in Japanese, Korean, Mongolian and Turkish,''
IEICE Technical Report, SP98-24, pp.9--16, June 1998.
{ Prosody, Syntactic Boundary, F0, Pause, Duration }

{SP98-25} S. Nakamura, T. Takiguchi and K. Shikano,
``A Method of Reverberation Compensation based on Short Time Spectral Analysis,''
IEICE Technical Report, SP98-25, pp.17--22, June 1998.
{ Speech Recognition, Real Environment, Hands-Free, Reverberation, Short Time Spectral Analysis }

{SP98-26} H. Jiang, K. Hirose and Q. Huo,
``Combining Viterbi Bayesian Predictive Classification with Sequential Bayesian Learning for Robust Speech Recognition,''
IEICE Technical Report, SP98-26, pp.23--30, June 1998.
{ Bayesian Predictive Classification (BPC), Viterbi BPC (VBPC), Sequential Bayesian Learning, Robust Speech Recognition, Natural Conjugate Prior }

{SP98-27} N. Kitaoka, I. Akahori and S. Nakagawa,
``Speech Recognition under Noisy Environments using Spectral Subtraction with Time Dimensional Smoothing,''
IEICE Technical Report, SP98-27, pp.31--38, June 1998. 
{ Spectral Subtraction, Time Dimensional Smoothing, Phase Difference }

{SP98-28} T. Takiguchi, S. Nakamura, K. Shikano, M. Morishima and T. Isobe,
``Evaluation of Model Adaptation by HMM Decomposition on Telephone Speech Recognition,''
IEICE Technical Report, SP98-28, pp.39--44, June 1998.
{ Telephone Speech Recognition, Wire and Wireless Hand Sets, Model Adaptation, HMM Decomposition } 

{SP98-29} K. Hanai, K. Yamamoto, N. Minematsu and S. Nakagawa,
``Continuous Speech Recognition Using Segmental Unit Input HMMs With A Mixture of Probability Density Functions and Context Dependency,''
IEICE Technical Report, SP98-29, pp.45--52, June 1998.
{ Speech Recognition, HMM, Mixture Distribution, Context Dependent Model, Segmental Statistics }

{SP98-30} T. Otsuki and T. Ohtomo,
``The Property of Asymmetric Segment,''
IEICE Technical Report, SP98-30, pp.53--60, June 1998.
{ Speech Recognition, Feature Vector, Symmetric Segment, Asymmetric Segment, Separation }

{SP98-31} A. Ogawa, K. Takeda and F. Itakura,
``Estimating Entropy of Language from Word Insertion Penalty,''
IEICE Technical Report, SP98-31, pp.61--66, June 1998.
{ Entropy of a Language, Ergodicity, Continuous Speech Recognition }

{SP98-32} J. Ogata and Y. Ariki,
``Comparison of Dictation and Word Spotting Techniques in Classification of News Speech Articles,''
IEICE Technical Report, SP98-32, pp.67--72, June 1998.
{ Dictation, Word Spotting, Keyword Selection, Article Classification }

{SP98-33} K. Takagi, N. Sakurai, A. Iwasaki and S. Furui,
``Language Modeling and Topic Extraction for Broadcast News,''
IEICE Technical Report, SP98-33, pp.73--80, June 1998.
{ Broadcast-News Speech, Language Models, Dictation, Topic-Word Extraction }

{SP98-34} I. Maruyama, Y. Abe, T. Ehara and K. Shirai,
``Study on Word Sequence Pair Spotting for Detecting Time of Superimposing Captions,''
IEICE Technical Report, SP98-34, pp.81--88, June 1998.
{ Detecting Time of Superimposing Captions, Keyword Spotting, HMM, News Speech }

{SP98-35} H. Nishida, T. Shimizu, H. Yoshimura, N. Isu and H. Sugata,
``Size of VCV Unit Dictionary and Selection Rule for Speech Synthesis based on LSP Vector VCV Method,''
IEICE Technical Report, SP98-35, pp.1--8, June 1998.
{ Speech Synthesis by Rule, Vector Quantization, LSP Analysis, VCV Unit }

{SP98-36} T. Shimizu, H. Yoshimura, H. Nishida, N. Isu and H. Sugata,
``Speech Synthesis based on LSP-VQ VCV Method,''
IEICE Technical Report, SP98-36, pp.9--16, June 1998.
{ Speech Synthesis by Rule, Vector Quantization, LSP Analysis, VCV Unit }

{SP98-37} K. Matumoto, Y. Yanagi and H. Morikawa,
``An Optimum Code Construction Considering the Frequency and Time Characteristics in the Speech Analysis Synthesis System Based on the Pole-Zero Model,''
IEICE Technical Report, SP98-37, pp.17--24, June 1998.
{ Speech Analysis Synthesis System, Pole-Zero Model, SEARMA Method, Quantization, Interpolation }

{SP98-38} T. Funada and B. Panuthat,
``Sequential Processing for Speech Analysis/Synthesis using AR Model with Unknown Input,''
IEICE Technical Report, SP98-38, pp.25--32, June 1998.
{ AR Model, ARUI Model, Sequential Processing, Analysis/Synthesis of speech }

{SP98-39} M. Iwaki and M. Akagi,
``A Multi-resolution Analysis With Spline of Degree One and its Application to Amplitude Spectrum Analysis of Speech,''
IEICE Technical Report, SP98-39, pp.33--40, June 1998.
{ Multi-Resolution Analysis, Spectrum Analysis, STRAIGHT Speech Analysis-Synthesis System }

{SP98-40} K. Ohga, T. Ohshima and H. Morikawa,
``Analysis of the Development of Consonant Articulation for Preschool and School Age Children,''
IEICE Technical Report, SP98-40, pp.41--48, June 1998.
{ Consonant Articulation, Articulatory Development, Misarticulation, SEARMA Method, Duration }

{SP98-41} D. Erickson,
``Jaw Movement and Rhythm in English Dialogues,''
IEICE Technical Report, SP98-41, pp.49--56, June 1998.
{ Jaw, Rhythm, Dialogue, Emphasis, Emotion }

{SP98-42} M. Abe, T. Tsuchida, T. Yoshimoto and S. Ando,
``A Study on Human and Machine Grouping of Two Simultaneous Rhythm Sequences,''
IEICE Technical Report, SP98-42, pp.57--64, June 1998.
{ Auditory Scene Analysis, Tone Sequence, Grouping, Segregation, Rhythm, Voting }

{SP98-43} Y. Hayashi,
``F0 Contour and Recognition of Vocal Expression of Feelings: Using the Interjectory Word "eh",''
IEICE Technical Report, SP98-43, pp.65--72, June 1998.
{ Interjection, F0 Contour, Voice, Recognition of Feelings }

{SP98-44} J. Amano and K. Sekiyama,
``The McGurk effect is influenced by the stimulus set size,''
IEICE Technical Report, SP98-44, pp.73--80, June 1998.
{ Speech perception, McGurk effect, The number of discriminative dimension, The size of stimulus set }

{SP98-45} T. Kitazoe, S. Kim and T. Ichiki,
``Acoustic Speech Recognition Model by Neural Nets Equation with Competition and Cooperation,''
IEICE Technical Report, SP98-45, pp.1--6, June 1998.
{ phoneme recognition, neural nets equation, acoustic model, cooperation and competition }

{SP98-46} Y. Fujisawa, N. Minematsu and S. Nakagawa,
``Detection of stressed syllables in English words using HMMs and acoustic evaluation of the realized stress,''
IEICE Technical Report, SP98-46, pp.7--14, June 1998.
{ detection of stressed syllables, automatic acoustic evaluation of English words, HMM, spectrum, power, pitch, duration }

{SP98-47} K. Kohtoh, S. Taniguchi and T. Koizumi,
``Enhancing the Speaker-Independency of Subword-Unit-Based Isolated Word Recognition,''
IEICE Technical Report, SP98-47, pp.15--22, June 1998.
{ subword, segmentation, concatenated HMMs, multiple HMM scheme, speaker-independency, SCHMM }

{SP98-48} Y. Morita and T. Funada,
``Vector Quantization of LSP Parameters Using Feedforward Neural Network and Considering Spectrum Envelope,''
IEICE Technical Report, SP98-48, pp.23--28, June 1998.
{ speech coding, Kalman filter, neural network, LSP, vector quantization }

{SP98-49} H. Miyabayashi and T. Funada,
``Study on Superiority of Pitch Periodicity Detection Method Using BPFP and NN,''
IEICE Technical Report, SP98-49, pp.29--36, June 1998.
{ pitch extraction, U/V detection, bank of bandpass filter-pairs, NN }

{SP98-50} T. Tsuzuki and T. Funada,
``Robustness of BPFP mel-cepstrum in noisy speech recognition,''
IEICE Technical Report, SP98-50, pp.37--44, June 1998.
{ BPFP, mel-cepstrum, syllable HMMs }

{SP98-51} N. Kanedera, T. Arai, T. Funada and Y. Yamada,
``On the Robustness of Automatic Speech Recognition Using Multi-resolution Modulation Spectrum,''
IEICE Technical Report, SP98-51, pp.45--52, June 1998.
{ Modulation Spectrum, Robust Automatic Speech Recognition, 2D cepstrum }

{SP98-52} T. Wakako, K. Tokuda, T. Masuko, T. Kobayashi and T. Kitamura,
``Speech Spectral Estimation Based on Expansion of Log Spectrum by Arbitrary Basis Functions and its Application,''
IEICE Technical Report, SP98-52, pp.1--8, Sept. 1998.
{ mel-cepstral parameter, speech analysis-synthesis, speech recognition, spectral estimation }

{SP98-53} K. Zemlok, K. Takeda and F. Itakura,
``Determination of Subband Frequency Components by use of Instantaneous Frequency,''
IEICE Technical Report, SP98-53, pp.9--16, Sept. 1998.
{ Harmonically related structure (HRS), Instantaneous Frequency (IF) }

{SP98-54} K. Yamamoto and S. Nakagawa,
``Compensation of Differences of Sampling Frequency or Front-end for Speech Recognition,''
IEICE Technical Report, SP98-54, pp.17--24, Sept. 1998.
{ speech recognition, sampling frequency, personal computers, frequency characteristics, model compensation }

{SP98-55} F. Itakura,
``A Tutorial on Linear Predictive Analysis of Speech Signal,''
IEICE Technical Report, SP98-55, pp.25--32, Sept. 1998.
{ speech analysis and synthesis, Linear Predictive Coding, LPC, PARCOR, LSP }

{SP98-56} T. Kobayashi,
``Cepstral and Mel-Cepstral Analysis of Speech,''  
IEICE Technical Report, SP98-56, pp.33--40, Sept. 1998.
{ speech analysis, spectral analysis, cepstrum, mel-cepstrum, generalized cepstrum }

{SP98-57} H. Ochi,
``Time-Frequency Analysis and Multirate Signal Processing,''
IEICE Technical Report, SP98-57, pp.41--50, Sept. 1998.
{ multirate signal processing, Wavelet, filter bank, time-frequency analysis }

{SP98-58} J. Okello, S. Arita, Y. Itoh, Y. Fukui and M. Kobayashi,
``Effect of Transfer Level on Convergence of an IIR ADF Implemented Using a Cascade of Allpass and FIR Filter,''
IEICE Technical Report, SP98-58, pp.1--6, Sept. 1998.
{ Adaptive IIR Digital Filter, Allpass Filter, Transfer level }

{SP98-59} T. Kubota, S. Uno and J. Chao,
``Fast Convergent Adaptive Algorithms of Quadratic Volterra FIR Filter,''
IEICE Technical Report, SP98-59, pp.7--14, Sept. 1998.
{ Adaptive Filter, Volterra Filter, Convergence Analysis, Fast Adaptive Algorithm }

{SP98-60} J. Lu and Y. Yoshida,
``Recognition of Blurred Images Based on Phase Invariants,''
IEICE Technical Report, SP98-60, pp.15--22, Sept. 1998.
{ phase invariant, blurred image, symmetric blur, correlation, image recognition }

{SP98-61} P. Wheeler, S. Kajita, K. Takeda and F. Itakura,
``Blind Deconvolution Using Information Maximization,''
IEICE Technical Report, SP98-61, pp.23--30, Sept. 1998.
{ Blind deconvolution, Information maximization, Pre-whitening, LPC analysis }

{SP98-62} J. Sato, M. Kohata, M. Suzuki and S. Makino,
``A New LSP Coding Method Using Ergodic HMM with Exceptional Quantization of Segment Edges.''
IEICE Technical Report, SP98-62, pp.31--38, Sept. 1998.
{ Very low bit speech coding, Ergodic HMM, VQ, pruning }

{SP98-63} J. Hiroi, K. Tokuda, T. Masuko, T. Kobayashi and T. Kitamura,
``Very Low Bit Rate Speech Coding Based on HMMs.''
IEICE Technical Report, SP98-63, pp.39--44, Sept. 1998.
{ hidden Markov model, MLSA filter, speech coding, very low bit rate }

{SP98-64} T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi and T. Kitamura,
``State Duration Modeling for HMM-Based Speech Synthesis.''
IEICE Technical Report, SP98-64, pp.45--50, Sept. 1998.
{ HMM, text-to-speech synthesis, state duration model, mel-cepstrum, context clustering }

{SP98-65} E. Yamamoto, S. Nakamura, and K. Shikano,
``Visual Parameter Estimation from Utterance based on the EM Algorithm using Audio-Visual HMMs.''
IEICE Technical Report, SP98-65, pp.51--56, Sept. 1998.
{ hidden Markov models, EM algorithm, image sequence synthesis, multimodal speech processing, lip synchronization }

{SP98-66} A. Nakayama, J. Lu, S. Nakamura and K. Shikano,
``Digital Watermarks based on Simultaneous and Nonsimultaneous masking.''
IEICE Technical Report, SP98-66, pp.57--62, Sept. 1998.
{ Temporal masking, Simultaneous masking, Audio signals, Digital watermarks }

{SP98-67} M. Ikeda, K. Takeda, and F. Itakura,
``Data Hiding in Music using Band-Limited Random Sequence.''
IEICE Technical Report, SP98-67, pp.63--68, Sept. 1998.
{ Data Hiding, Digital Watermarking, Orthogonal Transform, Random Sequence }

{SP98-68} T. Uchibe, S. Kuroiwa, and N. Higuchi,
``A Study on the Methods of Speaker Verification Using Connected Digits.''
IEICE Technical Report, SP98-68, pp.1--8, Oct. 1998.
{ speaker verification, connected digits, likelihood normalization, telephone }

{SP98-69} H. Kawai, and N. Higuchi,
``Recognition of Connected Digit Speech in Japanese Collected over the Telephone Network.''
IEICE Technical Report, SP98-69, pp.9--14, Oct. 1998.
{ speech recognition, digit, telephone, training data size, sheep and goats }

{SP98-70} 岩 淳子・榎本美香・大谷京子・嶋野 健・土屋 俊,
``On the Phenomenon of Starting an Utterance during the Other Speaker's Utterance in Japanese Map Task Dialogues.''
IEICE Technical Report, SP98-70, pp.15--22, Oct. 1998.
{  }

{SP98-71} S. Osada, S. Doi and S. Kamemi,
``A dialog system for operating home electric appliances using natural language.''
IEICE Technical Report, SP98-71, pp.23--30, Oct. 1998.
{ natural language, dialog system, speech recognition, world knowledge, user interface }

{SP98-72} H. Manzaki, N. Ueda, M. Yamamoto and S. Itahashi,
``Several Methods for Comparing and Selecting Suitable Japanese Speech Corpus.''
IEICE Technical Report, SP98-72, pp.31--38, Oct. 1998.
{ N-gram, frequency of occurrence, entropy, logarithmic likelihood, coverage }

{SP98-73} H. Itoh and S. Yamashita,
``Perturbations of the /s/・/t/・/k/ Japanese Consonant Articulation by Vowel Circumstance Using Electro-Palatogram.''
IEICE Technical Report, SP98-73, pp.39--46, Oct. 1998.
{ Electro-palatogram, /s/・/t/・/k/ Japanese Consonants, Vowel situation, Coarticulatory effect }

{SP98-74} H. Itoh and H. Imaizumi,
``Simultaneous Observation of Respiratory Current, Mandibular Movement, and Electro-palatograms in Japanese Consonant Sound Production.''
IEICE Technical Report, SP98-74, pp.47--54, Oct. 1998.
{ Electro-palatogram, Mandibular movement, Respiratory current, Simultaneous observation }

{SP98-75} H. Itoh and T. Sasaki,
``Articulation Recovery in Post-Operative Cleft Lip, Alveolus and/or Palate Patients with Prosthetic Treatment.''
IEICE Technical Report, SP98-75, pp.55--62, Oct. 1998.
{ Palatogram, Post-Operative Cleft Patients, Prosthetic treatment, Articulation recovery }

{SP98-76} H. Itoh and M. Yamazaki,
``Post-Tongue Mobilization Surgery Palatal Shape Related Articulation Recovery in Total Glossectomy Cases.''
IEICE Technical Report, SP98-76, pp.63--70, Oct. 1998.
{ Palatogram, Tongue mobilization surgery, Lingually attached prosthesis, Articulation recovery }

{SP98-77} H. Itoh,
``Relationship between the /s/・/t/・/k/ Consonant Production and Maxillary Palatal Area Shape Formation Using Palatograms - Comparison of Articulation between Glossectomy and Non-glossectomy Cases -.''
IEICE Technical Report, SP98-77, pp.71--78, Oct. 1998.
{ Palatogram, Production of Consonant, Augmented Upper Palate, Glossectomy }

{SP98-78} A. Narukawa and M. Yanagida,
``Experimental Inspection of Models of Stuttering.''
IEICE Technical Report, SP98-78, pp.1--8, Nov. 1998.
{ Stuttering, Stammering, Dysphemia, Delayed Auditory Feedback, Speech Disorder }

{SP98-79} C. Jo, T. Kawahara, S. Doshita and M. Dantsuji,
``Japanese Mora Rhythm Learning System  - Rhythm Pattern Template Based on Manner of Articulation -.''
IEICE Technical Report, SP98-79, pp.9--16, Nov. 1998.
{ Speech Processing, CALL, HMM, Mora(e), Tokushuhaku }

{SP98-80} K. Nomura, T. Kawahara and S. Doshita,
``Detection of Sentence Boundaries using Prosodic Information for Automatic Archiving of Lectures.''
IEICE Technical Report, SP98-80, pp.17--24, Nov. 1998.
{ Prosodic Information, F0, Sentence Boundaries, Automatic Archiving }

{SP98-81} T. Noriyama, H. Ogawa and S. Tenpaku,
``A method to Control Fundamental Frequency to Generate Utterances of Osaka Dialect.''
IEICE Technical Report, SP98-81, pp.25--32, Nov. 1998.
{ Fundamental Frequency, Linguistic and non-linguistic Information, Osaka Dialect, Analysis-by-Synthesis }

{SP98-82} Y. Ishikawa and K. Nakajima,
``Prosody and Synthetic Units for Text-to-Speech Systems for Japanese.''
IEICE Technical Report, SP98-82, pp.33--38, Nov. 1998.
{ Text to Speech Synthesis, Spectral control, Synthetic Unit, Phoneme Model, Prosody, Quality of Synthetic Speech }

{SP98-83} T. Koyama, T. Horie and T. Yoshioka,
``A Highly Intelligible Speech Synthesis for Financial Network System ANSER and its Quality Evaluation.''
IEICE Technical Report, SP98-83, pp.39--46, Nov. 1998.
{ Speech Synthesis, Waveform-Synthesis, Quality Evaluation, Word Intelligibility, Opinion Test }

{SP98-84} N. Campbell,
``Stages of Processing in CHATR Speech Synthesis.''
IEICE Technical Report, SP98-84, pp.47--54, Nov. 1998.
{ Speech Synthesis, Corpus-Based, Spontaneous Speech, Concatenation, Multilingual }

{SP98-85} T. Moriya,
``Speech and Audio Coding Schemes in MPEG-4 Standard.''
IEICE Technical Report, SP98-85, pp.55--60, Nov. 1998.
{ MPEG-4, Speech Coding, Audio Coding, Scalable Coding, HVXC, CELP, AAC, TwinVQ }

{SP98-86} K. Mano,
``Trends of the ITU-T Speech Coding Standardization.''
IEICE Technical Report, SP98-86, pp.1--6, Nov. 1998.
{ ITU-T, Standardization, Speech Coding, G.711, G.726, G.727, G.722, G.728, G.723.1, G.729 }

{SP98-87} Y. Hiwasaki and K. Mano,
``An LPC Vocoder Based on Pitch Waveform.''
IEICE Technical Report, SP98-87, pp.7--12, Nov. 1998.
{ Speech Coding, LPC Vocoder, Pitch Waveform, Phase Equalization, Phase Randomization }

{SP98-88} A. Jin, T. Moriya, N. Iwakami, T. Norimatsu, M. Tsushima and T. Ishikawa,
``Scalable TwinVQ Audio Coder.''
IEICE Technical Report, SP98-88, pp.13--18, Nov. 1998.
{ Scalable Coding, Transform Coding, Audio Coding, TwinVQ, Hierarchical Coding, MDCT, MPEG-4 }

{SP98-89} T. Nomura, M. Iwadare and N. Tanaka,
``MPEG-4/CELP Speech Coding Algorithm.''
IEICE Technical Report, SP98-89, pp.19--26, Nov. 1998.
{ Speech Coding, MPEG-4, CELP, Scalable Coding }

{SP98-90} M. Nishiguchi, A. Inoue, J. Matsumoto and N. Tanaka,
``MPEG-4 Parametric Speech Coding - HVXC.''
IEICE Technical Report, SP98-90, pp.27--34, Nov. 1998.
{ Speech Coding, Vector Quantization, MPEG, HVXC, Bit-Rate Scalability }

{SP98-91} Y. Lu, Y. Nakajima, A. Yoneyama, H. Yanagihara, M. Sugano and A. Kurematsu,
``Automatic Information Classification of MPEG Audio using Discrimination Function.''
IEICE Technical Report, SP98-91, pp.35--40, Nov. 1998.
{ Audio Indexing, MPEG Coding, Content-based Query, Audio Classification, Subband }

{SP98-92} T. Yonezaki, K. Yoshida, T. Yagi and K. Shikano,
``An HMM-Based Estimation Algorithm for Missing Parameters Quantized with MA Prediction.''
IEICE Technical Report, SP98-92, pp.41--48, Nov. 1998.
{ Speech Coding, Frame Erasure, Parameter Estimation, HMM, MA Prediction, Gain Parameter }

{SP98-93} M. Shigenaga,
``Features of Emotionally Uttered Speech Revealed by Discriminant Analysis (V) - Various Features of Prosodic and Phonemic Variables.''
IEICE Technical Report, SP98-93, pp.49--56, Nov. 1998.
{ Emotion, Prosodic Feature, Phonemic Feature, Discrimination of Emotions, Discriminant Analysis }

{SP98-94} T. Uchiyama and H. Takahashi,
``Speech Recognition using Recurrent Neural Prediction Model.''
IEICE Technical Report, SP98-94, pp.57--64, Nov. 1998.
{ Recurrent Neural Network, Prediction Model, Digit Recognition }

{SP98-95} Y. Yamashita,
``Estimation of Word Spotting Accuracy Based on Simulation of Speech Recognition.''
IEICE Technical Report, SP98-95, pp.65--70, Nov. 1998.
{ Word Spotting, False Alarm, FOM, Speech Recognition, Score Normalization }

{SP98-96} Y. Nomura and K. Takada,
``On a Program Design of Japanese-English Machine Translation for Making Technical Papers with a sense of Operating English Word Processor.''
IEICE Technical Report, SP98-96, pp.1--8, Dec. 1998.
{ Japanese-English Machine Translation, Technical English CAI, Phrase Translation Procedure }

{SP98-97} H. Yokoyama, S. Omachi and H. Aso,
``Evaluation of Methods to Extract Collocational Information from Corpus for Semantic Clustering of Japanese Polysemous Verbs.''
IEICE Technical Report, SP98-97, pp.9--16, Dec. 1998.
{ Clustering, Polysemous Verb, Collocational Information }

{SP98-98} H. Mizuno, K. Araki and K. Tochinai,
``Evaluation of Word Sense Disambiguation  Method Using Decision Tree Learning Algorithm.''
IEICE Technical Report, SP98-98, pp.17--24, Dec. 1998.
{ Machine Translation, Word Sense Disambiguation, Decision Tree Learning Algorithm, Example, Thesaurus }
 
{SP98-99} Y. Masatomi, K. Araki and K. Tochinai,
``Deep Case Acquisition System by Dialogue with Users.''
IEICE Technical Report, SP98-99, pp.25--32, Dec. 1998.
{ Deep Case, User's Ambiguity, Dialogue, Semantic Analysis }

{SP98-100} S. Matsunaga, S. Matsubara, N. Kawaguchi, K. Toyama and Y. Inagaki,
``Sync/Mail: Reactive Interface Based on Incremental Transfer of Spoken Language.''
IEICE Technical Report, SP98-100, pp.33--40, Dec. 1998.
{ Multimodal, Spoken Language, Quick Response, Incremental Interpretation, Information Retrieval, Mail Tool }

{SP98-101} T. Shimizu, T. Ohno and S. Kuroiwa,
``A Study on Sentence-Level Mixture N-gram based on Sentence Clustering.''
IEICE Technical Report, SP98-101, pp.41--48, Dec. 1998.
{ Clustering, Statistical Language Model, Mixture N-gram, Conversational Speech }

{SP98-102} H. Yamamoto and Y. Sagisaka,
``Multi Class Composite N-gram Language Model Based on Connection Direction.''
IEICE Technical Report, SP98-102, pp.49--54, Dec. 1998.
{ Class N-gram, Variable Order N-gram, Automatic Clustering, Joined Word }

{SP98-103} S. Takao, J. Ogata and Y. Ariki,
``Topic Segmentation and Classification to News Speech.''
IEICE Technical Report, SP98-103, pp.55--62, Dec. 1998.
{ Article Classification, Supervised Topic Segmentation, Unsupervised Topic Segmentation }

{SP98-104} S. Tsuge, T. Fukada and H. Singer,
``Speaker Normalized Spectral Subband Parameters for Noise Robust Speech Recognition.''
IEICE Technical Report, SP98-104, pp.63--68, Dec. 1998.
{ Spectral Subband centroids, Noise Environment, Speaker Normalization, Speech Recognition }

{SP98-105} T. Isobe and J. Takahashi,
``A New Normalization Method Using Local Acoustic Information of HMMs for Speaker Verification.''
IEICE Technical Report, SP98-105, pp.69--74, Dec. 1998.
{ Speaker Verification, Score Normalization, Local Acoustic Information }

{SP98-106} T. Hitotsumatsu, T. Masuko, T. Kobayashi and K. Tokuda,
``A Study of Imposture on HMM-based Speaker Verification Systems Using Synthetic Speech.''
IEICE Technical Report, SP98-106, pp.75--82, Dec. 1998.
{ Speaker Verification, Security, Speech Synthesis, HMM, Speaker Adaptation, MLLR }

{SP98-107} K. Koike, H. Saito and M. Nakanishi,
``Synthesis of Emotional Speech.''
IEICE Technical Report, SP98-107, pp.83--88, Dec. 1998.
{ Speech Synthesis, Emotion, Prosody }

{SP98-108} K. Ohtuki, S. Furui, N. Sakurai, A. Iwasaki and Z. Zhang,
``Language Modeling and Acoustic Modeling for Automatic Transcription of Japanese Broadcast-News Speech.''
IEICE Technical Report, SP98-108, pp.1--8, Dec. 1998.
{ LVCSR, Broadcast-News Speech, N-gram, On-line Speaker Adaptation, Message-Driven Speech Recognition }

{SP98-109} K. Tanaka, T. Kawahara and S. Doshita,
``Domain Independent Platform of Spoken Dialogue Interface.''
IEICE Technical Report, SP98-109, pp.9--16, Dec. 1998.
{ Spoken Dialogue Interface, Information Query, Prototyping }

{SP98-110} A. Lee, T. Kawahara and S. Doshita,
``Large Vocabulary Continuous Speech Recognition Parser based on A* Search using Grammar Category-Pair Constraint.''
IEICE Technical Report, SP98-110, pp.17--24, Dec. 1998.
{ Large Vocabulary, CSR, Parser, FSA, A* search } 

{SP98-111} T. Hori, N. Oka, M. Katoh, A. Ito and M. Kohda,
``A Study on A Phoneme-Graph-based Hypothesis Restriction for Large Vocabulary Continuous Speech Recognition.''
IEICE Technical Report, SP98-111, pp.25--32, Dec. 1998.
{ Speech Recognition, LVCSR, Hidden Markov Network, Search Strategy, Phoneme Graph }

{SP98-112} M. Schuster,
``Evaluation of a Stack Decoder on a Japanese Newspaper Dictation Task.''
IEICE Technical Report, SP98-112, pp.33--40, Dec. 1998.
{ Speech Recognition, Japanese Newspaper Dictation, One-pass Stack Decoder } 

{SP98-113} S. Furui,
``Towards Spoken Language Understanding  - In Anticipation of Significant Research -.''
IEICE Technical Report, SP98-113, pp.41--48, Dec. 1998.
{ Spoken Language, Speech Recognition, Speech Understanding, Speech Summarization, Message-driven Speech Recognition, Robust Speech Recognition } 

{SP98-114} M. Schuster,
``Switchboard Workshop 1998 - Impression and Results.''
IEICE Technical Report, SP98-114, pp.49--54, Dec. 1998.
{ Speech Recognition Workshop }

{SP98-115} M. Takano,
``Robust Acoustic Models by Mixing Distribution with Various Precision.''
IEICE Technical Report, SP98-115, pp.55--62, Dec. 1998.
{ Spontaneous Speech, Context-dependent Model, Context-independent Model, Robustness, Continuous Mixture Density, Precision, Speech Recognition  } 

{SP98-116} T. Nitta and T. Inoue,
``Feature-Extraction Based on Multiple Acoustic-Feature Planes (MAFP) and LDA for Speech Recognition.''
IEICE Technical Report, SP98-116, pp.63--70, Dec. 1998.
{ Speech Recognition, Feature-Extraction, Mapping Operator, Orthogonal Basis, Space Derivative Operator, KL-transform, LDA }

{SP98-117} M. Tokuhira and Y. Ariki,
``Study on Speech Feature Extraction Based on KLT.''
IEICE Technical Report, SP98-117, pp.71--78, Dec. 1998.
{ Speech Recognition, Feature-Extraction, KL-transform, Dynamic Feature  } 

{SP98-118} T. Arai and A. Kurematsu,
``Variable Length Speech Unit For Continuous Speech Recognition.''
IEICE Technical Report, SP98-118, pp.1--7, Jan. 1999.
{ Continuous Speech Recognition, Variable Length Speech Unit, Coarticulation, Insertion Penalty }

{SP98-119} M. Szarvas and S. Matsunaga,
``Segment-Based Speech Recognition using Acoustic Observation Context,''
IEICE Technical Report, SP98-119, pp.9--16, Jan. 1999.
{ speech recognition, segment model, acoustic observation context }

{SP98-120} K. Asai and S. Ozawa,
``Analysis-Synthesis System of Emotional Voice using Sandglass Neural Network,''
IEICE Technical Report, SP98-120, pp.17--21, Jan. 1999.
{ speech analysis, speech synthesis, emotional information, prosodic information, sandglass NN }

{SP98-121} M. Nakamura, Y. Ohmoto, T. Yamato and K. Takayama,
``Time-Domain Fundamental Frequency Extraction of Speech Signal,''
IEICE Technical Report, SP98-121, pp.23--31, Jan. 1999.
{ pitch period, low-pass filtering, peak to peak value, pseudo 5 vowel }

{SP98-122} H. Sawada, T. Shirakawa, N. Miki and Y. Ogawa,
``Analysis of Spectra with Estimation of Glottal Closure,''
IEICE Technical Report, SP98-122, pp.33--40, Jan. 1999.
{ spectra estimation, glottal closure, BPF to emphasize formant, EGG }

{SP98-123} T. Ohtani, N. Miki, T. Yokoyama and Y. Ogawa,
``Vocal Tract Transfer Function of vowels in open/close phase,''
IEICE Technical Report, SP98-123, pp.41--48, Jan. 1999.
{ Vocal Tract Transfer Function, Glottal Impedance, Lung Impedance }

{SP98-124} T. Mochida and M. Honda,
``A study on estimation of vocal tract area function from impulse response at the lips,''
IEICE Technical Report, SP98-124, pp.49--56, Jan. 1999.
{ speech production, vocal tract area function, lip impulse response, acoustical measurement, nonlinear least-squares method }

{SP98-125} T. Yokoyama, N. Miki, Y. Ogawa, S. Masaki, Y. Shimada, I. Fujimoto and Y. Nakamura,
``Effects on Formant Frequencies obtained by Acoustic-Tube Modelling of 3-D Vocal Tract Shapes,''
IEICE Technical Report, SP98-125, pp.57--64, Jan. 1999.
{ Speech production, 3-D vocal tract shapes, formant frequencies, and acoustic wave propagation }

{SP98-126} S. Suzuki, T. Okadome, T. Kaburagi and M. Honda,
``Determination of Articulatory Motion from Speech Acoustics by using Articulatory-Acoustic Codebook,''
IEICE Technical Report, SP98-126, pp.65--70, Jan. 1999.
{ articulatory parameter, spectral segment, articulatory-acoustic codebook }

{SP98-127} H. Itoh, T. Sasaki, J. Sugawara and H. Imaizumi,
``Lingual Articulation Characteristics in Adult Skeletal Reversed Occlusion with Open-bite,''
IEICE Technical Report, SP98-127, pp.71--78, Jan. 1999.
{ Palatogram, Open-bite, Skeletal Reversed Occlusion, Oral cavity shape, Articulation }

{SP98-128} H. Itoh and M. Yamazaki,
``Long-term Articulation Recovery of Partial Glossectomies with Palatogram-designed Artificial Palates,''
IEICE Technical Report, SP98-128, pp.79--86, Jan. 1999.
{ Palatograms, Partial glossectomy patients, Lingually attached prosthesis, Articulation recovery, Long-term }

{SP98-129} I. Nakamura,
``Report of the International Symposium on Musical Acoustics '98,''
IEICE Technical Report, SP98-129, pp.1--4, Feb. 1999.
{ Musical acoustics, International symposium, Musical instruments, Workshop }

{SP98-130} I. Nakayama, T. Okada and M. Nakagawa,
``Objective evaluation of voice timbre in autophonic production,''
IEICE Technical Report, SP98-130, pp.5--8, Feb. 1999.
{ Autophonic production, Voice timbre, Delayed feedback method, Simulated sound, Evaluation of similarity, Objective evaluation }

{SP98-131} Tian Da-cheng,
``Researches on Nasal Vowel Articulation of French Songs,''
IEICE Technical Report, SP98-131, pp.9--16, Feb. 1999.
{ Nasal Vowel, French Songs, Singing Formant, Articulation, Singing }

{SP98-132} H. Murakami,
``The Neural Network model in case of music evaluation,''
IEICE Technical Report, SP98-132, pp.17--24, Feb. 1999.
{ Neural network, Image }

{SP98-133} H. Sekiguchi and S. Eiho,
``Generating the Human Piano Performance in Virtual Space,''
IEICE Technical Report, SP98-133, pp.25--32, Feb. 1999.
{ 3-D animation, human movements, hand modeling, piano performance }

{SP98-134} H. Okamoto,
``A Development of Electronic Musical Instrument Limber-Row,''
IEICE Technical Report, SP98-134, pp.33--38, Feb. 1999.
{ Limber-Row, Composer, Electronic musical instrument, Electronic music, Computer music, MIDI }

{SP98-135} M. Sawaguchi, A. Fukada and Y. Takahashi,
``5.1 Multichannel Sound Recording/Production Technique,''
IEICE Technical Report, SP98-135, pp.39--47, Feb. 1999.
{ Multichannel Surround, Recording Technique, 5.1 channel }

{SP98-136} T. Kinoshita, S. Sakai and H. Tanaka,
``Musical source identification based on frequency component features,''
IEICE Technical Report, SP98-136, pp.1--6, Feb. 1999.
{ Music scene analysis, Sound source identification, Auditory scene analysis, Frequency component feature }

{SP98-137} F. Nishida, T. Suzuki, T. Tokunaga and H. Tanaka,
``An accompaniment system using performance expression generated from an example-based approach,''
IEICE Technical Report, SP98-137, pp.7--14, Feb. 1999.
{ example-based, automatic generation of performance expression, accompaniment system, ensemble }

{SP98-138} T. Miwa, Y. Tadokoro and T. Saito,
``The musical instruments estimation of a real musical sound using comb filters for transcription,''
IEICE Technical Report, SP98-138, pp.15--22, Feb. 1999.
{ transcription, pitch estimation, musical instruments estimation, adaptive comb filters }

{SP98-139} Y. Kumagai, K. Yoshida and J. Miwa,
``On a Decision Method of Accent Type for Japanese Learning,''
IEICE Technical Report, SP98-139, pp.23--30, Feb. 1999.
{ Language Education, Japanese Speech, Accent }

{SP98-140} M. Sasaki, T. Hirano and J. Miwa,
``Dynamic 3-Dimensional Visualization of Vocal Tract Shape for Speech Learning System,''
IEICE Technical Report, SP98-140, pp.31--38, Feb. 1999.
{ Computer assisted language, Vocal tract, 3D visualization, Articulation model, Dynamic visualization, Estimation of articulation }

{SP98-141} M. Sugiyama,
``Fast Segment Search Algorithms,''
IEICE Technical Report, SP98-141, pp.39--45, Feb. 1999.
{ Multimedia processing, Acoustic segment, Searching methods }

{SP98-142} J. Sasaki, M. Hiroshige, Y. Miyanaga and K. Tochinai,
``A simple method of phoneme segmentation for detection of local speech rate variation and its application to speech data recorded in real environment,''
IEICE Technical Report, SP98-142, pp.47--54, Feb. 1999.
{ Speech rate variation, Prosodic information, Phoneme segmentation, Speech data in real environment }

{SP98-143} S. Hayashi, S. Kurihara and A. Kataoka,
``Extensions to ITU-T Recommendation G.729 and its quality assessment,''
IEICE Technical Report, SP98-143, pp.55--62, Feb. 1999.
{ CS-ACELP, Speech coding, Subjective assessment, Vector Gain Quantization, DCR }

{SP98-144} N. Harada and H. Ohmuro,
``A 5-kHz-bandwidth Low-Bit-Rate Speech Coder,''
IEICE Technical Report, SP98-144, pp.63--68, Feb. 1999.
{ speech coder, 5kHz-bandwidth, low bit rate }

{SP98-145} H. Ohmuro and K. Mano,
``Improvement of low-bit-rate speech coding under background noise conditions,''
IEICE Technical Report, SP98-145, pp.69--74, Feb. 1999.
{ speech coding, low bit rate, 4kbit/s, background noise, coding model, subjective quality }

{SP98-146} H. Itoh, T. Sasaki, H. Nakahara and S. Yamashita,
``/S/ Sound Production in Post-Operative Cleft Lip And/or Palate Subjects with Normal Perceptional Impression,''
IEICE Technical Report, SP98-146, pp.1--8, Mar. 1999.
{ Palatogram, /S/ consonants, Cleft Lip/Palate Patients, Oral cavity shape }

{SP98-147} S. Katsuki, A. Yoshitaka, M. Hirakawa and T. Ichikawa,
``Supporting Speech Diagnosis Based on Lips and Lower Jaw Motion Analysis and Phonetic Properties,''
IEICE Technical Report, SP98-147, pp.9--16, Mar. 1999.
{ Dysarthria, Upper and Lower Motion, Phonetic Properties, Temporal Correlation }

{SP98-148} H. Ishiguro, H. Miura and G. Ohyama,
``Evaluation of Hoarseness Using Cepstrum Analysis,''
IEICE Technical Report, SP98-148, pp.17--24, Mar. 1999.
{ hoarseness, cepstrum, evaluation, perturbation, objective analysis, voice }

{SP98-149} H. Saito, N. Suzuki, T. Kitamura M. Akagi and K. Michi,
``Acoustic features after tongue and mouth floor resection,''
IEICE Technical Report, SP98-149, pp.25--31, Mar. 1999.
{ tongue and mouth floor resection, articulation disorders, acoustic features }

{SP98-150} N. Hara, K. Matsui, K. Kubota, M. Jin and I. Ohira,
``The esophageal speech aid system,''
IEICE Technical Report, SP98-150, pp.33--40, Mar. 1999.
{ Esophageal speech, speech aid, formant analysis-resynthesis, voice source }

{SP98-151} J. Lu, S. Nakamura and K. Shikano,
``Study on Enhancement of Esophageal Speech,''
IEICE Technical Report, SP98-151, pp.41--46, Mar. 1999. 
{ Esophageal speech, Enhancement, Source information, Inverse filter }

{SP98-152} N. Uemi, M. Hashiba, Y. Sugai, Y. Yamaguchi and T. Ifukube,
``Practical use of electrolarynx with a pitch control function and its evaluation by laryngectomees,''
IEICE Technical Report, SP98-152, pp.47--52, Mar. 1999.
{ Speech, Electrolarynx, Expiration pressure, Pitch frequency, Substitute speech method }

{SP98-153} Y. Watanabe,
``Consideration of speech processing in aural speech impaired person's standpoint,''
IEICE Technical Report, SP98-153, pp.53--59, Mar. 1999.
{ Aural trouble, difficulty in speech, Sign language, voice recognition, voice synthesis, communications }

{SP98-154} Y. Fukuda, H. Akahori, K. Fukushima and K. Suzuki,
``A Study on the usage of Words in the Dialogue between Deaf Persons through the Japanese Sign Language,''
IEICE Technical Report, SP98-154, pp.1--6, Mar. 1999.
{ Japanese Sign Language, Basic words, Word usage, Facial expression, Mouth shape }

{SP98-155} K. Saito, N. Uemi, H. Shoji and T. Ifukube,
``Basic Research of Man-Man Interface Using Speech Recognition Technology for Acquired Deaf,''
IEICE Technical Report, SP98-155, pp.7--14, Mar. 1999.
{ Aid for Hearing Impairments, Speech Recognition, Meaning Comprehension and Morphological Analysis }

{SP98-156} K. Ishida, T. Moriyama and S. Ozawa,
``A Study on Articulation Control of Synthetic Speech,''
IEICE Technical Report, SP98-156, pp.15--22, Mar. 1999.
{ speech synthesis, emotion, PARCOR coefficient, vocal tract area, principal component analysis }

{SP98-157} N. Ono, M. Abe and S. Ando,
``Signal Analysis Based on Harmonic Interference: Computational Model of Auditory AM-FM Sensation and its applications,''
IEICE Technical Report, SP98-157, pp.23--30, Mar. 1999.
{ AM-FM, beat, harmonic tone, subband, zeros, logarithmic differential decomposition }

{SP98-158} M. Unoki and M. Akagi,
``A model of the problem of segregating two acoustic sources based on auditory scene analysis,''
IEICE Technical Report, SP98-158, pp.31--38, Mar. 1999.
{ auditory scene analysis, Bregman's regularities, segregation problem, segregation model }

{SP98-159} N. Suzuki and M. Akagi,
``Perception of Speaker Individuality embedded in Sentence Utterance,''
IEICE Technical Report, SP98-159, pp.39--46, Mar. 1999.
{ speaker individuality, STRAIGHT, temporal decomposition }

{SP98-160} H. Itoh, S. Murayama, H. Nakahara and S. Imaizumi,
``Individual Characteristics of Speech Articulation Development,''
IEICE Technical Report, SP98-160, pp.47--54, Mar. 1999.
{ Palatogram, Dental Age IIA Children, Oral Cavity Shape, Perceptional Impression }

{SP98-161} H. Shimizu and J. Yamamoto,
``Teaching auditory-visual matching in children with developmental disabilities: the effect of stimulus-control shaping programs,''
IEICE Technical Report, SP98-161, pp.55--62, Mar. 1999.
{ matching-to-sample, stimulus-control shaping programs, blocked-trial procedure, computer-based-teaching, behavior analysis, children with developmental disabilities }

{SP98-162} Y. Tamekawa, S. Imaizumi, R. Mitomo, T. Deguchi, H. Nakahara, H. Ito and P. Keating,
``Articulatory interaction between native and non-native phonemes: Palatographic Analyses,''
IEICE Technical Report, SP98-162, pp.63--68, Mar. 1999.
{ non-native phonetic contrasts, audiovisual training, palatograph }

{SP98-163} R. Hayashi, Y. Tamekawa, S. Imaizumi, R. Uchida, H. Seki and K. Mori,
``Neural processes of audio-visual speech perception: A MEG study,''
IEICE Technical Report, SP98-163, pp.69--75, Mar. 1999.
{ words, MEG, Audio-visual speech perception, phonemic discrimination, Current dipole, Neural processes }

{SP98-164} K. Mori, T. Toyama, M. Mitsui, S. Imaizumi, Y. Shimura and Y. Nakajima,
``Responses of Human Auditory Cortices Measured with fMRI,''
IEICE Technical Report, SP98-164, pp.77--84, Mar. 1999.
{ fMRI, Heschl gyrus, canal earphone, human primary auditory cortex, noise, planum temporale }

{SP98-165} H. Fujioka and A. Utsumi,
``A Computational Account for the Neural Mechanisms of Conduction Aphasia,''
IEICE Technical Report, SP98-165, pp.85--92, Mar. 1999.
{ conduction aphasia, cortex, fiber bundle, phonological working memory, computational approach, amplitude-specificity }