Paper Abstract and Keywords
Presentation
2020-03-02 15:45
Performance evaluation of distilling knowledge using encoder-decoder for CTC-based automatic speech recognition systems
Takafumi Moriya, Hiroshi Sato, Tomohiro Tanaka, Takanori Ashihara, Ryo Masumura, Yusuke Shinohara (NTT)
EA2019-131 SIP2019-133 SP2019-80
Abstract
We present a novel training approach for connectionist temporal classification (CTC)-based automatic speech recognition (ASR) systems. CTC models are promising for building both conventional acoustic models and end-to-end (E2E) ASR models. However, it is difficult for CTC models to capture the correct timing of each output label because timing is not given explicitly in the training data. In this paper, we propose a new auxiliary task with frame-wise targets for CTC model enhancement. We utilize attention weights generated by an attention-based encoder-decoder model (S2S) to construct these targets, called the attention matrix. The attention matrix is the sum of the products of the attention weights (spike timing information) and the corresponding target vectors (probability information), and is used to compute the S2S-to-CTC knowledge distillation loss. The attention matrix therefore makes CTC models jointly trainable with respect to both spike timings and their posteriors. Experiments on Japanese ASR tasks demonstrate that our proposal is effective for CTC model training; it achieves a 10.2% (E2E) / 9.4% (acoustic model) relative reduction in character/kana-syllable error rates compared to models trained with CTC loss alone.
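The attention-matrix construction described above can be sketched numerically: for each encoder frame t, the soft target is the attention-weighted mixture of the decoder's per-step target distributions, and the distillation loss is a frame-wise cross-entropy against the CTC posteriors. This is a minimal NumPy sketch under assumed shapes (U decoder steps, T frames, V labels); the function names and the exact loss form are illustrative, not the paper's implementation.

```python
import numpy as np

def attention_matrix(attn_weights, target_probs):
    """Build frame-wise soft targets from S2S attention.

    attn_weights: (U, T) attention weight of each decoder step over frames
    target_probs: (U, V) per-step target distributions from the S2S model
    Returns (T, V): A[t] = sum_u attn_weights[u, t] * target_probs[u]
    """
    return attn_weights.T @ target_probs

def distillation_loss(ctc_log_probs, soft_targets):
    """Frame-wise cross-entropy between CTC log-posteriors (T, V)
    and the attention-matrix soft targets (T, V), averaged over frames."""
    return -np.mean(np.sum(soft_targets * ctc_log_probs, axis=-1))

# Toy example: 2 decoder steps, 3 frames, 4 labels
attn = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])          # step u attends to frame u
probs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])    # one-hot target distributions
A = attention_matrix(attn, probs)           # shape (3, 4)
loss = distillation_loss(np.log(np.full((3, 4), 0.25)), A)
```

With one-hot attention and one-hot targets, each attended frame's soft target collapses to the corresponding label, so the loss reduces to ordinary frame-wise cross-entropy on those frames.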
Keyword
automatic speech recognition / neural network / connectionist temporal classification / attention weight / knowledge distillation
Reference Info. |
IEICE Tech. Rep., vol. 119, no. 441, SP2019-80, pp. 175-180, March 2020. |
Paper # |
SP2019-80 |
Date of Issue |
2020-02-24 (EA, SIP, SP) |
ISSN |
Print edition: ISSN 0913-5685 Online edition: ISSN 2432-6380 |
Copyright and reproduction |
All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034) |