Presentation 2019-03-06
Efficient Learning for Distillation of DNN by Self Distillation
Jumpei Takagi, Motonobu Hattori
Abstract(in English) Knowledge distillation is a method for creating a superior student network by using knowledge obtained from a trained teacher neural network. Recent studies have shown that even better students can be obtained by further distilling the trained student as a new teacher. Distilling knowledge over multiple generations, however, requires a long training time. In this paper, we propose a self-distillation method that reduces both the number of generations and the training time required for knowledge distillation. In self-distillation, the most accurate network obtained during the training of a generation is used as the teacher for distillation within that same generation. Our experiments on an image classification task demonstrate that the proposed self-distillation achieves high accuracy with fewer generations and less training time than the conventional method.
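As a rough illustration of the mechanism the abstract describes, here is a minimal PyTorch-style sketch of intra-generation self-distillation. It is one plausible reading of the abstract, not the authors' implementation: the Hinton-style soft-target loss, the hyperparameters T and alpha, and all helper names (distillation_loss, train_one_generation, evaluate, the model and loader objects) are illustrative assumptions.

```python
# Sketch only: assumes the best-so-far snapshot within a generation
# becomes the teacher for the remaining epochs of that generation.
import copy
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Soft-target KL loss (Hinton et al.) blended with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

def evaluate(model, loader, device="cpu"):
    """Plain top-1 accuracy on a validation loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)

def train_one_generation(student, teacher, train_loader, val_loader,
                         epochs, device="cpu"):
    """Train one generation. The most accurate snapshot seen so far is kept
    and immediately reused as the teacher for the rest of the generation
    (intra-generation distillation, as the abstract describes it)."""
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)
    best_acc, best_snapshot = 0.0, None
    for _ in range(epochs):
        student.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            logits = student(x)
            if teacher is None:
                # First generation: no teacher yet, ordinary supervised loss.
                loss = F.cross_entropy(logits, y)
            else:
                with torch.no_grad():
                    t_logits = teacher(x)
                loss = distillation_loss(logits, t_logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        acc = evaluate(student, val_loader, device)
        if acc > best_acc:
            # Best-so-far network becomes the teacher within this generation.
            best_acc = acc
            best_snapshot = copy.deepcopy(student).eval()
            teacher = best_snapshot
    return best_snapshot if best_snapshot is not None else student
```

Under this reading, the intra-generation teacher update is what would cut the number of sequential generations relative to conventional born-again distillation, where a full generation must finish training before it can teach the next.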
Keyword(in English) Knowledge Distillation / Self Distillation / Deep Learning / Image Classification
Paper # NC2018-83
Date of Issue 2019-02-25 (NC)

Conference Information
Committee NC / MBE
Conference Date 2019/3/4 (3 days)
Place (in English) University of Electro-Communications
Topics (in English)
Chair Yutaka Hirata(Chubu Univ.) / Masaki Kyoso(TCU)
Vice Chair Hayaru Shouno(UEC) / Taishin Nomura(Osaka Univ.)
Secretary Hayaru Shouno(UEC) / Taishin Nomura(Osaka Univ.)
Assistant Keiichiro Inagaki(Chubu Univ.) / Takashi Shinozaki(NICT) / Takumi Kobayashi(YNU) / Yasuyuki Suzuki(Osaka Univ.)

Paper Information
Registration To Technical Committee on Neurocomputing / Technical Committee on ME and Bio Cybernetics
Language JPN
Title (in English) Efficient Learning for Distillation of DNN by Self Distillation
Sub Title (in English)
Keyword(1) Knowledge Distillation
Keyword(2) Self Distillation
Keyword(3) Deep Learning
Keyword(4) Image Classification
1st Author's Name Jumpei Takagi
1st Author's Affiliation University of Yamanashi
2nd Author's Name Motonobu Hattori
2nd Author's Affiliation University of Yamanashi
Date 2019-03-06
Paper # NC2018-83
Volume (vol) vol.118
Number (no) no.470
Page pp.209-214 (NC)
#Pages 6
Date of Issue 2019-02-25 (NC)