Presentation | 2022-01-24 | Accelerating Deep Neural Networks on Edge Devices by Knowledge Distillation and Layer Pruning | Yuki Ichikawa, Akira Jinguji, Ryosuke Kuramochi, Hiroki Nakahara |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | Deep neural networks (DNNs) are computationally expensive, which makes it challenging to run them on edge devices. Model compression techniques such as knowledge distillation and pruning have therefore been proposed. This research presents an efficient method for compressing pretrained models using these techniques. We show that our method can compress models for edge devices in a short time. We also show a trade-off between recognition accuracy and inference time on an NVIDIA Jetson Nano GPU and a DPU on a Xilinx FPGA. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | Knowledge Distillation / Layer Pruning / Deep Neural Network / Edge Device |
Paper # | VLD2021-58,CPSY2021-27,RECONF2021-66 |
Date of Issue | 2022-01-17 (VLD, CPSY, RECONF) |
Conference Information | |
Committee | RECONF / VLD / CPSY / IPSJ-ARC / IPSJ-SLDM |
---|---|
Conference Date | 2022/1/24 (2 days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Online |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | FPGA Applications, etc. |
Chair | Kentaro Sano(RIKEN) / Kazutoshi Kobayashi(Kyoto Inst. of Tech.) / Michihiro Koibuchi(NII) / Hiroshi Inoue(Kyushu Univ.) / Yuichi Nakamura(NEC) |
Vice Chair | Yoshiki Yamaguchi(Tsukuba Univ.) / Tomonori Izumi(Ritsumeikan Univ.) / Minako Ikeda(NTT) / Kota Nakajima(Fujitsu Lab.) / Tomoaki Tsumura(Nagoya Inst. of Tech.) |
Secretary | Yoshiki Yamaguchi(NEC) / Tomonori Izumi(Tokyo Inst. of Tech.) / Minako Ikeda(Osaka Univ.) / Kota Nakajima(NEC) / Tomoaki Tsumura(JAIST) / (Hitachi) / (Univ. of Tokyo) |
Assistant | Yukitaka Takemura(INTEL) / Yasunori Osana(Ryukyu Univ.) / / Ryohei Kobayashi(Tsukuba Univ.) / Takaaki Miyajima(Meiji Univ.) |
Paper Information | |
Registration To | Technical Committee on Reconfigurable Systems / Technical Committee on VLSI Design Technologies / Technical Committee on Computer Systems / Special Interest Group on System Architecture / Special Interest Group on System and LSI Design Methodology |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | Accelerating Deep Neural Networks on Edge Devices by Knowledge Distillation and Layer Pruning |
Sub Title (in English) | |
Keyword(1) | Knowledge Distillation |
Keyword(2) | Layer Pruning |
Keyword(3) | Deep Neural Network |
Keyword(4) | Edge Device |
1st Author's Name | Yuki Ichikawa |
1st Author's Affiliation | Tokyo Institute of Technology (Titech) |
2nd Author's Name | Akira Jinguji |
2nd Author's Affiliation | Tokyo Institute of Technology (Titech) |
3rd Author's Name | Ryosuke Kuramochi |
3rd Author's Affiliation | Tokyo Institute of Technology (Titech) |
4th Author's Name | Hiroki Nakahara |
4th Author's Affiliation | Tokyo Institute of Technology (Titech) |
Date | 2022-01-24 |
Paper # | VLD2021-58,CPSY2021-27,RECONF2021-66 |
Volume (vol) | vol.121 |
Number (no) | no.342 (VLD), no.343 (CPSY), no.344 (RECONF) |
Page | pp.49-54 (VLD), pp.49-54 (CPSY), pp.49-54 (RECONF) |
#Pages | 6 |
Date of Issue | 2022-01-17 (VLD, CPSY, RECONF) |
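The abstract names knowledge distillation as one of its two compression techniques but does not detail it. As background only, a minimal sketch of the standard (Hinton-style) distillation loss is shown below; this is an illustrative example, not the paper's method, and the `temperature` and `alpha` values are hypothetical defaults chosen for the sketch.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    # Weighted sum of a soft-target term (student mimics the teacher's
    # softened outputs) and a hard-label cross-entropy term.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student) on the softened distributions; the T^2
    # factor keeps gradient magnitudes comparable across temperatures.
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    soft_term = (temperature ** 2) * kl
    # Ordinary cross-entropy against the ground-truth label (T = 1).
    hard_term = -math.log(softmax(student_logits)[true_label])
    return alpha * soft_term + (1 - alpha) * hard_term
```

When the student's logits match the teacher's exactly, the soft term vanishes and only the hard-label term remains, which is why distillation can be viewed as a regularizer pulling the student toward the teacher's output distribution.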