Presentation 2020-01-22
Many Universal Convolution Cores for Ensemble Sparse Convolutional Neural Networks
Ryosuke Kuramochi, Youki Sada, Masayuki Shimoda, Shimpei Sato, Hiroki Nakahara
Abstract(in Japanese) (See Japanese page)
Abstract(in English) A convolutional neural network (CNN) is one of the most successful neural networks and is widely used for computer vision tasks. However, it requires a massive number of multiplication and accumulation (MAC) computations with high power consumption, and higher recognition accuracy is desired for modern tasks. In this paper, we apply a sparseness technique to generate weak classifiers for building an ensemble CNN. We control the sparse (zero-weight) ratio to achieve both excellent performance and better recognition accuracy. We propose a universal convolution core that realizes variations of modern convolutional operations, and extend it to many cores with a pipelined architecture to achieve high-throughput operation. By setting the sparsity ratio and the number of predictors appropriately, high-speed architectures are realized on the many universal convolution cores while the recognition accuracy is improved over a conventional single-CNN realization. We implemented a prototype of the many universal convolution cores on a Xilinx Kintex UltraScale+ FPGA; compared with a desktop GPU realization, it is 3.09 times faster, consumes 4.20 times less power, and achieves 13.33 times better performance per power.
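The report itself gives no implementation details on this page; purely as an illustration, the following NumPy sketch shows the two ideas named in the abstract: pruning a weight matrix to a chosen sparse (zero-weight) ratio to obtain a weak classifier, and averaging the outputs of several such predictors to form an ensemble. All names (prune_to_sparsity, ensemble_predict, make_predictor) and the magnitude-pruning rule are assumptions made for illustration, not taken from the paper.

import numpy as np

def prune_to_sparsity(weights, sparsity):
    # Magnitude pruning: zero out the smallest-magnitude entries so that
    # roughly `sparsity` (e.g. 0.9) of the weights become zero.
    pruned = weights.copy()
    k = int(sparsity * pruned.size)
    if k == 0:
        return pruned
    threshold = np.partition(np.abs(pruned).ravel(), k - 1)[k - 1]
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def ensemble_predict(predictors, x):
    # Average the class-probability outputs of several weak (sparse)
    # predictors and return the arg-max class for each input.
    probs = np.mean([p(x) for p in predictors], axis=0)
    return np.argmax(probs, axis=-1)

# Toy usage: three "predictors" share one weight matrix pruned to
# different sparsity ratios, followed by a softmax over 10 classes.
rng = np.random.default_rng(0)
w = rng.normal(size=(32, 10))

def make_predictor(weights):
    def predict(x):
        logits = x @ weights
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    return predict

predictors = [make_predictor(prune_to_sparsity(w, s)) for s in (0.7, 0.8, 0.9)]
x = rng.normal(size=(4, 32))
print(ensemble_predict(predictors, x))  # predicted class index for each of the 4 inputs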
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Deep Learning / CNN / FPGA / Ensemble Learning
Paper # VLD2019-65,CPSY2019-63,RECONF2019-55
Date of Issue 2020-01-15 (VLD, CPSY, RECONF)

Conference Information
Committee IPSJ-SLDM / RECONF / VLD / CPSY / IPSJ-ARC
Conference Date 2020/1/22 (3 days)
Place (in Japanese) (See Japanese page)
Place (in English) Raiosha, Hiyoshi Campus, Keio University
Topics (in Japanese) (See Japanese page)
Topics (in English) FPGA Applications, etc.
Chair Yutaka Tamiya(Fujitsu Lab.) / Yuichiro Shibata(Nagasaki Univ.) / Nozomu Togawa(Waseda Univ.) / Hidetsugu Irie(Univ. of Tokyo) / Hiroshi Inoue(Kyushu Univ.)
Vice Chair / Kentaro Sano(RIKEN) / Yoshiki Yamaguchi(Tsukuba Univ.) / Daisuke Fukuda(Fujitsu Labs.) / Michihiro Koibuchi(NII) / Kota Nakajima(Fujitsu Lab.)
Secretary (Univ. Shiga Prefecture) / Kentaro Sano(NTT) / Yoshiki Yamaguchi(Mitsubishi Electric) / Daisuke Fukuda(Hiroshima City Univ.) / Michihiro Koibuchi(e-trees.Japan) / Kota Nakajima(Univ. of Aizu) / (Hitachi)
Assistant / Yuuki Kobayashi(NEC) / Hiroki Nakahara(Tokyo Inst. of Tech.) / Kazuki Ikeda(Hitachi) / Eiji Arima(Univ. of Tokyo) / Shugo Ogawa(Hitachi)

Paper Information
Registration To Special Interest Group on System and LSI Design Methodology / Technical Committee on Reconfigurable Systems / Technical Committee on VLSI Design Technologies / Technical Committee on Computer Systems / Special Interest Group on System Architecture
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Many Universal Convolution Cores for Ensemble Sparse Convolutional Neural Networks
Sub Title (in English)
Keyword(1) Deep Learning
Keyword(2) CNN
Keyword(3) FPGA
Keyword(4) Ensemble Learning
1st Author's Name Ryosuke Kuramochi
1st Author's Affiliation Tokyo Institute of Technology(Titech)
2nd Author's Name Youki Sada
2nd Author's Affiliation Tokyo Institute of Technology(Titech)
3rd Author's Name Masayuki Shimoda
3rd Author's Affiliation Tokyo Institute of Technology(Titech)
4th Author's Name Shimpei Sato
4th Author's Affiliation Tokyo Institute of Technology(Titech)
5th Author's Name Hiroki Nakahara
5th Author's Affiliation Tokyo Institute of Technology(Titech)
Date 2020-01-22
Paper # VLD2019-65,CPSY2019-63,RECONF2019-55
Volume (vol) vol.119
Number (no) VLD-371,CPSY-372,RECONF-373
Page pp.67-72 (VLD), pp.67-72 (CPSY), pp.67-72 (RECONF)
#Pages 6
Date of Issue 2020-01-15 (VLD, CPSY, RECONF)