Presentation 2021-12-01
Block Sparse MLP-based Vision DNN Accelerators on Embedded FPGAs
Akira Jinguji, Hiroki Nakahara,
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Since the advent of Vision Transformer, a deep learning model for image recognition without convolution, MLP-based models have attracted much attention as an alternative to CNNs. MLP-based models achieve high recognition accuracy in image recognition despite the lack of convolution, and work such as MLP-Mixer and gMLP attains high accuracy with an even simpler structure. For low-latency inference, computational efficiency on GPUs drops because the amount of data per processing unit is small, so dedicated circuits such as FPGAs are considered more suitable. In this paper, we propose an FPGA inference accelerator for MLP-based models, focusing on computing the matrix product with high parallelism, since the simple matrix products in the MLP layers account for most of the computation in MLP-based models. We designed a circuit that computes the product of two large matrices in one cycle using a bit-wide GEMM unit and a dedicated instruction set. We implemented the proposed circuit on a Xilinx ZCU102 FPGA board and performed inference with the gMLP-S model. Experimental results for classification on the ImageNet dataset show that our implementation achieves a recognition accuracy of 74.5%, an inference speed of 159.0 FPS with a latency of 6.3 ms, and a power consumption of 24.9 W. Compared to a mobile GPU, the proposed implementation is 4.4 times faster and 6.1 times more power efficient; compared to an existing FPGA implementation of a CNN model, it has over 3% higher recognition accuracy at comparable inference speed.
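The title refers to block-sparse MLP weights: the weight matrix is pruned at the granularity of dense sub-blocks, so hardware only needs to process the surviving blocks. The paper does not spell out its storage format, so the following is a minimal NumPy sketch of the general idea, assuming a hypothetical layout of dense (block_size x block_size) blocks plus their (row, col) block coordinates; names like `block_sparse_gemm` are illustrative, not from the paper.

```python
import numpy as np

def block_sparse_gemm(x, blocks, block_index, block_size, out_dim):
    """Multiply x (batch, in_dim) by a block-sparse weight matrix.

    The weights are stored as a list of dense (block_size, block_size)
    blocks plus (block_row, block_col) coordinates; all-zero blocks are
    simply absent, so work scales with the number of surviving blocks,
    which is what a dedicated GEMM circuit can exploit.
    """
    batch, _ = x.shape
    y = np.zeros((batch, out_dim), dtype=x.dtype)
    for blk, (br, bc) in zip(blocks, block_index):
        r0, c0 = br * block_size, bc * block_size
        # Accumulate the contribution of one non-zero weight block.
        y[:, c0:c0 + block_size] += x[:, r0:r0 + block_size] @ blk
    return y
```

The result matches a dense GEMM against the equivalent weight matrix in which the missing blocks are zero, while skipping their multiply-accumulate work entirely.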
Keyword(in Japanese) (See Japanese page)
Keyword(in English) DNN / Vision Transformer / MLP / gMLP / FPGA
Paper # VLD2021-21,ICD2021-31,DC2021-27,RECONF2021-29
Date of Issue 2021-11-24 (VLD, ICD, DC, RECONF)

Conference Information
Committee VLD / DC / RECONF / ICD / IPSJ-SLDM
Conference Date 2021/12/1 (2 days)
Place (in Japanese) (See Japanese page)
Place (in English) Online
Topics (in Japanese) (See Japanese page)
Topics (in English) Design Gaia 2021 -New Field of VLSI Design-
Chair Kazutoshi Kobayashi(Kyoto Inst. of Tech.) / Hiroshi Takahashi(Ehime Univ.) / Kentaro Sano(RIKEN) / Masafumi Takahashi(Kioxia) / Yuichi Nakamura(NEC)
Vice Chair Minako Ikeda(NTT) / Tatsuhiro Tsuchiya(Osaka Univ.) / Yoshiki Yamaguchi(Tsukuba Univ.) / Tomonori Izumi(Ritsumeikan Univ.) / Makoto Ikeda(Univ. of Tokyo)
Secretary Minako Ikeda(Osaka Univ.) / Tatsuhiro Tsuchiya(NEC) / Yoshiki Yamaguchi(Nihon Univ.) / Tomonori Izumi(Chiba Univ.) / Makoto Ikeda(NEC) / (Tokyo Inst. of Tech.)
Assistant / / Yukitaka Takemura(INTEL) / Yasunori Osana(Ryukyu Univ.) / Kosuke Miyaji(Shinshu Univ.) / Yoshiaki Yoshihara(Kioxia) / Takeshi Kuboki(Kyushu Univ.)

Paper Information
Registration To Technical Committee on VLSI Design Technologies / Technical Committee on Dependable Computing / Technical Committee on Reconfigurable Systems / Technical Committee on Integrated Circuits and Devices / Special Interest Group on System and LSI Design Methodology
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Block Sparse MLP-based Vision DNN Accelerators on Embedded FPGAs
Sub Title (in English)
Keyword(1) DNN
Keyword(2) Vision Transformer
Keyword(3) MLP
Keyword(4) gMLP
Keyword(5) FPGA
1st Author's Name Akira Jinguji
1st Author's Affiliation Tokyo Institute of Technology(Tokyo Tech)
2nd Author's Name Hiroki Nakahara
2nd Author's Affiliation Tokyo Institute of Technology(Tokyo Tech)
Date 2021-12-01
Paper # VLD2021-21,ICD2021-31,DC2021-27,RECONF2021-29
Volume (vol) vol.121
Number (no) VLD-277,ICD-278,DC-279,RECONF-280
Page pp.25-30(VLD), pp.25-30(ICD), pp.25-30(DC), pp.25-30(RECONF)
#Pages 6
Date of Issue 2021-11-24 (VLD, ICD, DC, RECONF)