Presentation 2022-03-07
[Memorial Lecture] DistriHD: A Memory Efficient Distributed Binary Hyperdimensional Computing Architecture for Image Classification
Dehua Liang, Jun Shiomi, Noriyuki Miura, Hiromitsu Awano
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Hyper-Dimensional (HD) computing is a brain-inspired learning approach for efficient and fast learning on today's embedded devices. HD computing first encodes all data points into high-dimensional vectors called hypervectors and then performs the classification task efficiently using a well-defined set of operations. Although HD computing has achieved reasonable performance in several practical tasks, it comes with a huge memory requirement, since each data point must be stored as a very long vector of thousands of bits. To alleviate this problem, we propose a novel HD computing architecture, called DistriHD, which enables HD computing to be trained and tested using binary hypervectors and achieves high accuracy in single-pass training mode with significantly reduced hardware resources. DistriHD encodes data points into distributed binary hypervectors and eliminates the expensive item memory in the encoder, which significantly reduces the hardware cost of inference. Our evaluation also shows that our model can achieve a 27.6× reduction in memory cost without hurting classification accuracy. The hardware implementation also demonstrates that DistriHD achieves over 9.9× and 28.8× reductions in area and power, respectively.
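As a rough illustration of the HD computing flow described in the abstract (encode data points into binary hypervectors, then classify with simple bitwise operations), the following is a minimal Python sketch of a generic binary HD classifier with single-pass training. It assumes random position/level hypervectors, XOR binding, majority bundling, and Hamming-distance inference; these are illustrative assumptions only and do not reproduce DistriHD's distributed encoder or its item-memory-free design.

import numpy as np

D = 10000          # hypervector dimensionality (thousands of bits)
rng = np.random.default_rng(0)

def random_hv():
    """Random dense binary hypervector of dimension D."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def encode(sample, position_hvs, level_hvs):
    """Encode a quantized feature vector: bind each feature's level hypervector
    with its position hypervector (XOR), then bundle all features by bitwise majority."""
    bound = np.stack([position_hvs[i] ^ level_hvs[v] for i, v in enumerate(sample)])
    return (bound.sum(axis=0) > len(sample) / 2).astype(np.uint8)

def train(samples, labels, n_classes, position_hvs, level_hvs):
    """Single-pass training: accumulate the encodings of each class and binarize by majority."""
    acc = np.zeros((n_classes, D), dtype=np.int32)
    cnt = np.zeros(n_classes, dtype=np.int32)
    for x, y in zip(samples, labels):
        acc[y] += encode(x, position_hvs, level_hvs)
        cnt[y] += 1
    return (acc > cnt[:, None] / 2).astype(np.uint8)   # binary class hypervectors

def classify(sample, class_hvs, position_hvs, level_hvs):
    """Predict the class whose hypervector has the smallest Hamming distance to the query."""
    q = encode(sample, position_hvs, level_hvs)
    return int(np.argmin([(q ^ c).sum() for c in class_hvs]))

# Toy usage (hypothetical data): 8 features quantized to 4 levels, 3 classes.
n_feat, n_levels, n_classes = 8, 4, 3
position_hvs = np.stack([random_hv() for _ in range(n_feat)])
level_hvs = np.stack([random_hv() for _ in range(n_levels)])
X = rng.integers(0, n_levels, size=(30, n_feat))
y = rng.integers(0, n_classes, size=30)
class_hvs = train(X, y, n_classes, position_hvs, level_hvs)
pred = classify(X[0], class_hvs, position_hvs, level_hvs)

Because training and inference operate only on binary vectors with XOR, counting, and thresholding, such a classifier maps naturally to compact hardware; the memory and area savings reported for DistriHD come from its specific distributed encoding, which is not shown here.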
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Brain-inspired Computing, Hyper-Dimensional Computing, Memory-Efficiency, Distributed System
Paper # VLD2021-84,HWS2021-61
Date of Issue 2022-02-28 (VLD, HWS)

Conference Information
Committee VLD / HWS
Conference Date 2022/3/7 (2 days)
Place (in Japanese) (See Japanese page)
Place (in English) Online
Topics (in Japanese) (See Japanese page)
Topics (in English) Design Technology for System-on-Silicon, Hardware Security, etc.
Chair Kazutoshi Kobayashi(Kyoto Inst. of Tech.) / Yasuhisa Shimazaki(Renesas Electronics)
Vice Chair Minako Ikeda(NTT) / Makoto Nagata(Kobe Univ.) / Daisuke Suzuki(Mitsubishi Electric)
Secretary Minako Ikeda(Osaka Univ.) / Makoto Nagata(NEC) / Daisuke Suzuki(NTT)
Assistant

Paper Information
Registration To Technical Committee on VLSI Design Technologies / Technical Committee on Hardware Security
Language ENG
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) [Memorial Lecture] DistriHD: A Memory Efficient Distributed Binary Hyperdimensional Computing Architecture for Image Classification
Sub Title (in English)
Keyword(1) Brain-inspired Computing, Hyper-Dimensional Computing, Memory-Efficiency, Distributed System
1st Author's Name Dehua Liang
1st Author's Affiliation Osaka University(Osaka Univ.)
2nd Author's Name Jun Shiomi
2nd Author's Affiliation Osaka University(Osaka Univ.)
3rd Author's Name Noriyuki Miura
3rd Author's Affiliation Osaka University(Osaka Univ.)
4th Author's Name Hiromitsu Awano
4th Author's Affiliation Kyoto University(Kyoto Univ.)
Date 2022-03-07
Paper # VLD2021-84,HWS2021-61
Volume (vol) vol.121
Number (no) no.412(VLD), no.413(HWS)
Page pp.44-44(VLD), pp.44-44(HWS)
#Pages 1
Date of Issue 2022-02-28 (VLD, HWS)