Presentation 2020-03-06
Modular Reservoir Network for Pattern Recognition
Yifan Dai, Masao Sakuraba, Shigeo Sato,
Abstract(in Japanese) (See Japanese page)
Abstract(in English) This work is based on the liquid state machine (LSM) [1], a reservoir network [2] closely related to neurophysiology that has achieved great success in time-series processing. However, compared with popular feedforward artificial neural networks, the simulation cost of an LSM built from spiking neurons is high, which limits the size and functionality of the LSM. To make the LSM more practical, the purpose of this work is to 1) scale up the reservoir efficiently at low computational cost and 2) boost its functionality by changing the topology and synapses of the network. First, we introduce a modular structure in which the synapses between neurons in different modules form a directed acyclic graph, while neurons within the same module are connected recurrently. Under this topology, a divide-and-conquer algorithm can be applied to reduce the computational complexity dramatically. Second, we draw on a study of the visual cortex [3], which proposed a convincing Hough-transform-based model of motion detection, to improve the pattern-recognition performance of the reservoir network. We use specially designed input synapses with which the responses of the postsynaptic neurons perform the Hough transform at no extra cost. Experimentally, we show that this assignment improves performance considerably compared with randomly connected input synapses. Numerical experiments are performed on the MNIST dataset, with images converted into spike sequences by Poisson encoding. With around 1200 neurons, the proposed structure achieves the highest precision among previously reported networks of similar size (small-world, random, and the network proposed in [4] with connection probability decaying with distance). For the readout, both an SVM and a linear map are implemented; the SVM readout performs better than the linear map most of the time because the linear map overfits.
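The modular topology described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal construction, under assumed connection probabilities (`p_intra`, `p_inter`) and Gaussian weights, of a weight matrix whose modules are recurrent internally while inter-module synapses run only from earlier to later modules, so the module graph is a directed acyclic graph (block lower-triangular matrix):

```python
import numpy as np

def block_dag_weights(module_sizes, p_intra=0.3, p_inter=0.1, rng=None):
    """Reservoir weight matrix W[post, pre] whose modules form a DAG:
    dense recurrent blocks on the diagonal (within-module synapses),
    sparse feedforward blocks below it (earlier module -> later module),
    and zeros above it (no backward inter-module synapses)."""
    rng = np.random.default_rng(rng)
    n = int(sum(module_sizes))
    W = np.zeros((n, n))
    starts = np.cumsum([0] + list(module_sizes))
    for i, (a, b) in enumerate(zip(starts, starts[1:])):
        # recurrent connections inside module i
        mask = rng.random((b - a, b - a)) < p_intra
        W[a:b, a:b] = mask * rng.normal(size=(b - a, b - a))
        # feedforward connections from every earlier module j < i
        for c, d in zip(starts[:i], starts[1:i + 1]):
            mask = rng.random((b - a, d - c)) < p_inter
            W[a:b, c:d] = mask * rng.normal(size=(b - a, d - c))
    return W

W = block_dag_weights([50, 30, 20], rng=0)
```

Because later modules never feed back into earlier ones, each module's dynamics can be simulated once its predecessors are done, which is what permits the divide-and-conquer reduction in simulation cost.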
Moreover, the proposed modular method shows strong system robustness. As indicated in [5], although a reservoir network can resist input noise, even slight system noise (randomly disabling some synapses) deteriorates performance considerably. We tested the proposed network under system noise ranging from 0.1% to 20%; at all noise levels, the proposed structure shows a significant improvement over previously reported networks. In addition, we find that the proposed reservoir network suffers less performance loss than a CNN when labeled training data are scarce, which suggests many potential applications.
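The "system noise" used in the robustness test above (randomly disabling a fraction of synapses) can be sketched as follows. This is an illustrative helper, not the paper's code; the function name and the choice of zeroing weights in place of removing connections are assumptions:

```python
import numpy as np

def disable_synapses(weights, fraction, rng=None):
    """Return a copy of the weight matrix with a given fraction of the
    nonzero synapses set to zero, emulating random synaptic failure."""
    rng = np.random.default_rng(rng)
    w = weights.copy()
    nonzero = np.flatnonzero(w)
    n_kill = int(round(fraction * nonzero.size))
    killed = rng.choice(nonzero, size=n_kill, replace=False)
    w.flat[killed] = 0.0
    return w

W = np.random.default_rng(1).normal(size=(100, 100))
W_noisy = disable_synapses(W, 0.2, rng=2)  # 20% system noise
```

Sweeping `fraction` from 0.001 to 0.2 and re-evaluating test accuracy reproduces the kind of noise sweep the abstract reports.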
Keyword(in Japanese) (See Japanese page)
Keyword(in English) LSM, Reservoir computing, Modular, Pattern recognition, Hough transform, Visual cortex
Paper # NC2019-110
Date of Issue 2020-02-26 (NC)

Conference Information
Committee NC / MBE
Conference Date 2020/3/4(3days)
Place (in Japanese) (See Japanese page)
Place (in English) University of Electro-Communications
Topics (in Japanese) (See Japanese page)
Topics (in English) Neuro Computing, Medical Engineering, etc.
Chair Hayaru Shouno(UEC) / Taishin Nomura(Osaka Univ.)
Vice Chair Kazuyuki Samejima(Tamagawa Univ) / Takashi Watanabe(Tohoku Univ.)
Secretary Kazuyuki Samejima(NAIST) / Takashi Watanabe(NTT)
Assistant Takashi Shinozaki(NICT) / Ken Takiyama(TUAT) / Yasuyuki Suzuki(Osaka Univ.) / Akihiro Karashima(Tohoku Inst. of Tech.)

Paper Information
Registration To Technical Committee on Neurocomputing / Technical Committee on ME and Bio Cybernetics
Language ENG
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Modular Reservoir Network for Pattern Recognition
Sub Title (in English)
Keyword(1) LSM, Reservoir computing, Modular, Pattern recognition, Hough transform, Visual cortex
1st Author's Name Yifan Dai
1st Author's Affiliation Tohoku University(Tohoku Univ.)
2nd Author's Name Masao Sakuraba
2nd Author's Affiliation Tohoku University(Tohoku Univ.)
3rd Author's Name Shigeo Sato
3rd Author's Affiliation Tohoku University(Tohoku Univ.)
Date 2020-03-06
Paper # NC2019-110
Volume (vol) vol.119
Number (no) NC-453
Page pp.199-199 (NC)
#Pages 1
Date of Issue 2020-02-26 (NC)