Presentation 2002/3/8
Auto-Associative Memories Based on Recurrent Multilayer Perceptrons and Sparsely Interconnected Neural Networks
Takeshi KAMIO, Mititada MORISUE,
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Both quantized neural networks (QNNs) and sparsely interconnected neural networks (SINNs) are suitable for hardware implementation. However, quantized parameters and sparsely interconnected structure decrease the capabilities of QNNs and SINNs, respectively. In this report, we propose associative memories composed of recurrent multilayer perceptrons (RMLPs) with 3-valued weights and SINNs to improve their capabilities at a low cost.
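The report itself is not reproduced in this record, but the core idea of a hardware-friendly associative memory with 3-valued (ternary) weights can be illustrated with a small sketch. The following is a minimal, assumption-laden example: a Hopfield-style auto-associative memory trained by Hebbian outer products and then quantized to weights in {-1, 0, +1}. This is an illustrative stand-in, not the RMLP/SINN architecture proposed by the authors; the function names, the quantization threshold, and the synchronous recall rule are all assumptions for the sketch.

```python
import numpy as np

def train_ternary(patterns, threshold=0):
    """Hebbian outer-product storage, then quantize weights to {-1, 0, +1}.

    `patterns` is a list of +/-1 vectors. The threshold is a sketch
    parameter: weights whose magnitude is <= threshold become 0,
    which also induces sparsity in the interconnections.
    """
    P = np.asarray(patterns)        # shape (num_patterns, N)
    W = P.T @ P                     # accumulate outer products
    np.fill_diagonal(W, 0)          # no self-connections
    T = np.sign(W).astype(int)
    T[np.abs(W) <= threshold] = 0   # small magnitudes quantize to zero
    return T

def recall(T, x, max_iters=20):
    """Synchronous recall; a zero activation keeps the previous state."""
    x = np.asarray(x).copy()
    for _ in range(max_iters):
        a = T @ x
        new = np.where(a == 0, x, np.sign(a)).astype(int)
        if np.array_equal(new, x):  # reached a fixed point
            break
        x = new
    return x

# Store two +/-1 patterns, then recover one from a single-bit corruption.
p1 = [1, 1, -1, -1, 1, -1, 1, 1]
p2 = [-1, 1, 1, -1, -1, 1, 1, -1]
T = train_ternary([p1, p2])
noisy = list(p1)
noisy[0] = -noisy[0]
restored = recall(T, noisy)
```

Ternary weights of this kind are attractive for hardware because each synapse needs only two bits and zero-valued weights need no physical connection at all, which is the sparsity/quantization trade-off the abstract refers to.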
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Auto-Associative Memories / Sparsely Interconnected Neural Networks / Recurrent Multilayer Perceptrons
Paper # NLP2001-110
Date of Issue

Conference Information
Committee NLP
Conference Date 2002/3/8 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Nonlinear Problems (NLP)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) Auto-Associative Memories Based on Recurrent Multilayer Perceptrons and Sparsely Interconnected Neural Networks
Sub Title (in English)
Keyword(1) Auto-Associative Memories
Keyword(2) Sparsely Interconnected Neural Networks
Keyword(3) Recurrent Multilayer Perceptrons
1st Author's Name Takeshi KAMIO
1st Author's Affiliation Department of Information Machines and Interfaces, Faculty of Information Sciences, Hiroshima City University
2nd Author's Name Mititada MORISUE
2nd Author's Affiliation Department of Information Machines and Interfaces, Faculty of Information Sciences, Hiroshima City University
Date 2002/3/8
Paper # NLP2001-110
Volume (vol) vol.101
Number (no) 723
Page pp.-
#Pages 8
Date of Issue