Presentation 1995/4/27
A Sparse Memory-Access Neural Network Engine with 96 Parallel Data-Driven Processing Units
Kimihisa Aihara, Osamu Fujita, Kuniharu Uchimura
Abstract(in English) This chip has a peak performance of 30 GCPS and contains 96 parallel data-driven 22-bit processing units and 12,288 16-bit synapse-weight memories. It reduces the number of synapse-weight memory accesses and neuron calculations without an accuracy penalty. In a pattern-recognition example, the number of accesses is reduced to 0.87% of that required by a conventional method, and the practical performance is 18 GCPS.
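The abstract states only that weight-memory accesses and neuron calculations are reduced without an accuracy penalty; a plausible reading of the data-driven design is that weight fetches and multiply-accumulates are skipped for zero-valued neuron outputs. The sketch below is a minimal illustration of that principle under this assumption; the function names, array shapes, and sparsity level are hypothetical and not taken from the paper.

```python
import numpy as np

def dense_layer(x, W):
    """Conventional evaluation: every synapse weight is fetched and multiplied."""
    accesses = W.size                      # one weight-memory access per synapse
    y = W @ x
    return y, accesses

def sparse_access_layer(x, W):
    """Data-driven evaluation (assumed): only nonzero inputs trigger weight fetches/MACs."""
    active = np.nonzero(x)[0]              # indices of nonzero neuron outputs
    accesses = W.shape[0] * active.size    # weights fetched only for active inputs
    y = W[:, active] @ x[active]           # same result, far fewer accesses
    return y, accesses

# Toy example: 128 output neurons, 96 inputs, 95% of inputs zero (illustrative numbers).
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 96)).astype(np.float32)
x = rng.standard_normal(96).astype(np.float32)
x[rng.random(96) < 0.95] = 0.0

y_dense, n_dense = dense_layer(x, W)
y_sparse, n_sparse = sparse_access_layer(x, W)
assert np.allclose(y_dense, y_sparse)      # no accuracy penalty from skipping zero terms
print(f"weight accesses: {n_sparse}/{n_dense} = {n_sparse / n_dense:.2%}")
```

In this toy setting the access count drops roughly in proportion to the input sparsity, which is the same effect the abstract reports (0.87% of conventional accesses in its pattern-recognition example).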
Keyword(in English) neural network / neuron / synapse / digital / data-driven

Conference Information
Committee CPSY
Conference Date 1995/4/27 (1 day)

Paper Information
Registration To Computer Systems (CPSY)
Language JPN
Title (in English) A Sparse Memory-Access Neural Network Engine with 96 Parallel Data-Driven Processing Units
Keyword(1) neural network
Keyword(2) neuron
Keyword(3) synapse
Keyword(4) digital
Keyword(5) data-driven
1st Author's Name Kimihisa Aihara
1st Author's Affiliation NTT LSI Laboratories
2nd Author's Name Osamu Fujita
2nd Author's Affiliation NTT LSI Laboratories
3rd Author's Name Kuniharu Uchimura
3rd Author's Affiliation NTT LSI Laboratories
Date 1995/4/27
Volume (vol) vol.95
Number (no) 20
#Pages 8