Presentation 1996/5/24
The Relation between the Precision of the Weights and the Output Error in Multi-Layer Neural Networks
Kazushi IKEDA, Akihiro SUZUKI, Kenji NAKAYAMA
Abstract(in Japanese) (See Japanese page)
Abstract(in English) When implementing neural networks in hardware, it is important to reduce the number of bits used for the neuron weights in order to reduce circuit size. In this paper, we analyze how much the output error increases when the weights are stored with reduced precision, for both fixed-point and floating-point representations. The results are supported by computer simulations.
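The effect described in the abstract can be illustrated with a small sketch (not the paper's exact analysis): quantize the weights of a toy multi-layer perceptron to b-bit fixed point and observe how the output error grows as precision drops. The network sizes, the quantization scheme, and the RMSE metric are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_fixed(w, bits, w_max=1.0):
    """Round weights to a signed fixed-point grid with `bits` total bits."""
    step = w_max / (2 ** (bits - 1))                 # quantization step size
    return np.clip(np.round(w / step) * step, -w_max, w_max - step)

def mlp(x, W1, W2):
    """Two-layer network with tanh hidden units (illustrative architecture)."""
    return np.tanh(x @ W1) @ W2

# Random full-precision network and test inputs
W1 = rng.uniform(-1, 1, (4, 8))
W2 = rng.uniform(-1, 1, (8, 1))
X = rng.uniform(-1, 1, (200, 4))
y_ref = mlp(X, W1, W2)                               # full-precision output

errors = {}
for bits in (12, 8, 6, 4):
    y_q = mlp(X, quantize_fixed(W1, bits), quantize_fixed(W2, bits))
    errors[bits] = float(np.sqrt(np.mean((y_q - y_ref) ** 2)))
    print(f"{bits:2d}-bit weights: output RMSE = {errors[bits]:.5f}")
```

As expected, the output RMSE shrinks as more bits are kept; the paper's contribution is to characterize this relation analytically for both fixed-point and floating-point weight representations.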
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Neural Network / Low-Precision / Quantization
Paper # NC96-1
Date of Issue

Conference Information
Committee NC
Conference Date 1996/5/24 (1 day)
Place (in Japanese) (See Japanese page)
Place (in English)
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair
Vice Chair
Secretary
Assistant

Paper Information
Registration To Neurocomputing (NC)
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) The Relation between the Precision of the Weights and the Output Error in Multi-Layer Neural Networks
Sub Title (in English)
Keyword(1) Neural Network
Keyword(2) Low-Precision
Keyword(3) Quantization
1st Author's Name Kazushi IKEDA
1st Author's Affiliation Fac. Engineering, Kanazawa Univ.
2nd Author's Name Akihiro SUZUKI
2nd Author's Affiliation Fac. Engineering, Kanazawa Univ.
3rd Author's Name Kenji NAKAYAMA
3rd Author's Affiliation Fac. Engineering, Kanazawa Univ.
Date 1996/5/24
Paper # NC96-1
Volume (vol) vol.96
Number (no) 76
Page pp.-
#Pages 8
Date of Issue