International Symposium on Nonlinear Theory and its Applications
2005
Session Number:2-1-3
Number:2-1-3-1
An interpretative recurrent neural network to improve pattern storing capabilities - dynamical considerations
Colin Molter, Utku Salihoglu, Hugues Bersini
pp.586-589
Publication Date:2005/10/18
Online ISSN:2188-5079
DOI:10.34385/proc.40.2-1-3-1
Summary:
Seminal observations by Skarda and Freeman [1] on the olfactory bulb of rabbits during cognitive tasks suggested that the basal state of behavior lies in the network's spatio-temporal dynamics. Following these neurophysiological observations, a new learning task for recurrent neural networks has been proposed by the authors in recent papers [2], [3]. The task consists in storing information in spatio-temporal dynamical attractors of these artificial networks. Two innovative learning algorithms are discussed and compared here on the basis of dynamical considerations: first, an iterative supervised Hebbian learning algorithm where all of the information to be stored is fully specified; second, an iterative unsupervised Hebbian learning algorithm where the network has to categorize external stimuli by building its own internal representations.
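The abstract does not spell out the authors' algorithms, but the general idea of storing information in a dynamical attractor rather than a fixed point can be illustrated with a classical related construction: an asymmetric Hebbian rule that wires each pattern to recall its successor, so a Hopfield-style recurrent network traverses a stored limit cycle. The network size, cycle length, bipolar coding, and sign activation below are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 4  # assumed sizes: 64 units, a cycle of 4 patterns

# Random bipolar patterns forming the target cycle x_0 -> x_1 -> ... -> x_0.
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Asymmetric Hebbian rule: each pattern is wired to its successor, so the
# stored "memory" is the whole cycle (a spatio-temporal attractor),
# not a static fixed point.
W = np.zeros((N, N))
for p in range(P):
    W += np.outer(patterns[(p + 1) % P], patterns[p]) / N

def step(x):
    """One synchronous update of the recurrent network."""
    return np.sign(W @ x)

# Starting on the cycle, the dynamics should traverse it: record the
# normalized overlap with the expected pattern after each step.
x = patterns[0].copy()
overlaps = []
for p in range(1, P + 1):
    x = step(x)
    overlaps.append(float(x @ patterns[p % P]) / N)
```

With these sizes the crosstalk between random patterns is small, so each update lands (nearly) on the next pattern of the cycle and the overlaps stay close to 1.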