Proceedings of the 2013 International Symposium on Nonlinear Theory and its Applications

2013

Session Number: C1L-D

Number: 374

Approximated Probabilistic Inference on a Dynamic Bayesian Network Using a Multistate Neural Network

Makito Oku

pp.374-377

Online ISSN: 2188-5079

DOI: 10.15248/proc.2.374


Summary:
Dynamic Bayesian networks (DBNs) are flexible tools for modeling complex relationships among time-evolving random variables. One application of DBNs in computational neuroscience is to represent the internal model, which the brain uses to simulate the environment, as a DBN. Exact inference on the DBN defines the optimal behavior for both sensory and motor signal processing. However, since exact inference requires huge computational resources, approximation methods are key to utilizing the DBN representation. Here, I propose a new heuristic algorithm for probabilistic inference on a DBN using a multistate neural network. Each random variable of the DBN is replaced by a multistate neuron, and the directed links of the DBN are translated into nonlinear interactions among the multistate neurons. To approximate backward dependencies among variables in the DBN, the network supports a bottom-up error-reporting mechanism against top-down predictions. The proposed method is tested on a simple partially observable Markov decision process task and exhibits better performance than the ancestral sampling method.
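The baseline mentioned in the abstract, ancestral sampling, draws each variable of the DBN in topological order so that parents are sampled before their children. A minimal sketch of this idea on a toy two-variable DBN (hidden state `h_t`, observation `o_t`); the transition and emission tables are purely illustrative and not taken from the paper:

```python
import random

# Toy DBN: hidden state h_t in {0, 1}, observation o_t in {0, 1}.
# These probability tables are illustrative assumptions only.
TRANS = {0: [0.8, 0.2], 1: [0.3, 0.7]}   # P(h_t | h_{t-1})
EMIT  = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # P(o_t | h_t)

def sample(dist):
    """Draw an index from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

def ancestral_sample(T, h0=0):
    """Sample a trajectory in ancestral (topological) order:
    at each time slice, the parent h_t is drawn before its child o_t."""
    h, traj = h0, []
    for _ in range(T):
        h = sample(TRANS[h])   # transition: h_t given h_{t-1}
        o = sample(EMIT[h])    # emission: o_t given h_t
        traj.append((h, o))
    return traj

print(ancestral_sample(5))
```

Because each pass yields one joint sample, posterior quantities must be estimated by averaging over many such trajectories, which is the cost the proposed multistate-neuron approximation aims to avoid.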

References:

[1] J. Pearl, Artif. Intell., 29(3):241-288, 1986.

[2] K. P. Murphy, Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, University of California, Berkeley, 2002.

[3] H. Attias, in Proc. AISTATS'03, 2003.

[4] D. Verma, R. P. N. Rao, in Proc. IROS'06, pp. 2382-2387, IEEE, 2006.

[5] M. Botvinick, J. An, Adv. Neural Info. Proc. Syst., 21:169-176, 2009.

[6] K. P. Körding, D. M. Wolpert, Nature, 427(6971):244-247, 2004.

[7] D. C. Knill, A. Pouget, Trends Neurosci., 27(12):712-719, 2004.

[8] M. Miyazaki et al., Nat. Neurosci., 9(7):875-877, 2006.

[9] Y. Sato, T. Toyoizumi, K. Aihara, Neural Comput., 19(12):3335-3355, 2007.

[10] K. Doya et al., eds., Bayesian brain: Probabilistic approaches to neural coding. The MIT Press, 2007.

[11] Y. Sato, K. Aihara, PLoS One, 6(4):e19377, 2011.

[12] K.-I. Sawai, Y. Sato, K. Aihara, Front. Psychol., 3:524, 2012.

[13] P. Berkes et al., Science, 331(6013):83-87, 2011.

[14] M. Oku, K. Aihara, SEISAN KENKYU, 65(3), 2013. (in Japanese).

[15] R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.

[16] B. W. Balleine, A. Dickinson, Neuropharmacology, 37(4-5):407-419, 1998.

[17] N. D. Daw, Y. Niv, P. Dayan, Nat. Neurosci., 8(12):1704-1711, 2005.

[18] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.

[19] M. Oku, K. Aihara, in Proc. ICCN'11, pp. 213-219, 2011.

[20] M. Oku, K. Aihara, Phys. Lett. A, 374(48):4859-4863, 2010.

[21] R. P. N. Rao, D. H. Ballard, Nat. Neurosci., 2(1):79-87, 1999.

[22] S. O. Murray et al., Proc. Natl. Acad. Sci. U.S.A., 99(23):15164-15169, 2002.

[23] L. H. Arnal, V. Wyart, A.-L. Giraud, Nat. Neurosci., 14(6):797-801, 2011.

[24] G. F. Cooper, in Proc. UAI'88, pp. 55-63, 1988.

[25] M. L. Littman, A. R. Cassandra, L. P. Kaelbling, in Proc. ICML'95, pp. 362-370, 1995.