Abstract / Keywords |
Title |
2009-07-13 10:30
Composition of Feature Space and State Space Dynamics Models for Model-based Reinforcement Learning ○Akihiko Yamaguchi・Jun Takamatsu・Tsukasa Ogasawara(NAIST) NLP2009-15 NC2009-8 |
Abstract |
(English) |
Learning a dynamics model and a reward model during reinforcement learning is useful, since the agent can also update its value function with these models. In this paper, we propose a general dynamics model that composes a feature space dynamics model with a state space dynamics model. This approach achieves good generalization from a small number of samples, thanks to the linearity of the state space dynamics, without losing accuracy. We demonstrate a simulation comparison of several dynamics models used together with a Dyna algorithm. |
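The abstract's core idea is Dyna-style planning: a learned model generates simulated transitions that supplement real experience for value updates. Below is a minimal, self-contained sketch of the generic Dyna-Q loop on a tiny deterministic chain MDP. It illustrates only the Dyna mechanism referenced above, not the authors' composed feature-space/state-space dynamics model; all names, the toy environment, and the hyperparameters are this sketch's own assumptions.

```python
import random

# Minimal Dyna-Q sketch on a 5-state deterministic chain.
# States 0..4; state 4 is terminal and yields reward 1.
# This is an illustrative toy, not the method proposed in the paper.

N_STATES = 5
ACTIONS = [-1, +1]          # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1

def env_step(s, a):
    """True environment dynamics (deterministic chain)."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

def dyna_q(episodes=30, planning_steps=10, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}  # learned deterministic model: (s, a) -> (s2, r)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = env_step(s, a)
            # direct RL update from the real transition
            Q[(s, a)] += ALPHA * (
                r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)]
            )
            # model learning: remember the observed transition
            model[(s, a)] = (s2, r)
            # planning: extra value updates from simulated transitions
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
                Q[(ps, pa)] += ALPHA * (
                    pr + GAMMA * max(Q[(ps2, x)] for x in ACTIONS) - Q[(ps, pa)]
                )
            s = s2
    return Q

if __name__ == "__main__":
    Q = dyna_q()
    # The learned greedy policy should move right in every non-terminal state.
    policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
    print(policy)
```

The planning loop is where a better dynamics model pays off: the more accurate and sample-efficient the model (the paper's motivation for composing feature-space and state-space dynamics), the more useful each simulated update becomes.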
Keywords
(English)
reinforcement learning / model-based reinforcement learning / Dyna-style planning / prioritized sweeping / dynamics model |
Bibliographic information
IEICE Technical Report, vol. 109, no. 125, NC2009-8, pp. 7-12, July 2009.
Report number
NC2009-8 |
Date of issue
2009-07-06 (NLP, NC) |
ISSN |
Print edition: ISSN 0913-5685 Online edition: ISSN 2432-6380 |
Copyright
Copyright of the papers published in IEICE technical reports belongs to the Institute of Electronics, Information and Communication Engineers (IEICE). (License numbers: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)
|