All Technical Committee Conferences (Searched in: All Years)
Search Results: Conference Papers
Conference Papers (Available on Advance Programs) (Sort by: Date Descending)

Fields: Committee / Date Time / Place / Title, Authors / Abstract / Paper #
Committee: AI
Date Time: 2019-07-22 16:30
Place: Hokkaido
Title: Modeling of Cyber Attack Based on POMDP
Authors: Kazuma Igami, Hirofumi Yamaki (Tokyo Denki Univ.)
Abstract: APTs (Advanced Persistent Threats), which are a type of cyber-attack, are a major threat to information systems because t...
Paper #: AI2019-15, pp.77-82
Committee: IBISML
Date Time: 2016-11-17 14:00
Place: Kyoto
Venue: Kyoto Univ.
Title: Approximate Value Iteration Algorithms for Partially Observable Markov Decision Processes in Geometric Dual Representation
Authors: Hiroshi Tsukahara, Mitsuru Anbai, Makoto Oobayashi (Denso IT Lab.)
Abstract: We propose new approximate algorithms for the value iteration of partially observable Markov decision processes (POMDPs...
Paper #: IBISML2016-71, pp.177-184
Committee: NC, NLP
Date Time: 2013-01-24 10:50
Place: Hokkaido
Venue: Hokkaido University Centennial Memory Hall
Title: Significance of non-stationary of dynamics for learning cooperative behavior
Authors: Akihiro Tawa, Shin-ichi Maeda, Shin Ishii (Kyoto Univ.)
Abstract: To understand how cooperative behaviors emerge is important in the field of multi-agent system research. Although this e...
Paper #: NLP2012-108, NC2012-98, pp.25-30
Committee: IBISML
Date Time: 2012-03-12 14:40
Place: Tokyo
Venue: The Institute of Statistical Mathematics
Title: Kernel Bellman Equations in POMDPs
Authors: Yu Nishiyama (ISM), Abdeslam Boularias (MPI), Arthur Gretton (UCL), Kenji Fukumizu (ISM)
Abstract: We propose to handle POMDPs in reproducing kernel Hilbert spaces (RKHSs) using recent kernel methods of embedding distri...
Paper #: IBISML2011-92, pp.35-42
Committee: IBISML
Date Time: 2012-03-12 15:30
Place: Tokyo
Venue: The Institute of Statistical Mathematics
Title: Apprenticeship Learning for Model Parameters of Partially Observable Environments
Authors: Takaki Makino (Univ. of Tokyo), Johane Takeuchi (HRI-JP)
Abstract: We consider apprenticeship learning, i.e., making an agent learn a task by observing an expert demonstrating the task, in a...
Paper #: IBISML2011-94, pp.49-54
Committee: NC, IPSJ-BIO
Date Time: 2011-06-24 16:30
Place: Okinawa
Venue: 50th Anniversary Memorial Hall, University of the Ryukyus
Title: Solving POMDPs using Restricted Boltzmann Machines with Echo State Networks
Authors: Makoto Otsuka, Junichiro Yoshimoto, Stefan Elfwing, Kenji Doya (OIST)
Abstract: A partially observable Markov decision process (POMDP) can be solved in a model-based way using explicit knowledge of th...
Paper #: NC2011-19, pp.143-148
Committee: AI, KEWPIE
Date Time: 2004-07-29 13:00
Place: Kyoto
Venue: Keihanna Plaza
Title: SAPS: The Exploitation Reinforcement Learning Method on POMDPs
Authors: Wataru Uemura, Atsushi Ueno, Shoji Tatsumi (Osaka City Univ.)
Abstract: This paper proposes the Episode Profit Sharing (EPS) that can estimate the received rewards on partially observable marko...
Paper #: AI2004-12, pp.1-5
Copyright and reproduction:
All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)