Presentation Abstract / Keywords
Title
2017-11-10 13:00
Good Arm Identification in a Bandit Setting. ○Hideaki Kano, Junya Honda (Univ. of Tokyo/RIKEN), Kentaro Sakamaki (Univ. of Tokyo), Kentaro Matsuura (Johnson & Johnson), Atsuyoshi Nakamura (Hokkaido Univ.), Masashi Sugiyama (RIKEN/Univ. of Tokyo) IBISML2017-82
Abstract
(Ja)
(Not yet registered)
(En)
In this paper, we consider a new stochastic multi-armed bandit problem called good arm identification (GAI), where a good arm is an arm whose expected reward is greater than or equal to a given threshold. GAI is a pure-exploration problem in which an agent repeats the process of outputting an arm as soon as it is identified as good, without waiting to confirm that the other arms are actually not good. The objective of GAI is to minimize the number of samples for each such process. We find that GAI faces a new kind of dilemma, the exploration-exploitation dilemma of confidence, which best arm identification does not; hence GAI is not merely an extension of best arm identification, and an efficient algorithm design for GAI is quite different from that for best arm identification. We derive a lower bound on the sample complexity of GAI and develop an algorithm whose sample complexity almost matches the lower bound. We also confirm experimentally that the proposed algorithm outperforms a naive algorithm and a thresholding-bandit-like algorithm in synthetic settings and in settings based on medical data.
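The problem definition in the abstract (a good arm is one whose expected reward meets a given threshold) can be illustrated with a minimal single-arm sketch: sample the arm until a Hoeffding confidence interval separates its empirical mean from the threshold, then declare it good or bad. This is only an illustrative sketch under assumed names and parameters (`classify_arm`, Bernoulli rewards, `delta`, `max_pulls` are all invented here); it is not the algorithm proposed in the report, and it omits the anytime confidence correction a rigorous method would need.

```python
import math
import random

def classify_arm(true_mean, threshold, delta=0.01, max_pulls=5000, rng=None):
    """Sample one Bernoulli arm until a Hoeffding confidence bound
    separates its empirical mean from the threshold, or the budget runs out.

    Illustrative sketch only: a fixed per-check delta is used, with no
    correction for the sequential (anytime) nature of the test.
    """
    rng = rng or random.Random(0)
    total = 0
    for n in range(1, max_pulls + 1):
        total += 1 if rng.random() < true_mean else 0
        mean = total / n
        # Hoeffding radius for rewards bounded in [0, 1]
        radius = math.sqrt(math.log(1.0 / delta) / (2 * n))
        if mean - radius >= threshold:
            return "good"   # lower confidence bound clears the threshold
        if mean + radius < threshold:
            return "bad"    # upper confidence bound falls below the threshold
    return "undecided"      # budget exhausted before the interval separated
```

With a wide gap between the true mean and the threshold, the stopping rule fires after a few dozen pulls; arms whose means sit close to the threshold dominate the sample complexity, which is the regime the report's lower bound characterizes.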
Keywords
(Ja)
Machine Learning / Reinforcement Learning / Online Learning / Multi-Armed Bandit Problem
(En)
Machine Learning / Reinforcement Learning / Online Learning / Multi-Armed Bandits
Bibliographic Information
IEICE Tech. Rep., vol. 117, no. 293, IBISML2017-82, pp. 339-346, Nov. 2017.
Report Number
IBISML2017-82 |
Issue Date
2017-11-02 (IBISML) |
ISSN |
Print edition: ISSN 0913-5685 Online edition: ISSN 2432-6380 |
Copyright
Copyright of the papers published in the Technical Reports belongs to the Institute of Electronics, Information and Communication Engineers (IEICE). (Permission numbers: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)
PDF Download
IBISML2017-82 |