Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
NC, IBISML, IPSJ-BIO, IPSJ-MPS |
2023-06-30 09:30 |
Okinawa |
OIST Conference Center (Primary: On-site, Secondary: Online) |
Lipschitz bandits in unbounded metric spaces and their applications Takanobu Hara (Hokkaido Univ.) NC2023-11 IBISML2023-11 |
We consider Lipschitz multi-armed bandit problems on unbounded metric spaces.
NC2023-11 IBISML2023-11 pp.68-72 |
AI |
2022-09-15 15:05 |
Shizuoka |
(Primary: On-site, Secondary: Online) |
AI2022-22 |
(To be available after the conference date)
AI2022-22 pp.25-30 |
R |
2022-06-16 14:25 |
Online |
Online |
Applying Bandit Algorithm toward Software Review Optimization Takuto Kudo, Masateru Tsunoda (Kindai Univ.), Keitaro Nakasai (NIT, Kagoshima College) R2022-7 |
(To be available after the conference date)
R2022-7 pp.7-12 |
CCS, NLP |
2022-06-10 14:50 |
Osaka |
(Primary: On-site, Secondary: Online) |
Optimal preference satisfaction for conflict-free joint decisions Hiroaki Shinkawa, Nicolas Chauvet (Univ. Tokyo), Guillaume Bachelier (Univ. Grenoble Alpes), Andre Roehm, Ryoichi Horisaki, Makoto Naruse (Univ. Tokyo) NLP2022-20 CCS2022-20 |
We all have preferences when multiple choices are available. If we insist on satisfying our preferences only, we may suf...
NLP2022-20 CCS2022-20 pp.100-105 |
PRMU |
2019-12-20 10:30 |
Oita |
|
Basic Study and Application of Discrimination Problem of Classifiers Michiya Abe (UTokyo/NII), Shin'ichi Satoh (NII) PRMU2019-56 |
We proposed the discrimination problem of classifiers, which aims to discriminate whether two classifiers are the same in the sense of...
PRMU2019-56 pp.61-67 |
IBISML |
2018-11-05 15:10 |
Hokkaido |
Hokkaido Citizens Activites Center (Kaderu 2.7) |
[Poster Presentation] Algorithms for Checking Existence of a Bad Arm Utilizing Asymmetry Koji Tabata, Atsuyoshi Nakamura, Tamiki Komatsuzaki (Hokkaido Univ.) IBISML2018-91
We study a problem called a bad arm existence checking problem in which, given two thresholds $\theta_L$ and $\theta_U$ ($...
IBISML2018-91 pp.353-360 |
IBISML |
2017-11-10 13:00 |
Tokyo |
Univ. of Tokyo |
Good Arm Identification via Bandit Feedback Hideaki Kano, Junya Honda (UTokyo/RIKEN), Kentaro Sakamaki (UTokyo), Kentaro Matsuura (Johnson & Johnson), Atsuyoshi Nakamura (HU), Masashi Sugiyama (RIKEN/UTokyo) IBISML2017-82 |
In this paper, we consider and discuss a new stochastic multi-armed bandit problem called {\em good arm identification} (...
IBISML2017-82 pp.339-346 |
NLP |
2017-07-13 15:20 |
Okinawa |
Miyako Island Marine Terminal |
Improving Performance of UCB1-tuned Algorithm by Lebesgue Spectrum Filter Xinyu Cho, Kaori Kuroda, Yukio Murata (TUS), Song-Ju Kim (NIMS), Makoto Naruse (NICT), Mikio Hasegawa (TUS) NLP2017-36 |
In research on asynchronous chaotic CDMA, it has been shown that the sequences, which have negative autocorrelatio...
NLP2017-36 pp.47-52 |
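Several entries in this listing build on the UCB1 family of index policies. As background, here is a minimal sketch of the standard UCB1 rule (empirical mean plus a confidence radius); this is illustrative context only, not the filtered variant proposed in the entry above. The Bernoulli means in the usage example are hypothetical.

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Standard UCB1 index policy (illustrative sketch)."""
    rng = random.Random(seed)
    counts = [0] * n_arms          # times each arm was pulled
    sums = [0.0] * n_arms          # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1            # pull each arm once to initialize
        else:
            # index = empirical mean + sqrt(2 ln t / n_a)
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        r = pull(arm, rng)
        counts[arm] += 1
        sums[arm] += r
    return counts, sums

# Three Bernoulli arms with hypothetical means; over a long horizon
# UCB1 should concentrate its pulls on the best arm (index 2).
means = [0.3, 0.5, 0.7]
counts, sums = ucb1(lambda a, rng: 1.0 if rng.random() < means[a] else 0.0,
                    n_arms=3, horizon=5000)
```

With a gap of 0.2 between the top two means, the confidence radii of the suboptimal arms shrink slowly enough that they are still sampled occasionally, but the vast majority of the 5000 pulls go to arm 2.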
NS, IN (Joint) |
2017-03-03 14:50 |
Okinawa |
OKINAWA ZANPAMISAKI ROYAL HOTEL |
Coalition formation approach for cooperative spectrum sensing in cognitive radio networks by multi-armed bandit problem Sho Iizuka, Jun Kawahara, Shoji Kasahara (NAIST) NS2016-242 |
In cognitive radio networks, it is important for secondary users (SUs) to accurately sense wireless channels for primary...
NS2016-242 pp.487-492 |
AI |
2016-12-09 17:30 |
Oita |
|
Incentive design in crowdsourcing as a multi-armed bandit problem -- A case of more than one requesters -- Shigeo Matsubara, Akihiko Itoh (Kyoto Univ.) AI2016-23
We have examined applying multi-armed bandit (MAB) techniques to an incentive design problem in crowdsourcing. So fa...
AI2016-23 pp.61-66 |
AI |
2015-06-18 16:30 |
Tokyo |
|
Proposed stopping rule of exploration of Budget-limited multi-armed-bandit algorithm LAKUBE Makoto Niimi, Takayuki Ito (NIT) AI2015-10 |
We focus on the budget-limited multi-armed bandit (BL-MAB) problems. In BL-MAB problems, the agent's actions are costly ...
AI2015-10 pp.55-60 |
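As context for the BL-MAB setting in the abstract above: each pull of an arm has a cost, and the agent plays until its budget is exhausted. A naive illustrative policy (not the paper's LAKUBE algorithm or its stopping rule) is to greedily pull the affordable arm with the highest upper confidence bound on reward per unit cost. The arm means and costs in the usage example are hypothetical.

```python
import math
import random

def budget_ucb(pull, costs, budget, seed=0):
    """Greedy reward-per-cost UCB for a budget-limited bandit.
    Illustrative only; real BL-MAB algorithms use more careful
    indices and stopping rules."""
    rng = random.Random(seed)
    n = len(costs)
    counts = [0] * n
    sums = [0.0] * n
    spent, t = 0.0, 0
    while True:
        affordable = [a for a in range(n) if spent + costs[a] <= budget]
        if not affordable:
            break                  # budget exhausted: stop playing
        t += 1
        untried = [a for a in affordable if counts[a] == 0]
        if untried:
            arm = untried[0]       # initialize each arm once
        else:
            # UCB on mean reward, normalized by the arm's cost
            arm = max(affordable,
                      key=lambda a: (sums[a] / counts[a]
                                     + math.sqrt(2.0 * math.log(t) / counts[a]))
                                    / costs[a])
        r = pull(arm, rng)
        counts[arm] += 1
        sums[arm] += r
        spent += costs[arm]
    return counts, spent

# Two Bernoulli arms with hypothetical means and unit costs; with
# equal costs this reduces to plain UCB1 under a fixed pull budget.
means = [0.2, 0.8]
counts, spent = budget_ucb(
    lambda a, rng: 1.0 if rng.random() < means[a] else 0.0,
    costs=[1.0, 1.0], budget=200.0)
```

With unequal costs the normalization matters: a cheap arm with a modest mean can dominate an expensive arm with a higher mean, which is exactly the tradeoff BL-MAB algorithms must balance.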
RCS, SR, SRW (Joint) |
2015-03-06 10:50 |
Tokyo |
Tokyo Institute of Technology |
Secure Channel Selection based on UCB Algorithm with Secrecy Capacity Masahiro Endo, Tomoaki Ohtsuki (Keio Univ.), Takeo Fujii (Univ. of Electro-Comm.), Osamu Takyu (Shinshu Univ.) RCS2014-356 |
Recently, some papers that apply a multi-armed bandit algorithm to channel selection in a cognitive radio system ha...
RCS2014-356 pp.327-332 |
COMP |
2013-09-03 10:45 |
Tottori |
|
A New UCB-based Algorithm for the Matching-Selection Multi-armed Bandit Problem Ryo Watanabe, Atsuyoshi Nakamura, Mineichi Kudo (Hokkaido Univ.) COMP2013-26 |
Permutation bandit problem is a kind of combinatorial multi-armed bandit problem in which an $M$-permutation of $N$ elem...
COMP2013-26 pp.9-16 |
IBISML |
2012-06-20 11:00 |
Kyoto |
Campus plaza Kyoto |
On a Non-asymptotic Analysis Using Large Deviation Principles in the Multiarmed Bandit Problem Junya Honda, Akimichi Takemura (Univ. of Tokyo) IBISML2012-10 |
In reinforcement learning, a tradeoff between exploration and exploitation is considered. Multiarmed bandit problems for...
IBISML2012-10 pp.65-72 |