The 2018 International Symposium on Information Theory and Its Applications (ISITA2018)

Session Number: Mo-AM-1-2

Paper Number: Mo-AM-1-2.3

Introduction to Bandit Convex Optimization Algorithms

Wataru Kumagai

pp. 36-39

Publication Date: 2018/10/18

Online ISSN: 2188-5079

DOI: 10.34385/proc.55.Mo-AM-1-2.3

Summary:
In the theory of convex optimization, derivatives of the objective function, such as the gradient and the Hessian, are typically used to search for the optimum point. In real-world applications, however, the concrete form of the function may be unknown, or computing the derivative may be difficult, so the derivative of the objective function is unavailable. To address this problem, optimization methods that do not require derivatives have been developed in recent years; in the machine learning community, such methods are called bandit optimization methods. In this paper, we introduce basic bandit optimization algorithms and explain their performance.
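To illustrate the idea, the following is a minimal sketch (not taken from the paper) of derivative-free gradient descent in Python. It uses a standard two-point zeroth-order gradient estimator: the gradient is recovered from two function evaluations along a random direction, so no derivative is ever computed. The function name and parameter values here are illustrative choices.

import numpy as np

def zeroth_order_gd(f, x0, T=2000, delta=1e-3, eta=0.01, seed=0):
    # Gradient descent in which the gradient is estimated from function
    # values alone (zeroth-order feedback); no derivatives are used.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(T):
        # Draw a uniformly random direction on the unit sphere.
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Two-point estimate: a directional finite difference along u,
        # scaled by d so that its expectation approximates grad f(x).
        g = (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
        # Ordinary gradient-descent step with the estimated gradient.
        x = x - eta * g
    return x

# Example: minimize a simple quadratic using function evaluations only.
f = lambda x: float(np.sum((x - 1.0) ** 2))
print(zeroth_order_gd(f, x0=np.zeros(5)))  # approaches the minimizer (1, ..., 1)

In the strict bandit setting, where only one function value is observed per round, the same update can instead be driven by the one-point estimator g = (d / delta) * f(x + delta * u) * u of Flaxman, Kalai, and McMahan (2005), at the cost of much higher variance and more careful step-size tuning.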