More Adaptive Algorithms for Adversarial Bandits
Authors: Chen-Yu Wei, Haipeng Luo
Institute: University of Southern California
Abstract
We develop a novel and generic algorithm for the adversarial multi-armed bandit problem (or, more generally, the combinatorial semi-bandit problem). When instantiated differently, our algorithm achieves various new data-dependent regret bounds improving on previous work. Examples include:
- a regret bound depending on the variance of only the best arm;
- a regret bound depending on the first-order path-length of only the best arm;
- a regret bound depending on the sum of the first-order path-lengths of all arms as well as an important negative term, which together lead to faster convergence rates for some normal form games with partial feedback;
- a regret bound that simultaneously implies small regret when the best arm has small loss and logarithmic regret when there exists an arm whose expected loss is always smaller than those of other arms by a fixed gap (e.g. the classic i.i.d. setting).
In some cases, such as the last two results, our algorithm is completely parameter-free.
The main idea of our algorithm is to apply the optimism and adaptivity techniques to the well-known Online Mirror Descent framework with a special log-barrier regularizer. The challenges are to come up with appropriate optimistic predictions and correction terms in this framework. Some of our results also crucially rely on using a sophisticated increasing learning rate schedule.
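To make the framework concrete, the following is a minimal schematic sketch of one round of optimistic Online Mirror Descent with a log-barrier regularizer; the notation here (the optimistic prediction $m_t$, loss estimator $\hat{\ell}_t$, correction term $a_t$, and per-arm learning rates $\eta_{t,i}$) is illustrative and not fixed by the abstract itself.

```latex
% A minimal sketch (illustrative notation): one round of optimistic online
% mirror descent over the decision set \Omega, with the log-barrier
% regularizer \psi_t using individual learning rates \eta_{t,i}.
% m_t is an optimistic prediction of the coming loss, \hat{\ell}_t is the
% loss estimator built from bandit feedback, and a_t is a correction term.
% Requires the amsmath package.
\[
  \psi_t(w) \;=\; \sum_{i=1}^{K} \frac{1}{\eta_{t,i}} \ln \frac{1}{w_i},
  \qquad
  D_{\psi_t}(w, u) \;=\; \psi_t(w) - \psi_t(u)
                         - \langle \nabla \psi_t(u),\, w - u \rangle,
\]
\begin{align*}
  w_t      &= \operatorname*{argmin}_{w \in \Omega}\;
              \langle w,\, m_t \rangle + D_{\psi_t}(w,\, w_t'), \\
  w_{t+1}' &= \operatorname*{argmin}_{w \in \Omega}\;
              \langle w,\, \hat{\ell}_t + a_t \rangle + D_{\psi_t}(w,\, w_t').
\end{align*}
```

Under this reading, the different data-dependent bounds listed above would arise from different choices of the prediction $m_t$, the correction term $a_t$, and the learning-rate schedule $\eta_{t,i}$.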
Keywords: multi-armed bandit, semi-bandit, adaptive regret bounds, optimistic online mirror descent, increasing learning rate