bandit-variants 1
Author: 三余寻真 | Published 2015-01-11 20:50

There are many variants of the bandit problem. At NIPS 2014 alone, eight papers were on bandits, which I take as a good sign of how important the bandit problem, and online problems in general, have become for researchers. Each of the eight focuses on a different variant.

Only one of them is on the theory of bandit convex optimization, a special case of online convex optimization. It is by Elad Hazan, who published two papers at NIPS 2014 and has since joined Princeton; he is writing a book on online convex optimization and serves on the COLT committee. Nowadays, however, researchers mostly study new variants and rarely the theoretical analysis itself, because it is rather hard to develop new analysis techniques. They usually learn the basic techniques from Auer, one of which I mentioned in the last article.
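
For concreteness, here is a rough sketch of the setting in my own notation, not taken from the paper: in online convex optimization the learner sees the whole loss function each round, while in the bandit version it only sees the loss value at the point it played.

```latex
% Online convex optimization: at each round t the learner picks x_t from a
% convex set K, the adversary picks a convex loss f_t, and the learner then
% observes the entire function f_t.  Regret after T rounds:
R_T = \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x)
% Bandit convex optimization: the regret is the same, but the only feedback
% at round t is the single scalar f_t(x_t), not the function f_t itself.
```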

Two of them change the objective, that is, the definition of regret. One focuses on the extreme reward rather than the cumulative reward: the quantity of interest is the maximum reward obtained so far, compared with the maximum reward a single arm would have produced when pulled the same number of times. This variant is useful for detecting network intrusion, which is determined by the heaviest load in the whole network. The other focuses on non-stationary rewards. As I explained before for stochastic bandits, the reward of each arm is drawn from a fixed distribution; this paper instead assumes the reward distribution changes a little over time, allowing more freedom than the standard stochastic bandit problem.
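
A rough sketch of the two objectives in my own notation (not the exact definitions from the papers), with $\mu_i$ the mean of arm $i$, $r_t$ the reward received at round $t$, and $X_{i,t}$ i.i.d. draws from arm $i$:

```latex
% Standard (cumulative) pseudo-regret over T rounds:
R_T = T \max_i \mu_i \;-\; \mathbb{E}\Big[\sum_{t=1}^{T} r_t\Big]
% Extremal regret: compare the best reward a single arm would have produced
% in T pulls against the best reward actually received:
R_T^{\max} = \max_i \mathbb{E}\Big[\max_{1 \le t \le T} X_{i,t}\Big]
           \;-\; \mathbb{E}\Big[\max_{1 \le t \le T} r_t\Big]
% Non-stationary variant: the mean of arm i may drift slowly over time,
% e.g. |\mu_{i,t+1} - \mu_{i,t}| \le \varepsilon_t for a small budget \varepsilon_t.
```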

Two of them change the feedback process, that is, how rewards are observed from the environment. One gives more observations, while the other withholds some. The first reveals more than the reward of the pulled arm: the arms are connected by a graph, and whenever there is an edge from the pulled arm to another arm, the reward of that arm is observed as well. The second gives no reward information in a round where you switch arms, that is, when the arm pulled this round differs from the one pulled last round. In the original setting we observe only one reward per round and must still infer how to act next from that limited information, which is already hard. So when the feedback is varied, it is usually enriched rather than restricted; the second paper is unusual in placing an extra limit on already limited information.
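
To make the graph-feedback idea concrete, here is a minimal simulation sketch; the arm means, the graph, and all names below are hypothetical, not taken from the paper. Pulling an arm also reveals the rewards of its out-neighbours.

```python
import random

# Hypothetical example: 4 Bernoulli arms and a feedback graph.
# graph[i] lists the arms whose rewards are also observed when arm i is
# pulled (side observations), in addition to arm i itself.
means = [0.2, 0.5, 0.7, 0.4]
graph = {0: [1], 1: [0, 2], 2: [3], 3: []}

def pull_with_side_observations(arm):
    """Return a dict {arm: reward} for every arm observed this round."""
    observed = {arm: int(random.random() < means[arm])}
    for neighbour in graph[arm]:
        observed[neighbour] = int(random.random() < means[neighbour])
    return observed

if __name__ == "__main__":
    random.seed(0)
    # One round of play: pulling arm 1 also reveals the rewards of arms 0 and 2.
    print(pull_with_side_observations(1))
```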

Usually we assume the arms are independent, but one paper studies structured bandits, where the reward of one arm depends on the other arms. In the original stochastic bandit we care about the actual mean of each arm's reward; in this paper the means share a common parameter, which encodes the dependence. The dependence is not obvious, but the formulation is very general.
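
A toy illustration of the shared-parameter idea; the mean functions below are my own hypothetical choices, not the paper's model. Every arm's mean is a known function of one unknown parameter, so a pull of any arm carries information about all of them.

```python
import random

# Hypothetical structured bandit: an unknown parameter theta determines the
# mean reward of every arm through known functions mu_i(theta).
THETA_TRUE = 0.8  # unknown to the learner

mean_functions = [
    lambda theta: theta,        # arm 0
    lambda theta: 1.0 - theta,  # arm 1
    lambda theta: theta ** 2,   # arm 2
]

def pull(arm):
    """Gaussian reward centred on the arm's mean under the true parameter."""
    return random.gauss(mean_functions[arm](THETA_TRUE), 0.1)

def estimate_theta(samples):
    """Least-squares estimate of theta over a grid from (arm, reward) pairs.
    Because all arms share theta, samples from any arm sharpen the estimate."""
    grid = [i / 1000 for i in range(1001)]
    def loss(theta):
        return sum((r - mean_functions[a](theta)) ** 2 for a, r in samples)
    return min(grid, key=loss)

if __name__ == "__main__":
    random.seed(0)
    samples = [(a, pull(a)) for a in (0, 1, 2) for _ in range(20)]
    print("estimated theta:", estimate_theta(samples))
```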

One of them is on linear bandits, a common setting that I have discussed only briefly, and it studies best-arm identification rather than cumulative regret. In bandits there are three popular objectives: minimizing cumulative regret, identifying the best arm within a fixed number of pulls, and identifying the best arm in as few pulls as possible. The three are of equal importance and each has a large body of papers. The last of the eight papers is on identifying the best super-arm in combinatorial bandits. It is both strange and reasonable that only one paper addresses the combinatorial topic: past combinatorial papers have mainly generalized old techniques rather than developed new ones, and it is difficult enough to analyze the original bandit problem, let alone the combinatorial one.
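
As a concrete instance of the fixed-budget objective, here is a minimal sketch of best-arm identification by uniform allocation; the arm means and budget are made up for illustration, and real algorithms (e.g. successive elimination) allocate pulls adaptively.

```python
import random

# Hypothetical fixed-budget best-arm identification: spread the pull budget
# evenly over the arms, then recommend the arm with the highest empirical mean.
means = [0.3, 0.5, 0.55, 0.2]  # unknown to the learner
BUDGET = 400                    # total number of pulls allowed

def pull(arm):
    return int(random.random() < means[arm])

def best_arm_fixed_budget(n_arms, budget):
    per_arm = budget // n_arms
    empirical = []
    for arm in range(n_arms):
        rewards = [pull(arm) for _ in range(per_arm)]
        empirical.append(sum(rewards) / per_arm)
    return max(range(n_arms), key=lambda a: empirical[a])

if __name__ == "__main__":
    random.seed(0)
    print("recommended arm:", best_arm_fixed_budget(len(means), BUDGET))
```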

To further research on this topic, one can find a reasonable new variant and develop an efficient algorithm for it, which is the easiest approach, or one can develop new techniques for online convex analysis, which is the hardest.
