
Concrete dropout

Author: 朱小虎XiaohuZhu | Published 2018-01-14 17:34 · read 119 times

    A dropout variant based on Bayesian learning.

    Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one with RL.

    We propose a new dropout variant which gives improved performance and better-calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks.
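    As a rough illustration of this continuous relaxation, the sketch below draws a soft dropout mask by passing uniform noise through a sigmoid, so the mask stays differentiable with respect to the dropout probability. The function name, temperature value, and other defaults are illustrative assumptions, not the paper's reference code.

        import numpy as np

        def concrete_dropout_mask(p, shape, temperature=0.1, eps=1e-7, rng=None):
            """Continuous (Concrete) relaxation of a Bernoulli dropout mask.

            Instead of sampling hard 0/1 masks, draw uniform noise and pass it
            through a sigmoid; the result is differentiable w.r.t. the dropout
            probability p. `temperature` controls how close the relaxed mask
            is to a discrete one (values here are illustrative).
            """
            rng = np.random.default_rng() if rng is None else rng
            u = rng.uniform(size=shape)                        # uniform noise
            logits = (np.log(p + eps) - np.log(1.0 - p + eps)
                      + np.log(u + eps) - np.log(1.0 - u + eps))
            z = 1.0 / (1.0 + np.exp(-logits / temperature))    # relaxed "drop" variable in (0, 1)
            return 1.0 - z                                     # keep-mask: near 0 where units are dropped

        # Usage: multiply activations by the relaxed mask, then rescale as in standard dropout.
        x = np.random.randn(4, 16)
        p = 0.2                                                # dropout probability, now tunable by gradient descent
        x_dropped = x * concrete_dropout_mask(p, x.shape) / (1.0 - p)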

    Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles.
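    The "principled optimisation objective" can be pictured as the data-fit loss plus a per-layer regularisation term that couples the weights and the dropout probability; a minimal sketch under that assumption is given below, with coefficient names and values chosen only for illustration.

        import numpy as np

        def concrete_dropout_regulariser(weights, p, input_dim,
                                         weight_reg=1e-6, dropout_reg=1e-5):
            """Per-layer regularisation term added to the data-fit loss.

            A sketch of the kind of objective that lets p be learned by gradient
            descent: an L2 term on the weights scaled by 1/(1 - p), plus a
            (negative) Bernoulli entropy term on the drop probability.
            Coefficient names and magnitudes are illustrative placeholders.
            """
            l2_term = weight_reg * np.sum(weights ** 2) / (1.0 - p)
            entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))   # Bernoulli entropy H(p)
            return l2_term - dropout_reg * input_dim * entropy

        # The total training objective would be the data-fit loss plus the sum of this
        # term over layers, with p parameterised (e.g. as a sigmoid of a free variable)
        # so it stays in (0, 1) and can be updated by the same optimiser as the weights.
        W = 0.1 * np.random.randn(16, 32)
        reg = concrete_dropout_regulariser(W, p=0.2, input_dim=16)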

    In RL this allows the agent to adapt its uncertainty dynamically as more data is observed.

    We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.
