Deep Reinforcement Learning from Self-Play in Imperfect-Information Games
Author: 朱小虎XiaohuZhu | Published 2016-04-01 00:45

    作者:Johannes Heinrich J.HEINRICH@CS.UCL.AC.UK
    David Silver D.SILVER@CS.UCL.AC.UK
    University College London, UK
    Abstract:
    Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without any prior knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a competitive strategy that approached the performance of human experts and state-of-the-art methods.
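The abstract's core idea — combining fictitious self-play with reinforcement learning — can be illustrated with a heavily simplified sketch. The paper itself uses neural networks with replay memories; the tabular stand-ins below (a Q-table for the best-response network, action counts for the average-policy network) and the parameter names `eta`, `alpha`, `epsilon` are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

class NFSPAgent:
    """Minimal tabular sketch of the NFSP agent structure.

    Assumption-laden simplification: a Q-table replaces the paper's
    deep RL (best-response) network, and per-state action counts
    replace its supervised average-policy network.
    """

    def __init__(self, actions, eta=0.1, alpha=0.1, epsilon=0.1):
        self.actions = list(actions)
        self.eta = eta          # "anticipatory" mixing probability
        self.alpha = alpha      # Q-learning step size
        self.epsilon = epsilon  # exploration rate of the RL policy
        self.q = defaultdict(lambda: {a: 0.0 for a in self.actions})
        self.avg_counts = defaultdict(lambda: {a: 0 for a in self.actions})

    def best_response_action(self, state):
        # Epsilon-greedy action from the (tabular) best-response policy.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        qs = self.q[state]
        return max(qs, key=qs.get)

    def average_policy_action(self, state):
        # Sample from the empirical average of past best-response actions.
        counts = self.avg_counts[state]
        total = sum(counts.values())
        if total == 0:
            return random.choice(self.actions)
        r = random.random() * total
        for a, c in counts.items():
            r -= c
            if r <= 0:
                return a
        return self.actions[-1]

    def act(self, state):
        # With probability eta play the best response and record the
        # choice as supervised data for the average policy; otherwise
        # play the average policy itself.
        if random.random() < self.eta:
            a = self.best_response_action(state)
            self.avg_counts[state][a] += 1
            return a
        return self.average_policy_action(state)

    def update_q(self, state, action, reward, next_state, done):
        # Standard one-step Q-learning backup for the best response.
        target = reward if done else reward + max(self.q[next_state].values())
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

The key structural point this sketch preserves is the two-policy design: the agent learns a best response to opponents with reinforcement learning while simultaneously fitting an average of its own historical best responses, and the mixture of the two is what converges toward an approximate Nash equilibrium in self-play.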

Original link: https://www.haomeiwen.com/subject/lkfylttx.html