LEARNING TO UNDERSTAND GOAL SPEC

Author: 朱小虎XiaohuZhu | Published 2018-10-10 11:50

    LEARNING TO UNDERSTAND GOAL SPECIFICATIONS
    BY MODELLING REWARD
    Dzmitry Bahdanau∗ (MILA, University of Montreal, Montreal, Canada) dimabgv@gmail.com
    Felix Hill (DeepMind) felixhill@google.com
    Jan Leike (DeepMind) leike@google.com
    Edward Hughes (DeepMind) edwardhughes@google.com
    Pushmeet Kohli (DeepMind) pushmeet@google.com
    Edward Grefenstette (DeepMind) etg@google.com
    ABSTRACT
    Recent work has shown that deep reinforcement-learning agents can learn to follow
    language-like instructions from infrequent environment rewards. However, this
    places the onus on environment designers to implement language-conditional reward
    functions, which may not be easy or even tractable as the complexity of the
    environment and the language grows. To overcome this limitation, we present a
    framework in which instruction-conditional RL agents are trained using rewards
    obtained not from the environment, but from reward models that are jointly trained
    from expert examples. As the reward models improve, they learn to accurately
    reward agents for completing tasks in environment configurations, and for
    instructions, not present in the expert data. This framework effectively separates
    the representation of what instructions require from how they can be executed. In
    a simple grid world, it enables an agent to learn a range of commands requiring
    interaction with blocks and an understanding of spatial relations and
    underspecified abstract arrangements. We further show that the method allows our
    agent to adapt to changes in the environment without requiring new expert examples.
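As a toy illustration of the idea the abstract describes, the sketch below trains a reward model to discriminate expert (instruction, goal-state) pairs from other states, then lets an agent follow that model's reward. Everything here is an assumption for illustration, not the paper's implementation: a 1-D grid environment, a logistic-regression reward model, and a greedy policy standing in for the RL agent.

```python
import math

# Reward modelling in miniature: instead of a hand-coded language-conditional
# reward, a classifier is trained to score (instruction, state) pairs, and the
# agent is rewarded by that classifier. The environment is a 1-D grid of cells
# 0..4; an "instruction" names a target cell.
GRID = 5

def features(instruction, state):
    # Bias plus the negated distance to the instructed cell.
    return [1.0, -abs(instruction - state)]

w = [0.0, 0.0]  # reward-model parameters (logistic regression)

def reward_model(instruction, state):
    z = sum(wi * xi for wi, xi in zip(w, features(instruction, state)))
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(expert_pairs, agent_pairs, lr=0.5, epochs=300):
    # Push expert pairs toward reward 1 and agent-visited pairs toward 0.
    for _ in range(epochs):
        for g, s in expert_pairs:
            err = 1.0 - reward_model(g, s)
            for i, xi in enumerate(features(g, s)):
                w[i] += lr * err * xi
        for g, s in agent_pairs:
            err = 0.0 - reward_model(g, s)
            for i, xi in enumerate(features(g, s)):
                w[i] += lr * err * xi

# Expert data covers only instructions 0..2; the goal state for
# instruction g is simply standing on cell g. Non-goal states the
# agent might visit serve as negatives.
expert_pairs = [(g, g) for g in range(3)]
agent_pairs = [(g, s) for g in range(3) for s in range(GRID) if s != g]
train_reward_model(expert_pairs, agent_pairs)

def agent_step(instruction, state):
    # Stand-in for the RL policy: greedily move to whichever neighbouring
    # cell the learned reward model scores highest.
    candidates = [max(state - 1, 0), state, min(state + 1, GRID - 1)]
    return max(candidates, key=lambda s: reward_model(instruction, s))

# Because the model scores distance features rather than memorising pairs,
# it also rewards instruction 4, which never appears in the expert data.
state = 0
for _ in range(GRID):
    state = agent_step(instruction=4, state=state)
print(state)  # 4
```

The featurized reward model is what lets it generalise to held-out instructions and configurations, mirroring the abstract's point about separating what instructions require from how they are executed.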


          Article link: https://www.haomeiwen.com/subject/vfdpaftx.html