
Author: 朱小虎XiaohuZhu | Published 2019-02-15 00:47

    A Lyapunov-based Approach to Safe Reinforcement Learning

    Yinlam Chow, Ofir Nachum, Mohammad Ghavamzadeh and Edgar Duenez-Guzman

    Abstract

    In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints.

    In particular, besides optimizing performance, it is crucial to guarantee the safety of an agent during training as well as deployment (e.g., a robot should avoid taking actions - exploratory or not - which irrevocably harm its hardware).

    To incorporate safety in RL, we derive algorithms under the framework of constrained Markov decision processes (CMDPs), an extension of the standard Markov decision processes (MDPs) augmented with constraints on expected cumulative costs. Our approach hinges on a novel Lyapunov method.
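In CMDP terms, the agent maximizes expected return subject to a bound on expected cumulative constraint cost. A standard formulation (the symbols r, d, d_0, and the discount γ are conventional notation, not taken from this abstract) is:

```latex
\max_{\pi}\ \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \,\middle|\, \pi\right]
\quad \text{s.t.} \quad
\mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, d(s_t, a_t) \,\middle|\, \pi\right] \le d_0 ,
```

where r is the reward, d is an immediate constraint cost, and d_0 is the safety budget.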

    We define and present a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local linear constraints.
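The "local linear constraints" idea can be sketched as follows: given a candidate Lyapunov function L over states, a policy is admissible at state s if its expected one-step constraint cost plus expected next-state Lyapunov value does not exceed L(s); this inequality is linear in the policy probabilities. A minimal sketch, with all function names and the toy MDP numbers being hypothetical illustrations rather than anything from the paper:

```python
def lyapunov_constraint_holds(s, pi, d, P, L, tol=1e-9):
    """Check the local linear constraint at state s:
    sum_a pi[s][a] * (d[s][a] + sum_s' P[s][a][s'] * L[s']) <= L[s].
    The left-hand side is linear in the policy probabilities pi[s][a]."""
    lhs = sum(
        pi[s][a] * (d[s][a] + sum(P[s][a][s2] * L[s2] for s2 in range(len(L))))
        for a in range(len(pi[s]))
    )
    return lhs <= L[s] + tol

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = [  # P[s][a][s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[1.0, 0.0], [0.5, 0.5]],
]
d = [[0.0, 1.0], [0.0, 0.5]]   # immediate constraint costs d[s][a]
L = [2.0, 2.0]                  # candidate Lyapunov values L[s]
pi = [[1.0, 0.0], [1.0, 0.0]]   # policy probabilities pi[s][a]

print(lyapunov_constraint_holds(0, pi, d, P, L))  # True: this policy is locally safe at s=0
```

Because each per-state check is a linear inequality in pi[s][·], enforcing safety reduces to intersecting the policy space with a set of half-spaces, which is what makes the approach tractable inside DP and RL updates.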

    Leveraging these theoretical underpinnings, we show how to use the Lyapunov approach to systematically transform dynamic programming (DP) and RL algorithms into their safe counterparts. To illustrate their effectiveness, we evaluate these algorithms in several CMDP planning and decision-making tasks on a safety benchmark domain.
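As a rough sketch of how a DP-style update can be restricted to the Lyapunov-induced safe set: instead of acting greedily over all actions, the update acts greedily only over actions whose one-step Lyapunov inequality holds. This greedy-over-safe-actions rule is a simplification for illustration; the names and toy numbers below are hypothetical, not the paper's algorithm:

```python
def safe_greedy_action(s, Q, d, P, L, tol=1e-9):
    """Among actions whose one-step Lyapunov constraint holds at state s,
    return the one with the highest value Q[s][a]; return None if no
    action at s is safe under L (a simplified selection rule)."""
    safe = [
        a for a in range(len(Q[s]))
        if d[s][a] + sum(P[s][a][s2] * L[s2] for s2 in range(len(L))) <= L[s] + tol
    ]
    if not safe:
        return None
    return max(safe, key=lambda a: Q[s][a])

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = [  # P[s][a][s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[1.0, 0.0], [0.5, 0.5]],
]
d = [[0.0, 1.0], [0.0, 0.5]]   # immediate constraint costs d[s][a]
L = [2.0, 2.0]                  # candidate Lyapunov values L[s]
Q = [[0.5, 2.0], [1.0, 3.0]]    # reward value estimates Q[s][a]

# Unconstrained greedy would pick action 1 at s=0 (Q=2.0), but its
# expected constraint cost violates the Lyapunov bound, so the safe
# update falls back to action 0.
print(safe_greedy_action(0, Q, d, P, L))  # 0
```

Repeating this restricted improvement step inside policy iteration is the sense in which a standard DP algorithm is "transformed into its safe counterpart": the structure of the algorithm is unchanged, only the action set at each state shrinks.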

    Our results show that our proposed method significantly outperforms existing baselines in balancing constraint satisfaction and performance.


Original link: https://www.haomeiwen.com/subject/tzuueqtx.html