Deep Reinforcement Learning: Implementing TRPO and PPO

Author: Daisy丶 | Published 2019-03-29 13:40

    PPO is a stochastic-policy DRL algorithm proposed by OpenAI in 2017. It achieves strong performance (especially on continuous-control problems) while being considerably easier to implement than the earlier TRPO. PPO is currently OpenAI's default algorithm and is regarded as the best realization of policy-based methods.

    The PPO implementation in this post follows Morvan's (莫烦) TensorFlow implementation, because the same training procedure failed to converge when implemented with Keras, and the cause has not yet been found.

    Papers:
    TRPO: Trust Region Policy Optimization
    PPO: Proximal Policy Optimization Algorithms

    OpenAI PPO blog: Proximal Policy Optimization

    GitHub: https://github.com/xiaochus/Deep-Reinforcement-Learning-Practice

    Environment

    • Python 3.6
    • Tensorflow-gpu 1.8.0
    • Keras 2.2.2
    • Gym 0.10.8

    TRPO

    Policy-gradient algorithms such as PG and DDPG have achieved good results in both discrete and continuous action spaces. This family of algorithms updates the parameters with a gradient step of the following form:

    $$\theta_{new} = \theta_{old} + \alpha \nabla_{\theta} J(\theta)$$

    Getting good results with policy-gradient methods is difficult, because they are very sensitive to the number of iteration steps (the step size): if it is chosen too small, training is hopelessly slow; if it is chosen too large, the learning signal is drowned in noise and performance may even collapse catastrophically. These methods also tend to have poor sample efficiency, requiring millions or even billions of total steps to learn simple tasks.

    A suitable step size is one for which the return does not get worse after the policy is updated. How should this step size be chosen? In other words, how do we find a new policy whose return is monotonically increasing, or at least non-decreasing? The core of TRPO is solving exactly this problem of choosing the learning rate (or step size).

    TRPO decomposes the return of the new policy into the return of the old policy plus an additional term. As long as this additional term is non-negative, the new policy is guaranteed not to decrease the return:


    $$\eta(\tilde{\pi}) = \eta(\pi) + \mathbb{E}_{\tau \sim \tilde{\pi}}\left[\sum_{t=0}^{\infty} \gamma^{t} A_{\pi}(s_t, a_t)\right]$$

    For the detailed TRPO theory and derivations, refer to the TRPO article referenced above (it is very well written); below we use its conclusions directly.

    The return above can be expanded into the following form:

    $$\eta(\tilde{\pi}) = \eta(\pi) + \sum_{s} \rho_{\tilde{\pi}}(s) \sum_{a} \tilde{\pi}(a|s)\, A_{\pi}(s, a)$$

    The TRPO problem is:


    $$\max_{\theta}\; L_{\theta_{old}}(\theta) \quad \text{s.t.} \quad D_{KL}^{\max}(\theta_{old}, \theta) \le \delta$$

    The TRPO problem finally simplifies to:


    $$\max_{\theta}\; \mathbb{E}_{s,a \sim \pi_{\theta_{old}}}\!\left[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{old}}(a|s)}\, A_{\theta_{old}}(s, a)\right] \quad \text{s.t.} \quad \mathbb{E}_{s}\!\left[D_{KL}\big(\pi_{\theta_{old}}(\cdot|s)\,\|\,\pi_{\theta}(\cdot|s)\big)\right] \le \delta$$

    PPO

    PPO is a policy algorithm built on the Actor-Critic architecture and can be regarded as an improved version of TRPO.


    PPO1

    The corresponding policy update objective for PPO1 is:


    $$L^{KLPEN}(\theta) = \mathbb{E}_t\!\left[\frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\, A_t - \beta\, D_{KL}\big(\pi_{\theta_{old}}(\cdot|s_t)\,\|\,\pi_{\theta}(\cdot|s_t)\big)\right]$$

    In TRPO we want θ and θ' not to differ too much. This does not mean that the parameter values themselves cannot differ by much; it means that, for the same state, the action distributions produced by the two networks must not differ too much. The KL divergence is used to measure how similar the two action distributions are. This policy loss is implemented as follows:

      self.tflam = tf.placeholder(tf.float32, None, 'lambda')
      kl = tf.distributions.kl_divergence(old_nd, nd)
      self.kl_mean = tf.reduce_mean(kl)
      self.aloss = -(tf.reduce_mean(surr - self.tflam * kl))
    
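    As an aside, since both the new and old actor output Gaussian action distributions, the KL term computed by tf.distributions.kl_divergence has a simple closed form. The following standalone NumPy snippet is a purely illustrative sketch (not part of the original implementation) of what this term measures for univariate Gaussians:

    import numpy as np

    def kl_gauss(mu1, sigma1, mu2, sigma2):
        """KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) for univariate Gaussians."""
        return (np.log(sigma2 / sigma1)
                + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2)
                - 0.5)

    # Similar action distributions give a small KL, dissimilar ones a large KL.
    print(kl_gauss(0.0, 1.0, 0.1, 1.0))  # ~0.005
    print(kl_gauss(0.0, 1.0, 2.0, 0.5))  # ~8.8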

    The idea behind PPO1 is simple. TRPO argued that the penalty formulation involves a hyperparameter β that is hard to determine, and therefore chose a constraint instead of a penalty. PPO1 instead avoids choosing this hyperparameter by adapting β automatically with the following rule:

    $$\beta \leftarrow \beta / 2 \;\text{ if }\; D_{KL} < d_{targ}/1.5, \qquad \beta \leftarrow \beta \times 2 \;\text{ if }\; D_{KL} > d_{targ} \times 1.5$$

    After each round of training, β is adjusted once according to:

      if kl < self.kl_target / 1.5:
           self.lam /= 2
      elif kl > self.kl_target * 1.5:
           self.lam *= 2
    

    PPO2

    In addition, the original paper proposes another way to limit the update step, usually called PPO2. The paper reports that PPO2 works better than PPO1, so when people say PPO they usually mean PPO2. The idea behind PPO2 is also simple, and it starts from an observation about the surrogate objective.

    First, define the probability ratio:

    $$r_t(\theta) = \frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}$$

    The corresponding policy update objective is:


    $$L^{CLIP}(\theta) = \mathbb{E}_t\!\left[\min\big(r_t(\theta)\, A_t,\; \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, A_t\big)\right]$$

    This keeps the distributions of two consecutive updates close and prevents θ from being updated too fast:

    self.aloss = -tf.reduce_mean(tf.minimum(
                        surr,
                        tf.clip_by_value(ratio, 1.- self.epsilon, 1.+ self.epsilon) * self.adv))
    
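    To see what the clipping does, the following standalone NumPy sketch (illustrative only, with a hypothetical epsilon of 0.2) evaluates the PPO2 per-sample objective: when the advantage is positive, the objective stops increasing once the ratio exceeds 1 + ε; when it is negative, it stops increasing once the ratio falls below 1 - ε, so there is no incentive to push the new policy further away from the old one.

    import numpy as np

    epsilon = 0.2

    def clipped_objective(ratio, adv):
        """PPO2 per-sample objective: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
        return np.minimum(ratio * adv,
                          np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * adv)

    ratios = np.array([0.5, 0.9, 1.0, 1.1, 1.5])
    print(clipped_objective(ratios, adv=1.0))   # [0.5 0.9 1.  1.1 1.2]   -> gain capped at 1 + eps
    print(clipped_objective(ratios, adv=-1.0))  # [-0.8 -0.9 -1. -1.1 -1.5] -> gain capped at -(1 - eps)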

    PS:
    For both the PPO1 and PPO2 policy losses, the final expression is negated. The loss we define here is exactly the additional term in the TRPO decomposition of the new policy's return; we showed above that this term needs to be greater than zero, so our optimization goal is to maximize it. Since gradient-descent methods minimize a loss, we add the minus sign: minimizing the negative of this term is equivalent to maximizing it.

    Implementation

    We use Pendulum to experiment with continuous action prediction; the PPO implementation is shown below.

    For continuous actions, DDPG regresses the action value directly, whereas PPO still uses a stochastic policy: the Actor network outputs a mean and a standard deviation, builds a normal distribution from them, and the action is sampled from that distribution.

    import os
    import gym
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    
    
    class PPO:
        def __init__(self, ep, batch, t='ppo2'):
            self.t = t
            self.ep = ep
            self.batch = batch
            self.log = 'model/{}_log'.format(t)
    
            self.env = gym.make('Pendulum-v0')
            self.bound = self.env.action_space.high[0]
    
            self.gamma = 0.9
            self.A_LR = 0.0001
            self.C_LR = 0.0002
            self.A_UPDATE_STEPS = 10
            self.C_UPDATE_STEPS = 10
    
            # KL penalty: kl_target (d_target) and lam (beta) for ppo1
            self.kl_target = 0.01
            self.lam = 0.5
            # ε for ppo2
            self.epsilon = 0.2
    
            self.sess = tf.Session()
            self.build_model()
    
        def _build_critic(self):
            """critic model.
            """
            with tf.variable_scope('critic'):
                x = tf.layers.dense(self.states, 100, tf.nn.relu)
    
                self.v = tf.layers.dense(x, 1)
                self.advantage = self.dr - self.v
    
        def _build_actor(self, name, trainable):
            """actor model.
            """
            with tf.variable_scope(name):
                x = tf.layers.dense(self.states, 100, tf.nn.relu, trainable=trainable)
    
                mu = self.bound * tf.layers.dense(x, 1, tf.nn.tanh, trainable=trainable)
                sigma = tf.layers.dense(x, 1, tf.nn.softplus, trainable=trainable)
    
                norm_dist = tf.distributions.Normal(loc=mu, scale=sigma)
    
            params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=name)
    
            return norm_dist, params
    
        def build_model(self):
            """build model with ppo loss.
            """
            # inputs
            self.states = tf.placeholder(tf.float32, [None, 3], 'states')
            self.action = tf.placeholder(tf.float32, [None, 1], 'action')
            self.adv = tf.placeholder(tf.float32, [None, 1], 'advantage')
            self.dr = tf.placeholder(tf.float32, [None, 1], 'discounted_r')
    
            # build model
            self._build_critic()
            nd, pi_params = self._build_actor('actor', trainable=True)
            old_nd, oldpi_params = self._build_actor('old_actor', trainable=False)
    
            # define ppo loss
            with tf.variable_scope('loss'):
                # critic loss
                self.closs = tf.reduce_mean(tf.square(self.advantage))
    
                # actor loss
                with tf.variable_scope('surrogate'):
                    ratio = tf.exp(nd.log_prob(self.action) - old_nd.log_prob(self.action))
                    surr = ratio * self.adv
    
                if self.t == 'ppo1':
                    self.tflam = tf.placeholder(tf.float32, None, 'lambda')
                    kl = tf.distributions.kl_divergence(old_nd, nd)
                    self.kl_mean = tf.reduce_mean(kl)
                    self.aloss = -(tf.reduce_mean(surr - self.tflam * kl))
                else: 
                    self.aloss = -tf.reduce_mean(tf.minimum(
                        surr,
                        tf.clip_by_value(ratio, 1.- self.epsilon, 1.+ self.epsilon) * self.adv))
    
            # define Optimizer
            with tf.variable_scope('optimize'):
                self.ctrain_op = tf.train.AdamOptimizer(self.C_LR).minimize(self.closs)
                self.atrain_op = tf.train.AdamOptimizer(self.A_LR).minimize(self.aloss)
    
            with tf.variable_scope('sample_action'):
                self.sample_op = tf.squeeze(nd.sample(1), axis=0)
    
            # update old actor
            with tf.variable_scope('update_old_actor'):
                self.update_old_actor = [oldp.assign(p) for p, oldp in zip(pi_params, oldpi_params)]
    
            tf.summary.FileWriter(self.log, self.sess.graph)
    
            self.sess.run(tf.global_variables_initializer())
    
        def choose_action(self, state):
            """choice continuous action from normal distributions.
    
            Arguments:
                state: state.
    
            Returns:
               action.
            """
            state = state[np.newaxis, :]
            action = self.sess.run(self.sample_op, {self.states: state})[0]
    
            return np.clip(action, -self.bound, self.bound)
    
        def get_value(self, state):
            """get q value.
    
            Arguments:
                state: state.
    
            Returns:
               state value.
            """
            if state.ndim < 2: state = state[np.newaxis, :]
    
            return self.sess.run(self.v, {self.states: state})
    
        def discount_reward(self, states, rewards, next_observation):
            """Compute target value.
    
            Arguments:
                states: state in episode.
                rewards: reward in episode.
                next_observation: state of last action.
    
            Returns:
                targets: q targets.
            """
            s = np.vstack([states, next_observation.reshape(-1, 3)])
            q_values = self.get_value(s).flatten()
    
            targets = rewards + self.gamma * q_values[1:]
            targets = targets.reshape(-1, 1)
    
            return targets
    
    # not work.
    #    def neglogp(self, mean, std, x):
    #        """Gaussian likelihood
    #        """
    #        return 0.5 * tf.reduce_sum(tf.square((x - mean) / std), axis=-1) \
    #               + 0.5 * np.log(2.0 * np.pi) * tf.to_float(tf.shape(x)[-1]) \
    #               + tf.reduce_sum(tf.log(std), axis=-1)
    
        def update(self, states, action, dr):
            """update model.
    
            Arguments:
                states: states.
                action: action of states.
                dr: discount reward of action.
            """
            self.sess.run(self.update_old_actor)
    
            adv = self.sess.run(self.advantage,
                                {self.states: states,
                                 self.dr: dr})
    
            # update actor
            if self.t == 'ppo1':
                # run ppo1 loss
                for _ in range(self.A_UPDATE_STEPS):
                    _, kl = self.sess.run(
                        [self.atrain_op, self.kl_mean],
                        {self.states: states,
                         self.action: action,
                         self.adv: adv,
                         self.tflam: self.lam})
    
                if kl < self.kl_target / 1.5:
                    self.lam /= 2
                elif kl > self.kl_target * 1.5:
                    self.lam *= 2
            else:
                # run ppo2 loss
                for _ in range(self.A_UPDATE_STEPS):
                    self.sess.run(self.atrain_op,
                                  {self.states: states,
                                   self.action: action,
                                   self.adv: adv})
    
            # update critic
            for _ in range(self.C_UPDATE_STEPS):
                self.sess.run(self.ctrain_op,
                              {self.states: states,
                               self.dr: dr})
    
        def train(self):
            """train method.
            """
            tf.reset_default_graph()
    
            history = {'episode': [], 'Episode_reward': []}
    
            for i in range(self.ep):
                observation = self.env.reset()
    
                states, actions, rewards = [], [], []
                episode_reward = 0
                j = 0
    
                while True:
                    a = self.choose_action(observation)
                    next_observation, reward, done, _ = self.env.step(a)
                    states.append(observation)
                    actions.append(a)
    
                    episode_reward += reward
                    # shift and scale Pendulum's reward (roughly [-16, 0]) to about [-1, 1]
                    rewards.append((reward + 8) / 8)
    
                    observation = next_observation
    
                    if (j + 1) % self.batch == 0:
                        states = np.array(states)
                        actions = np.array(actions)
                        rewards = np.array(rewards)
                        d_reward = self.discount_reward(states, rewards, next_observation)
    
                        self.update(states, actions, d_reward)
    
                        states, actions, rewards = [], [], []
    
                    if done:
                        break
                    j += 1
    
                history['episode'].append(i)
                history['Episode_reward'].append(episode_reward)
                print('Episode: {} | Episode reward: {:.2f}'.format(i, episode_reward))
    
            return history
    
        def save_history(self, history, name):
            # make sure the output directory exists before writing the csv
            os.makedirs('history', exist_ok=True)
            name = os.path.join('history', name)
    
            df = pd.DataFrame.from_dict(history)
            df.to_csv(name, index=False, encoding='utf-8')
    
    
    if __name__ == '__main__':
        model = PPO(1000, 32, 'ppo1')
        history = model.train()
        model.save_history(history, 'ppo1.csv')
    
    

    Results

    As the figure below shows, PPO converges successfully, and PPO2 converges faster than PPO1.

    (Figure: episode reward curves for PPO1 and PPO2 on Pendulum-v0)
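
    The reward curves can be reproduced from the CSV files written by save_history with a short plotting script. This is a minimal sketch; it assumes both variants have been trained and that history/ppo1.csv and history/ppo2.csv exist:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Plot the episode reward recorded during training for both PPO variants.
    for name in ['ppo1', 'ppo2']:
        df = pd.read_csv('history/{}.csv'.format(name))
        plt.plot(df['episode'], df['Episode_reward'], label=name)

    plt.xlabel('episode')
    plt.ylabel('episode reward')
    plt.legend()
    plt.show()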
