
From Q-Learning to the DDPG Algorithm: Formulas

Author: 天使的白骨_何清龙 | Published 2020-07-08 14:34

The original Q-learning loss function:

\mathbf L(\theta^Q)=\mathbb E_{s_t\sim \rho^\beta, a_t \sim \beta, r_t \sim E} \bigl[ \bigl( Q(s_t, a_t \vert \theta^Q) - y_t \bigr)^2 \bigr]
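
A minimal PyTorch sketch of this loss, assuming the standard target y_t = r(s_t, a_t) + \gamma Q(s_{t+1}, \mu(s_{t+1}) \vert \theta^Q) and hypothetical `critic` and `actor` networks evaluated on a sampled minibatch:

```python
import torch
import torch.nn.functional as F

# critic(s, a) -> Q(s, a | theta^Q); actor(s) -> mu(s | theta^mu): hypothetical networks.
# s, a, r, s_next: minibatch tensors collected under the behavior policy beta.
def q_learning_loss(critic, actor, s, a, r, s_next, gamma=0.99):
    with torch.no_grad():
        # y_t = r(s_t, a_t) + gamma * Q(s_{t+1}, mu(s_{t+1}) | theta^Q)
        y = r + gamma * critic(s_next, actor(s_next))
    # L(theta^Q) = E[(Q(s_t, a_t | theta^Q) - y_t)^2], estimated on the minibatch
    return F.mse_loss(critic(s, a), y)
```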

The Bellman equation for Q:

Q^\pi(s_t, a_t) = \mathbb E_{r_t, s_{t+1} \sim E} \Bigl[r(s_t,a_t) + \gamma\mathbb E_{a_{t+1} \sim \pi} \bigl[ Q^\pi (s_{t+1}, a_{t+1}) \bigr] \Bigr]
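
Written out for a finite MDP with transition probabilities p(s_{t+1} \vert s_t, a_t), and assuming a deterministic reward r(s_t, a_t) for simplicity, the two expectations become sums:

Q^\pi(s_t, a_t) = \sum_{s_{t+1}} p(s_{t+1} \vert s_t, a_t) \Bigl[ r(s_t, a_t) + \gamma \sum_{a_{t+1}} \pi(a_{t+1} \vert s_{t+1}) \, Q^\pi(s_{t+1}, a_{t+1}) \Bigr]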

Q under a deterministic policy:

Q^\mu(s_t, a_t)=\mathbb E_{r_t,s_{t+1} \sim E} \bigl[ r(s_t, a_t) + \gamma Q^\mu(s_{t+1}, \mu(s_{t+1})) \bigr]

  • Here the action a is given by \mu(s_{t+1}), with \mu(s)=\arg\max_a Q(s,a); the collapse of the inner Bellman expectation is spelled out below.
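
When the policy is deterministic, \pi = \mu, the inner expectation over a_{t+1} in the Bellman equation above reduces to a single point, which gives the definition of Q^\mu:

\mathbb E_{a_{t+1} \sim \pi} \bigl[ Q^\pi(s_{t+1}, a_{t+1}) \bigr] = Q^\mu\bigl(s_{t+1}, \mu(s_{t+1})\bigr) \quad \text{when } \pi(\cdot \vert s_{t+1}) \text{ puts all its mass on } \mu(s_{t+1})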

The deterministic policy gradient (DPG) of the objective J, taken over the behavior state distribution:

\nabla_{\theta^\mu}J \approx \mathbb E_{s_t \sim \rho^\beta} \bigl[ \nabla_{\theta^\mu}Q(s,a \vert \theta^Q)\big\vert_{s=s_t,\, a=\mu(s_t \vert \theta^\mu)} \bigr]
\qquad\quad = \mathbb E_{s_t \sim \rho^\beta} \bigl[ \nabla_{a}Q(s,a \vert \theta^Q)\big\vert_{s=s_t,\, a=\mu(s_t)} \, \nabla_{\theta^\mu} \mu(s \vert \theta^\mu)\big\vert_{s=s_t} \bigr]
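
This chain rule is exactly what autodiff computes when the critic is evaluated at the actor's output. A minimal sketch, assuming `critic` and `actor` are PyTorch modules and `actor_opt` is an optimizer over the actor's parameters (all names are illustrative):

```python
# Deterministic policy gradient step: ascend on E[Q(s, mu(s | theta^mu) | theta^Q)].
def actor_update(critic, actor, actor_opt, s):
    actor_loss = -critic(s, actor(s)).mean()   # minimizing -J is ascending J
    actor_opt.zero_grad()
    actor_loss.backward()                      # autograd applies grad_a Q * grad_theta mu (the chain rule above)
    actor_opt.step()
```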

DDPG improvements:

  • Exploration is handled separately from learning: build an exploration policy by adding noise sampled from a noise process N to the actor policy
    \mu'(s_t) = \mu(s_t \vert \theta_t^\mu) + N
  • Loss function:
    L = {1 \over N}\sum_i \bigl(y_i - Q(s_i, a_i \vert \theta^Q)\bigr)^2
    with \qquad y_i = r_i + \gamma Q'(s_{i+1}, \mu'(s_{i+1} \vert \theta^{\mu'}) \vert \theta^{Q'})
  • Parameter updates: soft updates of the two target networks (a combined sketch of all three pieces follows this list)
    \theta ^ {Q'} \leftarrow \tau \theta ^ Q + (1-\tau) \theta ^ {Q'}
    \theta ^ {\mu'} \leftarrow \tau \theta ^\mu +(1 - \tau)\theta^{\mu'}
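
As referenced above, a minimal PyTorch sketch putting the three pieces together. All network, optimizer, and hyperparameter names here are illustrative assumptions, including plain Gaussian noise as a simple stand-in for the noise process N and the values chosen for gamma and tau:

```python
import torch
import torch.nn.functional as F

def select_action(actor, s, noise_std=0.1):
    """Exploration policy mu'(s_t) = mu(s_t | theta^mu) + N (Gaussian stand-in for N)."""
    with torch.no_grad():
        a = actor(s)
        return a + noise_std * torch.randn_like(a)

def ddpg_update(critic, critic_target, actor, actor_target,
                critic_opt, actor_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG update on a sampled minibatch (s_i, a_i, r_i, s_{i+1})."""
    s, a, r, s_next = batch

    # Target: y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1} | theta^mu') | theta^Q')
    with torch.no_grad():
        y = r + gamma * critic_target(s_next, actor_target(s_next))

    # Critic: L = 1/N * sum_i (y_i - Q(s_i, a_i | theta^Q))^2
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: deterministic policy gradient through the critic
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft target updates: theta' <- tau * theta + (1 - tau) * theta'
    for target, online in ((critic_target, critic), (actor_target, actor)):
        for p_t, p in zip(target.parameters(), online.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```

The target networks \theta^{Q'} and \theta^{\mu'} are typically initialized as copies of the online networks and afterwards only track them through the soft updates above.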
