
Author: MasterXiong | Published 2021-01-12 20:48

Chapter 9: On-policy Prediction with Approximation

From this chapter on, we move from tabular methods to approximate methods to tackle the curse of dimensionality in the state space. Instead of storing a lookup table of state values as in tabular methods, approximate methods learn state values with function approximation, i.e., \hat{v}(s, w) \approx v_\pi(s).
However, approximate methods are not a simple combination of RL and supervised learning. Compared to tabular RL methods, approximate methods introduce the challenge of generalization: a change to w based on one state also changes the values of all other states, whereas the values of different states are decoupled in the tabular case. In other words, with function approximation we lose the policy improvement theorem that holds in the tabular case. Compared to standard supervised learning on a static distribution, function approximation in RL raises new issues such as nonstationarity (the training samples are collected online from a time-varying policy), bootstrapping (the learning target itself depends on the parameters), and delayed targets.
This chapter starts with the simplest case: on-policy prediction (value estimation) with approximation under a fixed policy.

The Prediction Objective

The prediction problem can be seen as a supervised learning problem, where the data distribution is the on-policy distribution \mu(s) generated by the policy \pi, i.e., the normalized fraction of time spent in s when following \pi.
Under the on-policy distribution, the learning objective is defined as \overline{VE}(w) = \sum_{s \in \mathcal{S}} \mu(s) \big[ v_\pi(s) - \hat{v}(s, w) \big]^2. However, we need to note that
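To make the objective concrete, here is a minimal sketch of computing \overline{VE}(w) for a linear approximator on a toy three-state problem. The distribution mu, the true values v_pi, and the feature vectors x are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative toy problem (all numbers are assumptions for the sketch).
mu = np.array([0.5, 0.3, 0.2])      # on-policy state distribution mu(s)
v_pi = np.array([1.0, 0.5, -0.2])   # true state values under pi
x = np.array([[1.0, 0.0],           # feature vector x(s) for each state
              [0.5, 0.5],
              [0.0, 1.0]])

def ve(w):
    """Mean squared value error weighted by the on-policy distribution."""
    v_hat = x @ w                   # linear approximation: v_hat(s, w) = w^T x(s)
    return np.sum(mu * (v_pi - v_hat) ** 2)

print(ve(np.zeros(2)))  # ≈ 0.583, since 0.5*1.0 + 0.3*0.25 + 0.2*0.04 = 0.583
```

Note that states with high \mu(s) dominate the objective, so the approximator trades off accuracy in rarely visited states for accuracy in frequently visited ones.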

Remember that our ultimate purpose--the reason we are learning a value function--is to find a better policy. The best value function for this purpose is not necessarily the best for minimizing \overline{VE}. Nevertheless, it is not yet clear what a more useful alternative goal for value prediction might be.

Stochastic-gradient and Semi-gradient Methods

If we knew the true state values, then we could learn w with standard SGD as follows: w_{t+1} = w_t - \frac{1}{2} \alpha \nabla \big[ v_\pi(S_t) - \hat{v}(S_t, w_t) \big]^2 = w_t + \alpha \big[ v_\pi(S_t) - \hat{v}(S_t, w_t) \big] \nabla \hat{v}(S_t, w_t). However, the challenge in RL is that we do not have a ground-truth v_\pi(S_t) as in supervised learning. Instead, we need to use a backed-up estimate U_t as the target.
If U_t is an unbiased estimate, as in Monte Carlo (U_t = G_t), then w_t is guaranteed to converge to a local optimum under the usual stochastic approximation conditions on the decreasing step size \alpha.
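The Monte Carlo case can be sketched as follows: since the return G_t does not depend on w, this is true SGD on the squared error. The episode format and feature matrix are assumptions of the sketch.

```python
import numpy as np

def gradient_mc_update(w, episode, x, alpha=0.1, gamma=1.0):
    """One sweep of gradient Monte Carlo over a completed episode.

    episode: list of (state_index, reward) pairs, in time order;
    x: feature matrix, so v_hat(s, w) = w @ x[s].
    """
    G = 0.0
    # Iterate backwards so returns G_t accumulate in one pass.
    for s, r in reversed(episode):
        G = r + gamma * G
        v_hat = w @ x[s]
        # True SGD step: the target G does not depend on w.
        w = w + alpha * (G - v_hat) * x[s]
    return w
```

A usage example with identity (tabular) features: after one episode [(0, 1.0), (1, 0.0)], the weight for state 0 moves a step of size alpha toward its return of 1.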
However, for TD, the target R + \gamma \hat{v}(S', w) is not independent of w_t. Consequently, we cannot apply standard SGD; instead we use semi-gradient methods, which take into account only the gradient of the current estimate with respect to w, while ignoring the dependence of the target on w. Although semi-gradient methods converge less robustly, they do converge reliably in the linear case, and more importantly, they typically enable significantly faster, fully continual, and online learning.

Linear Methods and Least-Squares TD

When the approximation function is linear, i.e., \hat{v}(s, w) = w^T x(s), we can write the semi-gradient TD(0) update explicitly as:
\begin{aligned} w_{t+1} &= w_t + \alpha (R_{t+1} + \gamma w_t^T x_{t+1} - w_t^T x_t) x_t \\ &= w_t + \alpha \big[ R_{t+1} x_t - x_t(x_t - \gamma x_{t+1})^T w_t \big] \end{aligned} In expectation, we have \mathbb{E}[w_{t+1}|w_t] = w_t + \alpha (b - Aw_t), where b = \mathbb{E}[R_{t+1} x_t] and A = \mathbb{E}[x_t (x_t - \gamma x_{t+1})^T]. Thus the converged solution, i.e., the TD fixed point, satisfies w_{\text{TD}} = A^{-1}b. Consequently, instead of running an iterative algorithm like SGD, we can estimate A and b from samples and compute the solution directly. This is known as the Least-Squares TD (LSTD) algorithm, and its complexity is O(d^2) per step, where d is the number of features. Inverting \hat{A} from scratch would cost O(d^3); however, \hat{A} is a sum of rank-one outer products, so its inverse can be maintained incrementally in O(d^2) per step using the Sherman-Morrison formula.
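The incremental inverse can be sketched as below. The epsilon initialization of \hat{A}^{-1} (so the matrix is invertible before enough samples arrive) follows the standard LSTD formulation; the class interface itself is an assumption of this sketch.

```python
import numpy as np

class LSTD:
    """Sketch of Least-Squares TD for linear value prediction.

    Maintains the inverse of A_hat incrementally via the Sherman-Morrison
    formula, giving O(d^2) per step instead of O(d^3) re-inversion.
    """
    def __init__(self, d, epsilon=1e-3):
        self.inv_a = np.eye(d) / epsilon   # (A_hat)^{-1}, with A_hat = epsilon * I initially
        self.b = np.zeros(d)               # b_hat accumulates R_{t+1} x_t

    def update(self, x_t, r, x_next, gamma=0.9):
        # A_hat grows by the rank-one outer product x_t (x_t - gamma x_{t+1})^T.
        u = x_t - gamma * x_next
        v = self.inv_a.T @ u
        Ax = self.inv_a @ x_t
        # Sherman-Morrison: (A + x u^T)^{-1} = A^{-1} - A^{-1} x u^T A^{-1} / (1 + u^T A^{-1} x)
        self.inv_a -= np.outer(Ax, v) / (1.0 + v @ x_t)
        self.b += r * x_t

    def weights(self):
        return self.inv_a @ self.b         # w_TD = (A_hat)^{-1} b_hat
```

With a single feature, a reward of 1, and a terminal successor (zero features), repeated updates drive the estimated weight toward the true value of 1, up to the small bias introduced by epsilon.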

