[Chapter 2] Value Iteration and Policy Iteration

Author: 超级超级小天才 | Published 2021-05-28 23:02

We now know that the key to computing an optimal policy is computing the value function. But how? (The following contents all assume infinite-horizon problems.) The solutions can be roughly divided into two categories: value iteration and policy iteration.

Value Iteration

Based on the formula for expected utility with a discount factor:

U^{\pi}(s)=E\left[\sum_{t=0}^{\infty}{\gamma^t R(s_t)}\right]

and the transition function P(s'|s,a) defined in the MDP model, there is an equation that the value function intuitively satisfies:

V(s)=R(s)+\gamma \max_{a \in A(s)}\sum_{s'}{P(s'|s,a)V(s')}

which is called the Bellman equation.

Unfortunately, the Bellman equation is very difficult to solve directly, because the max operator makes the system of equations nonlinear; this is why we need the value iteration method.

Based on the Bellman equation, we can derive the Bellman update:

U_{t+1}(s) \leftarrow R(s)+\gamma \max_{a \in A(s)}\sum_{s'}{P(s'|s,a)U_t(s')}

where t denotes the iteration step.

The value iteration algorithm can be described as follows:

[Figure: pseudocode of the value iteration algorithm]

We can initialize the utilities of all states to 0 and apply the Bellman update step by step until the values converge (all values reach their unique fixed points). This saves much more time than solving the Bellman equations directly.
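As a concrete illustration, here is a minimal Python sketch of value iteration on a hypothetical 2-state, 2-action MDP; the rewards, transition probabilities, and discount factor below are made up for the example, not taken from the text:

```python
# Value iteration on a small hypothetical MDP.
# States: 0, 1; actions: 'a', 'b'. All numbers are illustrative.
GAMMA = 0.9                                 # discount factor
STATES = [0, 1]
ACTIONS = ['a', 'b']
R = {0: 0.0, 1: 1.0}                        # state reward R(s)
P = {                                       # transition model P(s'|s,a)
    (0, 'a'): {0: 0.9, 1: 0.1},
    (0, 'b'): {0: 0.2, 1: 0.8},
    (1, 'a'): {0: 0.0, 1: 1.0},
    (1, 'b'): {0: 0.5, 1: 0.5},
}

def value_iteration(eps=1e-6):
    U = {s: 0.0 for s in STATES}            # initialize all utilities to 0
    while True:
        # Bellman update: U_{t+1}(s) = R(s) + gamma * max_a sum_{s'} P(s'|s,a) U_t(s')
        U_new = {
            s: R[s] + GAMMA * max(
                sum(p * U[s2] for s2, p in P[(s, a)].items())
                for a in ACTIONS
            )
            for s in STATES
        }
        # Stop once the utilities have (approximately) reached their fixed point.
        if max(abs(U_new[s] - U[s]) for s in STATES) < eps:
            return U_new
        U = U_new
```

Starting from all-zero utilities, the repeated updates converge because the Bellman update is a contraction in the discounted setting.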

Policy Iteration

Now consider this: in the first method we update the values/utilities of each state, but in policy iteration we initialize and update policies instead. The idea is that finding the optimal policy sometimes does not require a highly accurate value function, e.g. when one action is clearly better than the others. So we give up computing values via the Bellman update, initialize the policy as {\pi}_0, and then update it. To do this, we alternate the following two main steps:

  • Policy evaluation: given the current policy {\pi}_t, calculate the next-step utilities U_{t+1}:

U_{t+1}(s) \leftarrow R(s)+\gamma \sum_{s'}{P(s'|s,{\pi}_t(s))U_t(s')}

This update formula is similar to, but simpler than, the Bellman update: there is no need to compute a maximum over all possible actions; we simply use the action given by the policy at step t.

  • Policy improvement: use the one-step look-ahead utilities U_{t+1}(s) to compute the new policy {\pi}_{t+1}.

To improve the policy, we need to find a better policy to replace the current one. To do so, we introduce the action-value function, or Q-function, for policy {\pi}:

Q^{\pi}(s,a)=R(s)+\gamma \sum_{s'}{P(s'|s,a)U^{\pi}(s')}

The main difference between the Q-function and the value function is that the Q-function gives the expected utility after fixing the exact action a. If the sizes of the state space and action space are |S| and |A| respectively, then each policy {\pi} has |S| values, one per state, and |S| \times |A| Q-values, one per state-action pair.
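Given the utilities under some policy, the Q-function above is a one-step look-ahead. A small sketch on the same kind of hypothetical 2-state MDP (all numbers are illustrative, not from the text):

```python
# Q-function via one-step look-ahead: Q(s,a) = R(s) + gamma * sum_s' P(s'|s,a) U(s').
# The MDP below is a made-up example.
GAMMA = 0.9
R = {0: 0.0, 1: 1.0}
P = {
    (0, 'a'): {0: 0.9, 1: 0.1},
    (0, 'b'): {0: 0.2, 1: 0.8},
    (1, 'a'): {0: 0.0, 1: 1.0},
    (1, 'b'): {0: 0.5, 1: 0.5},
}

def q_value(s, a, U):
    """Expected utility of taking action a in state s, then following the policy behind U."""
    return R[s] + GAMMA * sum(p * U[s2] for s2, p in P[(s, a)].items())
```

Comparing Q-values across actions in a state is exactly what the policy improvement step below needs.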

The policy improvement theorem is then easy to state: suppose a new policy {\pi}' satisfies, for all s \in S:

Q^{\pi}(s,{\pi}'(s)) \geq U^{\pi}(s)

Then for all s \in S:

U^{{\pi}'}(s) \geq U^{\pi}(s)

In this case, {\pi}' is at least as good as {\pi}, so we can improve the policy from {\pi} to {\pi}'.

Alternating the two steps above until the policy no longer changes yields the optimal policy. This is the policy iteration algorithm, described as follows:

[Figure: pseudocode of the policy iteration algorithm]
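The alternation of evaluation and improvement can be sketched in Python; this version does a single evaluation backup per iteration, and the 2-state MDP is again a made-up example:

```python
# Policy iteration sketch: alternate one-step policy evaluation and greedy improvement.
# The 2-state MDP below is hypothetical.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = ['a', 'b']
R = {0: 0.0, 1: 1.0}
P = {
    (0, 'a'): {0: 0.9, 1: 0.1},
    (0, 'b'): {0: 0.2, 1: 0.8},
    (1, 'a'): {0: 0.0, 1: 1.0},
    (1, 'b'): {0: 0.5, 1: 0.5},
}

def expected_utility(s, a, U):
    """sum_{s'} P(s'|s,a) U(s')"""
    return sum(p * U[s2] for s2, p in P[(s, a)].items())

def policy_iteration():
    pi = {s: 'a' for s in STATES}           # arbitrary initial policy pi_0
    U = {s: 0.0 for s in STATES}
    while True:
        # Policy evaluation: one backup under the fixed policy (no max needed).
        U = {s: R[s] + GAMMA * expected_utility(s, pi[s], U) for s in STATES}
        # Policy improvement: act greedily with respect to the new utilities.
        pi_new = {s: max(ACTIONS, key=lambda a: expected_utility(s, a, U))
                  for s in STATES}
        if pi_new == pi:                    # no change in policy: stop
            return pi, U
        pi = pi_new
```

Note that the returned utilities may still be rough; with a single evaluation backup per round, the algorithm stops as soon as the greedy policy stops changing.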

More Accurate Policy Iteration Methods

The policy iteration method above, which performs one step each of policy evaluation and policy improvement, is called generalized policy iteration; it is a very general and common scheme. However, there are more accurate variants based on more accurate policy evaluation.

  • For each policy evaluation step, instead of updating the utilities for only one time step, solve the following system of equations for the exact utilities:

U_t(s)=R(s)+\gamma \sum_{s'}{P(s'|s,{\pi}_t(s))U_t(s')}

For N states there are N linear equations (with the policy fixed, the max operator is gone), and they can be solved in O(N^3) time with basic linear algebra. This is more expensive but exact; however, it is overkill for almost all problems, since we usually do not need the most accurate utilities.
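With the policy fixed, the system is linear and can be solved directly. A sketch using NumPy, where the 2-state rewards, transition matrix, and policy are illustrative assumptions:

```python
import numpy as np

GAMMA = 0.9
R = np.array([0.0, 1.0])                    # R(s) for states 0 and 1
# Row s of P_pi holds P(s'|s, pi(s)) for a fixed (hypothetical) policy pi.
P_pi = np.array([[0.2, 0.8],
                 [0.0, 1.0]])

# Rearranging U(s) = R(s) + gamma * sum_{s'} P(s'|s,pi(s)) U(s')
# gives the linear system (I - gamma * P_pi) U = R.
U = np.linalg.solve(np.eye(2) - GAMMA * P_pi, R)
```

One O(N^3) solve replaces the whole iterative evaluation loop, at the cost of building and factoring the N x N matrix.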

  • Another method performs k iterations of the evaluation update in each policy evaluation step, instead of running to convergence or stopping after one step; this is called modified policy iteration. The value of k can be chosen according to the environment and the problem.
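The k-step evaluation used by modified policy iteration is a small variation on the one-step backup; a sketch on the same kind of hypothetical MDP (numbers are illustrative):

```python
# Modified policy iteration's evaluation step: k backups under a fixed policy,
# instead of one step or full convergence. The MDP below is a made-up example.
GAMMA = 0.9
STATES = [0, 1]
R = {0: 0.0, 1: 1.0}
P = {                                       # only the fixed policy's actions are needed
    (0, 'b'): {0: 0.2, 1: 0.8},
    (1, 'a'): {0: 0.0, 1: 1.0},
}

def evaluate_k_steps(pi, U, k):
    """Apply the fixed-policy backup k times starting from utilities U."""
    for _ in range(k):
        U = {s: R[s] + GAMMA * sum(p * U[s2] for s2, p in P[(s, pi[s])].items())
             for s in STATES}
    return U
```

Larger k gives utilities closer to the exact linear-system solution; k = 1 recovers the generalized scheme above.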
