
Notes on David Silver's RL Course

Author: best___me | Published 2018-07-05 16:44

    1. Introduction

    Characteristics of reinforcement learning

    1. There is no supervisor, only a reward signal.  2. Feedback is delayed, not instantaneous.  3. Time really matters: the data is sequential.  4. The agent's actions affect the subsequent data it receives.

    The reward hypothesis

    All goals can be described by the maximisation of expected cumulative reward.
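
    In the course's notation, the cumulative discounted reward from time-step t is the return

    G_t = R_{t+1} + \gamma R_{t+2} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}, \qquad \gamma \in [0, 1]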

    Some concepts

    A policy defines the agent's behaviour.

    A value function is a prediction of future reward.

    A model predicts what the environment will do next: it predicts the next state and the next reward.
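
    In the course's notation the model has two parts, state transitions and expected rewards:

    \mathcal{P}^a_{ss'} = \mathbb{P}[S_{t+1} = s' \mid S_t = s, A_t = a], \qquad \mathcal{R}^a_s = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]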

    Categorising RL agents

    ---- Value Based (value function, no policy); Policy Based (policy, no value function); Actor Critic (both policy and value function)

    ---- Model Free (policy and/or value function, no model); Model Based (policy and/or value function, plus a model)

    Learning and Planning

    --- Learning: the environment is initially unknown; the agent interacts with the environment and improves its policy.

    --- Planning: a model of the environment is known; the agent performs computations with its model (without any external interaction) and improves its policy.

    Exploration and Exploitation

    RL is like trial-and-error learning. Exploration finds more information about the environment; exploitation uses the known information to maximise reward.

    Prediction and Control

    Prediction: evaluate the future (given a policy); Control: optimise the future (find the best policy).

    2. Markov Decision Process

    An MDP describes an environment, and this environment is fully observable. A Markov process is a tuple <S, P>, where S is a finite set of states and P is the state transition probability matrix.
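
    The Markov property and the entries of the transition matrix, in the course's notation:

    \mathbb{P}[S_{t+1} \mid S_t] = \mathbb{P}[S_{t+1} \mid S_1, \ldots, S_t], \qquad \mathcal{P}_{ss'} = \mathbb{P}[S_{t+1} = s' \mid S_t = s]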

    Markov Reward Process --- Value Function

    v(s) gives the long-term value of state s, i.e. its expected return.
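
    Concretely, the MRP value function is the expected return starting from state s:

    v(s) = \mathbb{E}[G_t \mid S_t = s]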

    Markov Decision Process

    An MDP is a Markov reward process with decisions, i.e. with actions.

    Policy

    A policy π is a distribution over actions given states.
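
    In symbols:

    \pi(a \mid s) = \mathbb{P}[A_t = a \mid S_t = s]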

    Value Function

    state-value function

    The state-value function is the expected return starting from state s and then following policy π.
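
    In the course's notation:

    v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]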

    action-value function

    The action-value function is the expected return starting from state s, taking action a, and then following policy π.
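
    In the course's notation:

    q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]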

    optimal policy

    An optimal policy is defined via a partial ordering over policies (see below).
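
    The ordering and the optimal value functions, as defined in the course:

    \pi \geq \pi' \iff v_\pi(s) \geq v_{\pi'}(s) \ \ \forall s, \qquad v_*(s) = \max_\pi v_\pi(s), \qquad q_*(s, a) = \max_\pi q_\pi(s, a)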

    3. Planning by Dynamic Programming

    Dynamic programming can be used to solve problems with two properties: 1. optimal substructure, 2. overlapping subproblems.

    Iterative Policy Evaluation

    Problem: evaluate a given policy π. Solution: iterative application of the Bellman expectation backup. Backups can be applied synchronously or asynchronously; a sketch of the synchronous case follows.
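
    A minimal sketch of the synchronous backup, assuming hypothetical tabular arrays P[a] (one transition matrix per action) and R[a] (one expected-reward vector per action); these inputs are illustrative, not taken from the lecture:

```python
import numpy as np

def iterative_policy_evaluation(P, R, policy, gamma=0.9, theta=1e-6):
    """Evaluate a fixed policy by repeated synchronous Bellman expectation backups.

    Hypothetical tabular inputs:
      P[a]         -- (n_states, n_states) transition matrix for action a
      R[a]         -- (n_states,) expected immediate reward for action a
      policy[s, a] -- probability of choosing action a in state s
    """
    n_states, n_actions = policy.shape
    V = np.zeros(n_states)
    while True:
        # v_{k+1}(s) = sum_a pi(a|s) * (R^a_s + gamma * sum_{s'} P^a_{ss'} v_k(s'))
        V_new = np.zeros(n_states)
        for a in range(n_actions):
            V_new += policy[:, a] * (R[a] + gamma * P[a] @ V)
        if np.max(np.abs(V_new - V)) < theta:
            return V_new
        V = V_new
```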


    Policy Iteration

    Consists of two steps, alternated until convergence: policy evaluation and greedy policy improvement (sketched below).
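
    A sketch of the loop, reusing the iterative_policy_evaluation sketch above (same hypothetical P and R):

```python
import numpy as np  # assumes iterative_policy_evaluation from the sketch above is in scope

def policy_iteration(P, R, gamma=0.9):
    """Alternate policy evaluation and greedy policy improvement until the policy is stable."""
    n_actions = len(P)
    n_states = P[0].shape[0]
    policy = np.full((n_states, n_actions), 1.0 / n_actions)  # start from the uniform random policy
    while True:
        V = iterative_policy_evaluation(P, R, policy, gamma)
        # greedy improvement: act greedily w.r.t. q_pi(s, a) = R^a_s + gamma * sum_{s'} P^a_{ss'} v_pi(s')
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        new_policy = np.eye(n_actions)[np.argmax(Q, axis=1)]
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```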

    Value Iteration

    Problem: find the optimal policy. Solution: iterative application of the Bellman optimality backup (sketched below).
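
    A sketch under the same assumptions about P and R:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, theta=1e-6):
    """Find optimal values by repeated Bellman optimality backups:
    v_{k+1}(s) = max_a ( R^a_s + gamma * sum_{s'} P^a_{ss'} v_k(s') ).
    """
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < theta:
            # extract a greedy policy from the converged values
            return V_new, Q.argmax(axis=1)
        V = V_new
```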

    Summary: the prediction problem (evaluating a given policy) uses the Bellman expectation equation via iterative policy evaluation; the control problem (finding the optimal policy) uses policy iteration (Bellman expectation equation plus greedy improvement) or value iteration (Bellman optimality equation).

    4. Model-Free Prediction

    The MDP is unknown; Monte-Carlo reinforcement learning learns directly from episodes of experience.

    First-Visit Monte-Carlo Policy Evaluation

    To evaluate state s, average the returns observed from only the first time-step t at which s is visited in each episode.

    Every-Visit Monte-Carlo Policy Evaluation

    To evaluate state s, average the returns observed from every time-step t at which s is visited in an episode. A sketch covering both variants follows.
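
    A minimal sketch of both variants; `episodes` is a hypothetical list of trajectories of (state, reward) pairs, where the reward is the one received on leaving that state:

```python
from collections import defaultdict

def mc_policy_evaluation(episodes, gamma=1.0, first_visit=True):
    """Tabular Monte-Carlo policy evaluation, first-visit or every-visit."""
    returns_sum = defaultdict(float)
    visit_count = defaultdict(int)
    for episode in episodes:
        # accumulate discounted returns G_t by walking the episode backwards
        G = 0.0
        returns = []
        for state, reward in reversed(episode):
            G = reward + gamma * G
            returns.append((state, G))
        returns.reverse()  # chronological order, needed for the first-visit check
        seen = set()
        for state, G_t in returns:
            if first_visit and state in seen:
                continue
            seen.add(state)
            returns_sum[state] += G_t
            visit_count[state] += 1
    # V(s) is the mean return over the counted visits to s
    return {s: returns_sum[s] / visit_count[s] for s in returns_sum}
```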

    Temporal-Difference Learning

    TD methods learn directly from episodes of experience; they are model-free (no knowledge of MDP transitions or rewards is needed) and can learn from incomplete episodes. TD updates a guess towards a guess.
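
    The simplest case, TD(0), updates the value estimate towards the TD target R_{t+1} + \gamma V(S_{t+1}):

    V(S_t) \leftarrow V(S_t) + \alpha \big( R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \big)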

    Comparing MC and TD

    TD can learn before knowing the final outcome: it learns online after every step, whereas MC must wait until the end of the episode, when the return is known.

    TD can learn without the final outcome and from incomplete sequences, whereas MC can only learn from complete sequences.

    MC has high variance and zero bias, and is not sensitive to the initial value. TD has low variance and some bias; it is usually more efficient, TD(0) converges to v_π(s), and it is more sensitive to the initial value.

    TD exploits the Markov property and is usually more effective in Markov environments; MC does not exploit it and is usually more effective in non-Markov environments.

    MC converges to the solution with minimum mean-squared error on the observed returns; TD(0) converges to the solution of the maximum-likelihood Markov model.
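
    In the batch setting these two fixed points can be written as follows (course notation, with K episodes of lengths T_k and visit counts N(s, a)):

    \text{MC minimises } \sum_{k=1}^{K} \sum_{t=1}^{T_k} \big( G_t^k - V(s_t^k) \big)^2

    \text{TD(0) fits } \hat{\mathcal{P}}^a_{ss'} = \frac{1}{N(s,a)} \sum_{k,t} \mathbf{1}\big(s_t^k, a_t^k, s_{t+1}^k = s, a, s'\big), \quad \hat{\mathcal{R}}^a_s = \frac{1}{N(s,a)} \sum_{k,t} \mathbf{1}\big(s_t^k, a_t^k = s, a\big)\, r_t^k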

    Bootstrapping and Sampling

    Bootstrapping means the update involves an estimate: DP and TD bootstrap, MC does not. Sampling means the update samples an expectation: MC and TD sample, DP does not.

    Unified View of Reinforcement Learning

    The unified view arranges the methods along two dimensions of the backup: sample backups (MC, TD) versus full-width backups (DP, exhaustive search), and deep backups (MC, exhaustive search) versus shallow backups (TD, DP).
