
Reinforcement Learning

Author: ZhSong | Published 2020-04-13 06:24

    Reinforcement Learning

    • What is Reinforcement Learning
    • Why Reinforcement Learning
    • Basics of Reinforcement Learning
    • Inside an RL agent

    What is Reinforcement Learning

    RL is an agent learning to interact with an environment based on the feedback signal (reward) it receives from the environment, in order to achieve a goal.

    In other words, the agent learns from the feedback given by the environment until it reaches its goal, much like training a dog: give it a treat when it does the right thing and a scolding when it does the wrong thing.

    Reinforcement learning is one branch of machine learning; together with supervised learning and unsupervised learning it makes up the field of machine learning.

    • Data: sparse and time-delayed reward
    • Way to learn: learn through interaction with the environment, learn from scratch
    • Goal: Maximise future rewards

    Why Reinforcement Learning

    • Learns from scratch, no need for training data
    • Can go beyond human-level performance

    Basics of Reinforcement Learning: Reward

    • A reward R_t is an immediate feedback signal
    • Indicates how well the agent is doing at step t
    • The agent's job is to maximise cumulative reward

    Maximising cumulative reward

    • Reward may be delayed
    • Actions may have long term consequences
    • It may be better to sacrifice immediate reward to gain more long-term reward

    Agent, environment and state

    • At each step t the agent:
      • Receives observation O_t
      • Receives reward R_t
      • Executes action A_t
    • The environment:
      • Receives action A_t
      • Emits observation O_{t+1}
      • Emits reward R_{t+1}
    • State S_t
      • Summary information used to determine what happens next
      • Markov property: the future depends only on the present, independent of the past
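
    The interaction loop above can be written out directly. Below is a minimal Python sketch; the ToyEnv class and its reset/step interface are hypothetical stand-ins for a real environment:

    ```python
    import random

    class ToyEnv:
        """A hypothetical 1-D corridor: states 0..4, goal at state 4."""
        def reset(self):
            self.state = 0
            return self.state                      # initial observation O_0

        def step(self, action):
            # action: 0 = left, 1 = right
            self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
            reward = 1.0 if self.state == 4 else 0.0   # reward R_{t+1}
            done = self.state == 4
            return self.state, reward, done        # O_{t+1}, R_{t+1}, terminal flag

    env = ToyEnv()
    obs = env.reset()
    done = False
    while not done:
        action = random.choice([0, 1])             # the agent executes action A_t
        obs, reward, done = env.step(action)       # the environment emits O_{t+1}, R_{t+1}
    ```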

    Partial observability

    • Partial observability: agent indirectly observes environment
      • A robot with camera vision is not told its absolute location
      • A poker-playing agent only observes the public cards

    Inside an RL agent

    • Policy: agent's behaviour function
    • Value function: how good is a state

    Policy

    • A policy is a map from state to action

      • Deterministic policy: a = \pi(s)
      • Stochastic policy: \pi(a|s) = P[A_t = a \mid S_t = s] (the probability of taking action a in state s)
    • Example:

      • Arrows represent the policy \pi(s) for each state s
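    As a data structure, a policy is simply a lookup from state to an action (deterministic) or to a distribution over actions (stochastic). A minimal sketch with hypothetical state and action names:

    ```python
    import random

    # A deterministic policy: a plain mapping from state to action
    policy = {"s0": "right", "s1": "right", "s2": "up"}
    action = policy["s0"]

    # A stochastic policy: a distribution over actions for each state
    stochastic_policy = {"s0": {"right": 0.8, "up": 0.2}}
    probs = stochastic_policy["s0"]
    action = random.choices(list(probs), weights=probs.values())[0]
    ```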

    Value Function

    • Value function is a prediction of future reward

    • Used to evaluate the goodness of a state

      V_\pi(s) = E_\pi[R_t + \gamma R_{t+1} + \gamma^2 R_{t+2}+...|S_t=s]

      Where \gamma(0\leq \gamma \le 1) gives an option to discount future reward

    • Example

      • Number represents the value V(s) for each state s.
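    The expectation in V_\pi(s) can be estimated by averaging discounted returns over sampled episodes. A minimal sketch in Python, where the reward sequences are made-up data for illustration:

    ```python
    def discounted_return(rewards, gamma=0.9):
        """Compute R_t + gamma*R_{t+1} + gamma^2*R_{t+2} + ... for one episode."""
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # Monte-Carlo estimate of V(s): average the return over episodes starting in s
    episodes_from_s = [[0, 0, 1], [0, 1], [0, 0, 0, 1]]   # hypothetical reward sequences
    v_estimate = sum(discounted_return(ep) for ep in episodes_from_s) / len(episodes_from_s)
    print(v_estimate)
    ```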

    Different RL agents

    • Value-based agent
      • Value function
      • Implicit policy
    • Policy-based agent
      • Policy
      • No value function

    Q-Learning

    • The value function is the expected future reward at a given state

    • It is used to evaluate the goodness of a state

      V_\pi(s_t)=E_\pi[R_t+\gamma R_{t+1} + \gamma^2 R_{t+2}+...|s_t]

    • Q-learning is to learn a particular function: the Q-function (i.e. the action-value function)

    • The Q-function is the expected future reward at a given state when taking a particular action

    • It is used to evaluate the goodness of a state-action pair

      Q_\pi(s_t,a_t)=E_\pi[R_t+\gamma R_{t+1} + \gamma^2 R_{t+2}+...|s_t,a_t]

    Q-Table

    • Q-learning builds a score table that records a Q value for each action at each state

    Bellman Equation

    Q(s_t,a_t)=(1-\alpha)Q(s_t,a_t)+\alpha[R(s_t,a_t)+\gamma \max_{a}Q(s_{t+1},a)]

    where \alpha is the learning rate and \gamma is the discount factor.

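    As a worked example of the update rule, here is a single step with made-up numbers (\alpha = 0.1 and \gamma = 0.9 are assumptions, not values from the text):

    ```python
    alpha, gamma = 0.1, 0.9          # learning rate and discount factor (assumed values)

    q_old = 0.5                      # current Q(s_t, a_t)
    reward = 1.0                     # R(s_t, a_t) observed after taking a_t
    max_q_next = 2.0                 # max_a Q(s_{t+1}, a) read from the Q table

    q_new = (1 - alpha) * q_old + alpha * (reward + gamma * max_q_next)
    print(q_new)                     # 0.9 * 0.5 + 0.1 * (1.0 + 0.9 * 2.0) = 0.73
    ```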

    Q-Learning Algorithm

    1. Initialise Q table

    2. For each episode
      a. Select a random initial state
      b. Do

         • Select an action (e.g. randomly)

         • Perform that action and move to the next state

         • Get the reward

         • Update Q(s_t,a_t)=(1-\alpha)Q(s_t,a_t)+\alpha[R(s_t,a_t)+\gamma \max_{a}Q(s_{t+1},a)]

      End Do when the goal state is reached

    End For
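
    Putting the steps together, a minimal tabular sketch in Python might look like the following. It reuses the hypothetical ToyEnv from the interaction-loop sketch earlier, and the hyperparameters are arbitrary choices:

    ```python
    import random

    n_states, n_actions = 5, 2       # sized for the hypothetical ToyEnv sketched earlier
    alpha, gamma = 0.1, 0.9          # assumed learning rate and discount factor

    # 1. Initialise the Q table (same arbitrary value everywhere, here zero)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    env = ToyEnv()                   # the toy environment from the interaction-loop sketch
    for episode in range(500):       # 2. for each episode
        state = env.reset()          #    a. initial state
        done = False
        while not done:              #    b. do ... until the goal state is reached
            action = random.randrange(n_actions)         # select an action (here: randomly)
            next_state, reward, done = env.step(action)  # perform it, observe reward
            # update Q(s_t, a_t) with the rule above
            Q[state][action] = (1 - alpha) * Q[state][action] + \
                alpha * (reward + gamma * max(Q[next_state]))
            state = next_state
    ```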

    Summary: Q-Learning

    • Q-learning evaluates which action to take based on a Q table that gives the value of being in a certain state and taking a certain action in that state.
    • The Q table is updated iteratively with the Bellman equation as the agent plays.
    • Before exploring the environment, the Q table holds the same arbitrary fixed value (e.g. zero) everywhere, but as time goes by it gives us a better and better approximation.

    Deep Q Network

    Used for problems with a very large number of states, where the Q-table of plain Q-learning becomes infeasible to store and update.

    For deep Q-learning:

    • Input: state

    • Output: a Q-value for each action
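
    A minimal sketch of that input/output mapping in PyTorch; the state dimension, number of actions and layer sizes are arbitrary assumptions:

    ```python
    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """Maps a state vector to one Q-value per action."""
        def __init__(self, state_dim=4, n_actions=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64),
                nn.ReLU(),
                nn.Linear(64, n_actions),   # output: Q(s, a) for every action a
            )

        def forward(self, state):
            return self.net(state)

    q_net = QNetwork()
    state = torch.randn(1, 4)               # a dummy state vector
    q_values = q_net(state)                 # shape (1, 2): one Q-value per action
    action = q_values.argmax(dim=1).item()  # greedy action from the predicted Q-values
    ```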

    Drawbacks of deep Q-learning

    • Cannot handle continuous action spaces

    • Cannot learn stochastic policies, since the policy is deterministically computed from the Q function

    Learn the policy directly

    • Value function: how good is an action at a given state

    • Policy: the agent's behaviour function \pi(a|s)

    Policy Gradient

    • Deep Q-learning: approximate Q(s,a) and infer the policy from it

    • Policy gradient: directly estimate the policy \pi_\theta(a|s)

    Basic idea

    1. Start with an arbitrary random policy

    2. Play the game for a while and sample some actions

    3. Increase the probability of actions that lead to high reward, and decrease the probability of actions that lead to low reward

    Find the best policy: two steps

    • Measure the quality of the policy \pi_\theta (where \theta represents the network parameters) by defining a score function J(\theta)

    • Use policy gradient ascent to update the parameters \theta so as to improve J(\theta)

    Step 1: Measure the policy

    • Store all transitions (s_t, a_t, r_t) from all episodes the agent plays with the current policy \pi_\theta

    Step 2: Policy gradient ascent

    • The idea is to compute the gradient with respect to the current policy's parameters and update them in the direction of the greatest increase.

      • Policy: \pi_\theta(a|s)

      • Objective function: J(\theta)=E_{\pi_\theta}[R_t+\gamma R_{t+1}+\gamma^2 R_{t+2}+...]

      • Gradient: \nabla_\theta J(\theta)

      • Update: \theta \leftarrow \theta + \alpha \nabla_\theta J(\theta)
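
    A minimal REINFORCE-style sketch of the two steps in PyTorch. Using the discounted episode return as the score and a small softmax network for \pi_\theta are standard choices made here for illustration, not details taken from the text:

    ```python
    import torch
    import torch.nn as nn

    policy = nn.Sequential(                       # pi_theta(a|s): state -> action probabilities
        nn.Linear(4, 64), nn.ReLU(),
        nn.Linear(64, 2), nn.Softmax(dim=-1),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

    # Step 1: transitions (s_t, a_t, r_t) stored while playing with the current policy
    states = torch.randn(10, 4)                   # dummy data standing in for one episode
    actions = torch.randint(0, 2, (10,))
    rewards = torch.rand(10)

    # discounted return for every step of the episode
    gamma, returns, g = 0.9, [], 0.0
    for r in reversed(rewards.tolist()):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)

    # Step 2: gradient ascent on J(theta); minimising the negative is the same thing
    log_probs = torch.log(policy(states).gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(log_probs * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # theta <- theta + alpha * grad J(theta)
    ```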

    Important issues: Model-based RL

    • Deep Q-learning and policy gradient are model-free algorithms

    • A model predicts what the environment will do next

    • A model-based RL agent has two parts:

      1. one to predict the next state

      2. one to predict the next immediate reward
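
    A minimal sketch of those two parts as placeholder networks; the sizes and the one-hot action encoding are assumptions for illustration:

    ```python
    import torch
    import torch.nn as nn

    state_dim, action_dim = 4, 2

    # part 1: predict the next state from (state, action)
    transition_model = nn.Sequential(
        nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
        nn.Linear(64, state_dim),
    )

    # part 2: predict the next immediate reward from (state, action)
    reward_model = nn.Sequential(
        nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    s = torch.randn(1, state_dim)
    a = torch.tensor([[1.0, 0.0]])               # one-hot encoded action
    x = torch.cat([s, a], dim=1)
    predicted_next_state = transition_model(x)   # what the environment will do next
    predicted_reward = reward_model(x)
    ```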

    Actor Critic

    • Policy gradient may converge more slowly than deep Q-learning; it can take longer to train and need more data.

    • Actor-critic: a hybrid between value-based learning and policy-based learning
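
    A minimal sketch of the hybrid: an actor network that outputs the policy and a critic network that outputs a state value; the architecture is an assumption for illustration:

    ```python
    import torch
    import torch.nn as nn

    state_dim, n_actions = 4, 2

    actor = nn.Sequential(                        # policy-based part: pi(a|s)
        nn.Linear(state_dim, 64), nn.ReLU(),
        nn.Linear(64, n_actions), nn.Softmax(dim=-1),
    )
    critic = nn.Sequential(                       # value-based part: V(s)
        nn.Linear(state_dim, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    s = torch.randn(1, state_dim)
    action_probs = actor(s)                       # the actor chooses actions
    state_value = critic(s)                       # the critic scores the state to guide the actor
    ```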

    Exploration vs Exploitation

    Reinforcement learning is trial-and-error learning; the agent should discover a good policy from its experience of interacting with the environment

    • Exploration finds more information about the environment

    • Exploitation exploits known information to maximise the reward

    \epsilon-greedy exploration

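    A minimal sketch of \epsilon-greedy action selection over one row of a Q-table (the \epsilon value is an arbitrary assumption):

    ```python
    import random

    def epsilon_greedy(q_row, epsilon=0.1):
        """Explore with probability epsilon, otherwise exploit the best known action."""
        if random.random() < epsilon:
            return random.randrange(len(q_row))                 # exploration: random action
        return max(range(len(q_row)), key=lambda a: q_row[a])   # exploitation: greedy action

    action = epsilon_greedy([0.2, 0.7, 0.1])                    # picks index 1 most of the time
    ```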

    Credit assignment problem

    • In RL, when we lose an episode we assume that all of the actions we took in it were bad, and we reduce the likelihood of those actions in the future

    • However, for most of the episode we may actually have been doing very well

    • Credit assignment problem: which actions led to the reward we get in the future?

    • Sparse reward setting: we only get a reward after the entire episode

    Style of play

    Learning outcome

    • What is policy gradient

    • Why do we need policy gradient (vs Q-learning)

    • How to find the best policy with policy gradient

    • Important issues in RL

      • Model-based RL

      • Actor-critic

      • Exploration and exploitation

      • Credit assignment problem

      • Style of play in RL
