Reinforcement Learning
- What is Reinforcement Learning
- Why Reinforcement Learning
- Basics of Reinforcement Learning
- Inside an RL agent
What is Reinforcement Learning
RL is an agent learning to interact with an environment based on a feedback signal (reward) it receives from the environment, in order to achieve a goal.
In other words, the agent learns from the environment's feedback until it reaches its goal, much like training a dog: give it a treat when it does the right thing and scold it when it does the wrong thing.
Reinforcement learning is one branch of machine learning; together with supervised learning and unsupervised learning, it makes up the field of machine learning.
- Data: sparse and time-delayed rewards
- Way to learn: learn through interaction with the environment, learning from scratch
- Goal: maximise future rewards
Why Reinforcement Learning
- Learns from scratch, with no need for training data
- Can go beyond human-level performance
Basics of Reinforcement Learning: Reward
- A reward R_t is an immediate feedback signal
- It indicates how well the agent is doing at step t
- The agent's job is to maximise cumulative reward
Maximising reward
- Reward may be delayed
- Actions may have long term consequences
- It may be better to sacrifice immediate reward to gain more long-term reward
Agent, Environment and State
- At each step t the agent:
- Receives observation
- Receives reward
- Executes action
- The environment:
- Receives action
- Emits observation
- Emits reward
- State
- Summary information used to determine what happens next
- Markov property: the future depends only on the present, independent of the past
Partially Observable Environments
- Partial observability: agent indirectly observes environment
- A robot with camera vision is not told its absolute location
- A poker-playing agent only observes the public cards
Inside an RL agent
- Policy: agent's behaviour function
- Value function: how good is a state
Policy
- A policy is a map from state to action
- Deterministic policy: a = π(s)
- Stochastic policy: π(a|s) = P[A_t = a | S_t = s], the probability of taking action a in state s
- Example: arrows represent the policy π(s) for each state s (a small code sketch of both policy types follows below)
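A minimal sketch of the two policy types in Python, using a hypothetical 2×2 grid world; the states, actions, and the uniform distribution are illustrative, not part of the original example.

```python
import random

# Hypothetical grid-world states and actions (illustrative only).
states = ["s0", "s1", "s2", "s3"]
actions = ["up", "down", "left", "right"]

# Deterministic policy: each state maps to exactly one action, a = pi(s).
deterministic_policy = {"s0": "right", "s1": "down", "s2": "right", "s3": "up"}

# Stochastic policy: each state maps to a distribution over actions, pi(a|s).
stochastic_policy = {s: {a: 1.0 / len(actions) for a in actions} for s in states}

def act(state, stochastic=True):
    """Pick an action in `state` according to the chosen policy type."""
    if stochastic:
        dist = stochastic_policy[state]
        return random.choices(list(dist), weights=list(dist.values()), k=1)[0]
    return deterministic_policy[state]

print(act("s0", stochastic=False))  # always 'right'
print(act("s0"))                    # sampled from pi(.|s0)
```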
Value Function
- The value function is a prediction of future reward
- It is used to evaluate the goodness of a state: v(s) = E[R_{t+1} + γR_{t+2} + γ²R_{t+3} + ... | S_t = s]
- where γ gives an option to discount future reward
- Example: the number in each cell represents the value v(s) of that state s (a short sketch of computing a discounted return follows below)
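A minimal sketch of computing such a discounted return in Python; the reward sequence and γ = 0.9 are illustrative.

```python
def discounted_return(rewards, gamma=0.9):
    """Compute R_{t+1} + gamma*R_{t+2} + gamma^2*R_{t+3} + ..."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Example: rewards of 1, 0, 0, 10 observed after the current state.
print(discounted_return([1, 0, 0, 10]))  # 1 + 0.9**3 * 10 = 8.29
```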
Different RL agents
- Value-based agent
- Value function
- Implicit policy
- Policy-based agent
- Policy
- No value function
Q-Learning
- The value function is the expected future reward at a given state
- It is used to evaluate the goodness of a state
- Q-learning learns a particular function: the Q-function (i.e. the action-value function)
- The Q-function is the expected future reward at a given state when taking a particular action
- It is used to evaluate the goodness of a state-action pair
Q-Table
- Q-learning builds a score table that records the Q value for each action at each state
Bellman Equation
- The Q table is updated with the standard Bellman-equation form of the Q-learning rule: Q(s, a) ← Q(s, a) + α [r + γ max_{a'} Q(s', a') − Q(s, a)]
Q-Learning Algorithm
- Initialise the Q table
- For each episode
a. Select a random initial state
b. Do
- Select an action (e.g. randomly)
- Perform that action and then go to the next state
- Get the reward
- Update Q(s, a) using the Bellman equation
End Do when the goal state is reached
End For
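A minimal sketch of this loop in Python with NumPy, assuming a toy environment: the `step` helper, the number of states and actions, the goal state, and the hyperparameters are all illustrative. Action selection here is ε-greedy (see the exploration vs exploitation section later).

```python
import numpy as np

# Hypothetical toy problem: 6 states on a line, 2 actions, goal state 5.
n_states, n_actions, goal_state = 6, 2, 5
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Illustrative environment: action 1 moves right, action 0 moves left.
    Reward 1 only when the goal state is reached (sparse reward)."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == goal_state else 0.0
    return next_state, reward

Q = np.zeros((n_states, n_actions))            # 1. initialise the Q table

for episode in range(500):                     # 2. for each episode
    state = np.random.randint(n_states - 1)    # a. select a random initial state
    while state != goal_state:                 # b. do ... until the goal state is reached
        # select an action (epsilon-greedy: mostly exploit, sometimes explore)
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)          # perform the action, get the reward
        # update Q(s, a) with the Bellman equation
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))  # greedy policy read off the learned Q table
```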
Summary: Q-Learning
- Q-learning evaluates which action to take based on a Q table that records the value of being in a certain state and taking a certain action in that state.
- The Q table is updated iteratively with the Bellman equation as the agent plays the game.
- Before exploring the environment the Q table gives the same arbitrary fixed value (e.g. zero), but as time goes by it gives a better and better approximation.
Deep Q Network
- Used for problems with a very large number of states, where the Q table of tabular Q-learning can no longer be stored and updated
- For deep Q-learning, a neural network takes the role of the Q table (a network sketch follows below):
- Input: state
- Output: a Q value for each action
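A minimal sketch of such a network in PyTorch, assuming a 4-dimensional state vector and 2 discrete actions (both illustrative); it only shows the mapping from state to per-action Q values, not the full training loop (replay buffer, target network, etc.).

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2   # illustrative sizes, e.g. a CartPole-like task

class QNetwork(nn.Module):
    """Maps a state vector to one Q value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, N_ACTIONS),   # output: Q(s, a) for each action a
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
state = torch.randn(1, STATE_DIM)        # a dummy observation
q_values = q_net(state)                  # shape (1, N_ACTIONS)
action = int(q_values.argmax(dim=1))     # greedy action from the predicted Q values
```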
Drawbacks of deep Q-learning
- Cannot handle continuous action spaces
- Cannot learn stochastic policies, since the policy is computed deterministically from the Q function
Learn the policy directly
- Value function: how good is an action at a given state
- Policy: the agent's behaviour function
Policy Gradient
- Deep Q-learning: approximate Q(s, a) and infer the policy from it
- Policy gradient: estimate the policy directly
Basic idea
- Start out with an arbitrary random policy
- Play the game for a while and sample some actions
- Increase the probability of actions that lead to high reward, and decrease the probability of actions that lead to low reward
Find the best policy: two steps
- Measure the quality of the policy π_θ (where θ represents the network parameters) by defining a score function J(θ)
- Use policy gradient ascent to update the parameters θ to improve J(θ)
Step 1: Measure the policy
- Store all transitions (state, action, reward) from all episodes played with the current policy
- Define the score function as the expected total reward under the policy: J(θ) = E_{π_θ}[Σ_t r_t]
Step 2: Policy gradient ascent
- The idea is to compute the gradient with respect to the current policy's parameters and update the parameters in the direction of greatest increase.
- Policy: π_θ(a|s)
- Objective function: J(θ) = E_{π_θ}[Σ_t r_t]
- Gradient: ∇_θ J(θ) = E_{π_θ}[∇_θ log π_θ(a|s) · R]
- Update: θ ← θ + α ∇_θ J(θ) (a REINFORCE-style sketch follows below)
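A minimal REINFORCE-style sketch of these pieces in PyTorch, assuming the same illustrative 4-dimensional state and 2 actions as above and a hypothetical `env_step` callback; it uses the total episode reward R to scale the gradient and is not the lecture's exact setup.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2                      # illustrative sizes

# Policy pi_theta(a|s): a small network that outputs action probabilities.
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                       nn.Linear(32, N_ACTIONS), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

def play_episode(env_step, max_steps=100):
    """Hypothetical rollout helper: samples actions from the current policy and
    returns log pi(a|s) for each sampled action plus the rewards received."""
    log_probs, rewards, state = [], [], torch.randn(STATE_DIM)
    for _ in range(max_steps):
        dist = torch.distributions.Categorical(policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env_step(state, action.item())  # hypothetical environment call
        rewards.append(reward)
        if done:
            break
    return log_probs, rewards

def reinforce_update(log_probs, rewards):
    """Gradient ascent on J(theta): increase log pi(a|s) for actions in high-reward episodes."""
    episode_return = sum(rewards)                             # total reward R of the episode
    loss = -torch.stack(log_probs).sum() * episode_return     # minus sign: ascent via a minimised loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                          # theta <- theta + alpha * grad J(theta)
```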
Important issues: Model-based RL
- Deep Q-learning and policy gradient are model-free algorithms
- A model predicts what the environment will do next
- A model-based RL agent has two parts:
- a transition model to predict the next state
- a reward model to predict the next immediate reward
Actor Critic
- Policy gradient may converge more slowly than deep Q-learning; it can take longer to train and need more data.
- Actor Critic: a hybrid between value-based learning and policy-based learning (a brief sketch follows below)
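A minimal sketch of the actor-critic idea in PyTorch, assuming the same illustrative 4-dimensional state and 2 actions as above: the actor is a policy network (policy-based), the critic is a value network (value-based), and the critic's TD error replaces the raw episode return when scaling the policy update. The `actor_critic_update` signature is a hypothetical example, not a library API.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2                     # illustrative sizes

actor = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                      nn.Linear(32, N_ACTIONS), nn.Softmax(dim=-1))   # policy pi(a|s)
critic = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                       nn.Linear(32, 1))                              # value V(s)
optimizer = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-2)

def actor_critic_update(state, action, reward, next_state, done, gamma=0.99):
    """One-step update; `state`/`next_state` are float tensors, `action` is an int."""
    value = critic(state)
    next_value = torch.zeros(1) if done else critic(next_state).detach()
    td_error = reward + gamma * next_value - value              # critic's one-step advantage estimate
    dist = torch.distributions.Categorical(actor(state))
    actor_loss = -dist.log_prob(torch.tensor(action)) * td_error.detach()  # policy step scaled by TD error
    critic_loss = td_error.pow(2)                               # push V(s) towards the TD target
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```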
Exploration vs Exploitation
Reinforcement learning is trial-and-error learning; the agent has to discover a good policy from its experience of interacting with the environment.
- Exploration finds more information about the environment
- Exploitation uses the known information to maximise the reward
- ε-greedy exploration: with probability ε take a random action, otherwise take the best known action (a small sketch follows below)
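A minimal ε-greedy sketch, assuming one row of a Q table like the one built earlier; ε and the Q values are illustrative.

```python
import numpy as np

def epsilon_greedy(q_row, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_row))   # exploration: random action
    return int(np.argmax(q_row))               # exploitation: best action so far

print(epsilon_greedy(np.array([0.2, 0.8, 0.1])))  # usually returns 1
```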
Credit assignment problem
- In RL we tend to assume that, because we lost the episode, all the actions we took during it were bad, and we reduce the likelihood of those actions in the future
- However, for most of the episode we may actually have been doing very well
- Credit assignment problem: which action is responsible for the reward we get in the future?
- Sparse reward setting: we only get a reward after the entire episode
Style of play
Learning outcome
- What is policy gradient
- Why do we need policy gradient (vs Q-learning)
- How to find the best policy in policy gradient
- Important issues in RL
- Model-based RL
- Actor Critic
- Exploration and exploitation
- Credit assignment problem
- Style of play in RL