Prologue:
From robot dogs that learn to walk on their own to AlphaGo teaching itself Go, reinforcement learning is behind all of them.
In the classification chapters we analysed which users who clicked an ad went on to actually buy the SUV, and from that derived the profile of likely buyers. But suppose the advertising department hands us several candidate ads: which one should we show? While exploring which ad has the highest click-through rate, we also want the clicks collected during the exploration itself to be as numerous as possible. This is exactly the multi-armed bandit problem: we want to find out which slot machine pays out the most while, at the same time, winning as much as possible.
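In symbols (a minimal sketch, not in the original post; p_i, a_n and r_n are notation introduced here): if ad i is clicked with an unknown probability p_i, ad a_n is shown to user n, and r_n ∈ {0, 1} records whether that user clicked, then the goal over N users is to maximise the expected total number of clicks, or equivalently to minimise the regret relative to always showing the best ad:

$$
\mathrm{Regret}(N) \;=\; N \max_i p_i \;-\; \mathbb{E}\!\left[\sum_{n=1}^{N} r_n\right].
$$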
1. The Random Selection Algorithm
Code:
# Random Selection

# Importing the dataset
dataset = read.csv('Ads_CTR_Optimisation.csv')

# Implementing Random Selection
N = 10000                  # number of rounds (users)
d = 10                     # number of ads
ads_selected = integer(0)  # which ad was shown at each round
total_reward = 0           # total number of clicks collected
for (n in 1:N) {
  ad = sample(1:d, 1)                      # pick one of the d ads uniformly at random
  ads_selected = append(ads_selected, ad)
  reward = dataset[n, ad]                  # 1 if user n would click this ad, 0 otherwise
  total_reward = total_reward + reward
}
# Visualising the results
hist(ads_selected,
     col = 'blue',
     main = 'Histogram of ads selections',
     xlab = 'Ads',
     ylab = 'Number of times each ad was selected')
[Figure: simulated dataset]
The simulated dataset records, for each user, whether they would click each of the ads.
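A quick way to sanity-check the data (a sketch, assuming the CSV loads as a 10000 × 10 data frame of 0/1 values, one column per ad, which is how the code above indexes it):

# Inspect the simulated click data
dim(dataset)       # expected: 10000 rows, 10 ad columns
head(dataset)      # first few simulated users
colMeans(dataset)  # underlying click-through rate of each ad over the whole dataset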
[Figure: random selection results]
With random selection we obtain the number of times each ad was shown and the total number of clicks.
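Those numbers can be read straight from the variables maintained by the loop above (names as in the code):

# Summarise the random strategy
table(ads_selected)   # selections per ad, roughly N/d = 1000 each
total_reward          # total clicks collected
total_reward / N      # overall click-through rate achieved by random selection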
[Figure: histogram of ad selections under random selection]
2. The UCB (Upper Confidence Bound) Algorithm
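As implemented in the code below, the rule at round n is: for each ad i that has been shown N_i(n) times with accumulated reward R_i(n), compute the average reward and a confidence radius, then show the ad with the largest upper bound (an ad never shown yet gets an infinite bound, so every ad is tried at least once):

$$
\bar{r}_i(n) = \frac{R_i(n)}{N_i(n)}, \qquad
\Delta_i(n) = \sqrt{\frac{3}{2}\,\frac{\ln n}{N_i(n)}}, \qquad
a_n = \arg\max_i \left[\, \bar{r}_i(n) + \Delta_i(n) \,\right].
$$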
[Figure: steps of the Upper Confidence Bound algorithm]
Code:
# Upper Confidence Bound

# Importing the dataset
dataset = read.csv('Ads_CTR_Optimisation.csv')

# Implementing UCB
N = 10000                           # number of rounds (users)
d = 10                              # number of ads
ads_selected = integer(0)           # which ad was shown at each round
numbers_of_selections = integer(d)  # N_i(n): how many times each ad has been shown
sums_of_rewards = integer(d)        # R_i(n): total clicks accumulated by each ad
total_reward = 0
for (n in 1:N) {
  ad = 0
  max_upper_bound = 0
  for (i in 1:d) {
    if (numbers_of_selections[i] > 0) {
      average_reward = sums_of_rewards[i] / numbers_of_selections[i]
      delta_i = sqrt(3/2 * log(n) / numbers_of_selections[i])  # confidence radius
      upper_bound = average_reward + delta_i
    } else {
      upper_bound = Inf  # an ad never shown gets an infinite bound, so it is tried first
    }
    if (upper_bound > max_upper_bound) {
      max_upper_bound = upper_bound
      ad = i
    }
  }
  ads_selected = append(ads_selected, ad)
  numbers_of_selections[ad] = numbers_of_selections[ad] + 1
  reward = dataset[n, ad]
  sums_of_rewards[ad] = sums_of_rewards[ad] + reward
  total_reward = total_reward + reward
}
# Visualising the results
hist(ads_selected,
     col = 'blue',
     main = 'Histogram of ads selections',
     xlab = 'Ads',
     ylab = 'Number of times each ad was selected')
[Figure: UCB statistics]
Here we track not only how many times each ad was shown, but also each ad's click-through rate and the overall click-through rate, so we can check whether the ad with the highest click-through rate is indeed the one that was shown the most.
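Those statistics fall straight out of the vectors maintained by the loop above (every ad is shown at least once, so the division is well defined):

# Per-ad statistics after the UCB run
numbers_of_selections                    # how many times each ad was shown
sums_of_rewards / numbers_of_selections  # estimated click-through rate of each ad
total_reward / N                         # overall click-through rate achieved by UCB
which.max(numbers_of_selections)         # the ad that UCB ends up favouring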
Note that append() here simply appends the chosen ad to the integer vector ads_selected at every round.
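A tiny illustration of that behaviour:

# append() grows a plain integer vector one element at a time
v = integer(0)    # empty integer vector
v = append(v, 3)  # now c(3)
v = append(v, 7)  # now c(3, 7)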
[Figure: histogram of ad selections under UCB]