Linear regression is typically used for prediction, for example estimating whether a stock will hit its daily price limit.
Gradient descent is one of the most fundamental optimization algorithms in machine learning.
Function reference:
np.random.normal(loc, scale, size):
returns Gaussian random numbers with mean loc and standard deviation scale.
1. loc (float): the mean of the normal distribution, i.e. the center of the distribution; loc=0 gives a distribution symmetric about the Y axis.
2. scale (float): the standard deviation of the normal distribution, i.e. its width; the larger the scale, the shorter and wider the curve, and the smaller the scale, the taller and narrower it is.
3. size (int or tuple of ints): the shape of the output; defaults to None, which returns a single value.
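For example (a minimal sketch; the printed values are random and will differ on every run):
import numpy as np

# one sample from a normal distribution with mean 0.0 and standard deviation 1.0
x = np.random.normal(0.0, 1.0)

# five samples with mean 0.2 and standard deviation 0.04, returned as an array of shape (5,)
samples = np.random.normal(loc=0.2, scale=0.04, size=5)
print(x)
print(samples)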
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# Generate 100 random data points around the line y = 0.1*x + 0.2
# (weight 0.1, bias 0.2) with a small amount of Gaussian noise
points = 100
vectors = []
for i in range(points):
    x1 = np.random.normal(0.0, 0.66)                     # x drawn from N(0, 0.66)
    y1 = x1 * 0.1 + 0.2 + np.random.normal(0.0, 0.04)    # y = 0.1*x + 0.2 plus noise N(0, 0.04)
    vectors.append([x1, y1])
x_data = [v[0] for v in vectors]
y_data = [v[1] for v in vectors]

plt.plot(x_data, y_data, 'r*', label="Original data")
# plt.title('Interesting Graph', color='blue')
plt.legend()
plt.show()
The resulting plot looks like this:
(figure: scatter plot of the 100 generated data points)
Next we train the model to obtain the fitted line; the closer it gets to y = 0.1x + 0.2, the better.
w is a single random value drawn uniformly from [-1, 1]
b is an array initialized to [0]
loss is the loss function: the mean squared error
Gradient descent is a first-order optimization algorithm, also commonly called the method of steepest descent. To find a local minimum of a function with gradient descent, we iteratively step from the current point in the direction opposite to the gradient (or an approximate gradient), by a prescribed step size. Gradient descent therefore lets us find a local minimum or the minimum of a function, and it is one of the most commonly used methods for n-dimensional optimization problems.
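To make the update rule concrete, here is a minimal plain-NumPy sketch of gradient descent for this exact least-squares problem (this is an illustration, not the TensorFlow code used below; the learning_rate and steps values are assumed defaults, not tuned):
import numpy as np

def gradient_descent(x, y, learning_rate=0.5, steps=20):
    # minimize the mean squared error: loss = mean((w*x + b - y)^2)
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        err = w * x + b - y                  # prediction error for every point
        grad_w = 2.0 / n * np.sum(err * x)   # d(loss)/dw
        grad_b = 2.0 / n * np.sum(err)       # d(loss)/db
        w -= learning_rate * grad_w          # step against the gradient
        b -= learning_rate * grad_b
    return w, b

# usage: w, b = gradient_descent(np.array(x_data), np.array(y_data))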
References on choosing the gradient descent learning rate:
https://segmentfault.com/a/1190000011994447
https://blog.csdn.net/william_hehe/article/details/78658421?locationNum=9&fps=1
https://vimsky.com/article/3788.html
https://cloud.tencent.com/developer/ask/116497/answer/209443
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))   # weight, initialized uniformly in [-1, 1]
b = tf.Variable(tf.zeros([1]))                        # bias, initialized to 0
y = w * x_data + b

# loss function: mean squared error
loss = tf.reduce_mean(tf.square(y - y_data))
# gradient descent optimizer with a fixed learning rate of 0.5
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for step in range(20):
    sess.run(train)
    print("step:%d,loss:%f,weight:%f,bias:%f" % (step, sess.run(loss), sess.run(w), sess.run(b)))

plt.plot(x_data, y_data, 'r*', label="Original data")
plt.title('Interesting Graph', color='blue')
plt.plot(x_data, x_data * sess.run(w) + sess.run(b), label="Fitted line")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
sess.close()
The resulting plot looks like this:
(figure: original data with the fitted line overlaid)
Below is one way to use an adaptive (decaying) learning rate:
The main function used is:
learning_rate = tf.train.exponential_decay(
    0.1,           # Base learning rate: the initial learning rate
    global_steps,  # Current index into the dataset: the iteration counter
    10,            # Decay steps
    0.95,          # Decay rate: together these mean the base rate is multiplied by 0.95 every 10 iterations
    staircase=True)
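Under the hood, exponential_decay computes base_rate * decay_rate ** (global_step / decay_steps); with staircase=True the exponent is truncated to an integer, so the rate drops in discrete steps instead of continuously. A small sketch of that arithmetic in plain Python (outside the TF graph); the printed values match the training log further below:
def decayed_rate(base_rate, global_step, decay_steps, decay_rate, staircase=True):
    # same formula tf.train.exponential_decay uses
    exponent = global_step / decay_steps
    if staircase:
        exponent = global_step // decay_steps   # integer division -> stepwise drops
    return base_rate * decay_rate ** exponent

print(decayed_rate(0.1, 5, 10, 0.95))    # 0.1      (still within the first 10 steps)
print(decayed_rate(0.1, 10, 10, 0.95))   # 0.095    (after 10 steps)
print(decayed_rate(0.1, 20, 10, 0.95))   # 0.09025  (after 20 steps)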
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = w * x_data + b
loss = tf.reduce_mean(tf.square(y - y_data))

# global step counter, incremented by the optimizer on every training op
global_steps = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.1,           # Base learning rate.
    global_steps,  # Current index into the dataset.
    10,            # Decay steps.
    0.95,          # Decay rate.
    staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train = optimizer.minimize(loss, global_step=global_steps)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for step in range(40):
    sess.run(train)
    print("step:%d,loss:%f,weight:%f,bias:%f" % (step, sess.run(loss), sess.run(w), sess.run(b)))
    print(sess.run(learning_rate))

plt.plot(x_data, y_data, 'r*', label="Original data")
plt.title('Interesting Graph', color='blue')
plt.plot(x_data, x_data * sess.run(w) + sess.run(b), label="Fitted line")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
sess.close()
The printed output is as follows:
step:0,loss:0.063258,weight:0.449633,bias:0.038551
0.1
step:1,loss:0.050180,weight:0.428209,bias:0.069480
0.1
step:2,loss:0.040675,weight:0.408036,bias:0.094305
0.1
step:3,loss:0.033597,weight:0.389057,bias:0.114242
0.1
step:4,loss:0.028194,weight:0.371216,bias:0.130264
0.1
step:5,loss:0.023971,weight:0.354453,bias:0.143150
0.1
step:6,loss:0.020598,weight:0.338712,bias:0.153524
0.1
step:7,loss:0.017851,weight:0.323938,bias:0.161883
0.1
step:8,loss:0.015578,weight:0.310076,bias:0.168626
0.1
step:9,loss:0.013672,weight:0.297074,bias:0.174075
0.095
step:10,loss:0.012133,weight:0.285492,bias:0.178262
0.095
step:11,loss:0.010808,weight:0.274598,bias:0.181697
0.095
step:12,loss:0.009660,weight:0.264353,bias:0.184518
0.095
step:13,loss:0.008661,weight:0.254720,bias:0.186841
0.095
step:14,loss:0.007788,weight:0.245663,bias:0.188758
0.095
step:15,loss:0.007024,weight:0.237150,bias:0.190343
0.095
step:16,loss:0.006353,weight:0.229147,bias:0.191658
0.095
step:17,loss:0.005764,weight:0.221626,bias:0.192752
0.095
step:18,loss:0.005246,weight:0.214558,bias:0.193666
0.095
step:19,loss:0.004790,weight:0.207915,bias:0.194432
0.09025
step:20,loss:0.004407,weight:0.201985,bias:0.195045
0.09025
step:21,loss:0.004068,weight:0.196395,bias:0.195567
0.09025
step:22,loss:0.003767,weight:0.191126,bias:0.196014
0.09025
step:23,loss:0.003500,weight:0.186159,bias:0.196399
0.09025
step:24,loss:0.003263,weight:0.181478,bias:0.196732
0.09025
step:25,loss:0.003053,weight:0.177065,bias:0.197020
0.09025
step:26,loss:0.002866,weight:0.172906,bias:0.197272
0.09025
step:27,loss:0.002700,weight:0.168986,bias:0.197493
0.09025
step:28,loss:0.002553,weight:0.165291,bias:0.197688
0.09025
step:29,loss:0.002422,weight:0.161809,bias:0.197860
0.0857375
step:30,loss:0.002312,weight:0.158691,bias:0.198005
0.0857375
step:31,loss:0.002213,weight:0.155743,bias:0.198136
0.0857375
step:32,loss:0.002125,weight:0.152957,bias:0.198254
0.0857375
step:33,loss:0.002046,weight:0.150323,bias:0.198361
0.0857375
step:34,loss:0.001975,weight:0.147833,bias:0.198458
0.0857375
step:35,loss:0.001912,weight:0.145479,bias:0.198547
0.0857375
step:36,loss:0.001856,weight:0.143254,bias:0.198628
0.0857375
step:37,loss:0.001806,weight:0.141150,bias:0.198703
0.0857375
step:38,loss:0.001761,weight:0.139162,bias:0.198772
0.0857375
step:39,loss:0.001721,weight:0.137282,bias:0.198835
0.08145062