Looking at the assignment more carefully, what actually needs to be implemented is just the part circled in the red box, namely:
- Plotting the target model
- Normalizing the input features (a minimal sketch is given right after this list)
These two are relatively simple; the harder ones are the following three parts:
- The cost function
- Gradient descent
- Least squares (the normal equation)
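Feature normalization does not appear in the snippets below, so here is a minimal sketch of the usual mean/standard-deviation normalization. The function name feature_normalize and its return values are my own choice for illustration, not part of the assignment skeleton.

import numpy as np

def feature_normalize(X):
    # Subtract the per-column mean and divide by the per-column standard
    # deviation so every feature ends up on a comparable scale.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    X_norm = (X - mu) / sigma
    return X_norm, mu, sigma

Returning mu and sigma makes it possible to apply exactly the same transform to any new input before making a prediction.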
Code analysis
Cost function
import numpy as np

def compute_cost(x, y, theta):
    # Mean squared error cost for single-variable linear regression.
    m = len(y)
    J = np.sum(np.square(x.dot(theta) - y)) / (2.0 * m)
    return J

def compute_cost_multi(X, y, theta):
    # The same cost written as a matrix product, so it also covers the
    # multi-feature case.
    m = len(y)
    diff = X.dot(theta) - y
    J = 1.0 / (2 * m) * diff.T.dot(diff)
    return J
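As a quick sanity check, the cost function can be evaluated on a tiny hand-made dataset; the numbers below are made up purely for illustration and are not the assignment's data.

# Made-up example: y = 1 + 2*x, so theta = [1, 2] should give zero cost.
x = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # first column is the intercept term
y = np.array([1.0, 3.0, 5.0])

print(compute_cost(x, y, np.array([1.0, 2.0])))   # -> 0.0
print(compute_cost(x, y, np.zeros(2)))            # -> (1 + 9 + 25) / 6 = 5.8333...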
Gradient descent
def gradient_descent(X, y, theta, alpha, num_iters):
    # Batch gradient descent: theta := theta - alpha/m * X^T (X*theta - y).
    m = len(y)
    J_history = np.zeros(num_iters)
    for i in range(num_iters):
        theta = theta - alpha / m * X.T.dot(X.dot(theta) - y)
        J_history[i] = compute_cost(X, y, theta)
    return theta, J_history

def gradient_descent_multi(X, y, theta, alpha, num_iters):
    # Identical update rule; only the cost function call differs.
    m = len(y)
    J_history = np.zeros(num_iters)
    for i in range(num_iters):
        theta = theta - alpha / m * X.T.dot(X.dot(theta) - y)
        J_history[i] = compute_cost_multi(X, y, theta)
    return theta, J_history
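Running gradient descent on the same made-up data should drive theta toward [1, 2]; the learning rate and iteration count here are just values I picked for the sketch, not the assignment's settings.

theta, J_history = gradient_descent(x, y, np.zeros(2), alpha=0.1, num_iters=1500)
print(theta)           # close to [1., 2.]
print(J_history[-1])   # the cost should have dropped close to 0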
Least squares (normal equation)
def normal_eqn(X, y):
    # Closed-form solution: theta = pinv(X^T X) X^T y.
    theta = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)
    return theta
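As a cross-check, the closed-form solution on the same made-up data gives the answer directly, with no learning rate or iteration count to tune; because it uses pinv, it also behaves sensibly when X^T X is singular.

print(normal_eqn(x, y))   # -> [1., 2.]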
Having gone through this, my takeaway on this kind of implementation work is:
- First learn matrix arithmetic and the basic matrix operations available in Python
- Understand the principle first and figure out what actually needs to be done
- Only then write the concrete implementation