Deeplearning.ai Course-2 Week-1

By _刘某人_ | Published 2017-09-27 21:41

Preface:

This series follows Andrew Ng's deeplearning.ai video course and records how the Programming Assignments are implemented. Compared with Stanford's CS231n, Andrew's videos are easier to follow and well suited for beginners who want to study deep learning systematically.

This assignment covers gradient checking, a method for catching gradient-computation problems early: it verifies that the gradients computed by backpropagation are correct, so the rest of the program can be trusted.

The core idea of gradient checking is simply the definition of the derivative from calculus: approximate the gradient numerically with a two-sided (centered) difference and check whether it agrees with the gradient the program computes:

    gradapprox = (J(θ + ε) − J(θ − ε)) / (2ε)
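As a quick numerical illustration of this formula (not part of the assignment), here is a minimal sketch with a hypothetical function f(x) = x**2, whose exact derivative at x = 3 is 6:

    import numpy as np

    # Hypothetical example: f(x) = x**2, so f'(3) = 6 exactly.
    f = lambda x: x ** 2
    epsilon = 1e-7
    x = 3.0
    gradapprox = (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)
    print(gradapprox)   # ~6.0, matching the analytic derivative

The two-sided difference has error on the order of epsilon**2, which is why it is preferred over the one-sided version (J(θ + ε) − J(θ)) / ε.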

1.1 1-dimensional gradient checking

The main steps of gradient checking are:

1. Compute J_plus = J(θ + ε) and J_minus = J(θ − ε) with forward propagation.
2. Approximate the gradient: gradapprox = (J_plus − J_minus) / (2ε).
3. Compute the gradient grad with backward propagation.
4. Compare the two with the relative difference
   difference = ||grad − gradapprox||₂ / (||grad||₂ + ||gradapprox||₂);
   if it is below roughly 1e-7, the gradient is almost certainly correct.

The code is as follows:

    import numpy as np

    def forward_propagation(x, theta):
        # Forward pass for the toy model J(theta) = theta * x
        J = theta * x
        return J

    def backward_propagation(x, theta):
        # Analytic derivative of J = theta * x with respect to theta
        dtheta = x
        return dtheta

    def gradient_check(x, theta, epsilon=1e-7):
        # Approximate the gradient with a two-sided difference
        thetaplus = theta + epsilon
        thetaminus = theta - epsilon
        J_plus = forward_propagation(x, thetaplus)
        J_minus = forward_propagation(x, thetaminus)
        gradapprox = (J_plus - J_minus) / (2 * epsilon)

        # Analytic gradient from backward propagation
        grad = backward_propagation(x, theta)

        # Relative difference between the two gradients
        numerator = np.linalg.norm(grad - gradapprox)
        denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
        difference = numerator / denominator

        if difference < 1e-7:
            print("The gradient is correct!")
        else:
            print("The gradient is wrong!")

        return difference
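A minimal usage sketch (the values x = 2, theta = 4 follow the assignment's test case):

    x, theta = 2, 4
    # Prints "The gradient is correct!"; the difference comes out
    # on the order of 1e-10.
    difference = gradient_check(x, theta)
    print("difference = " + str(difference))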

1.2 N-dimensional gradient checking

For a real network, θ is no longer a scalar: all the parameters W1, b1, ..., W3, b3 are flattened into one long vector, and the same two-sided difference is applied to each component in turn. The example below is a 3-layer network, LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID:

    # sigmoid, relu and the vector/dictionary helpers below are
    # provided with the assignment (in gc_utils.py)
    from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector

    def forward_propagation_n(X, Y, parameters):
        m = X.shape[1]

        # Retrieve the parameters of the 3-layer network
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]
        W3 = parameters["W3"]
        b3 = parameters["b3"]

        # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
        Z1 = np.dot(W1, X) + b1
        A1 = relu(Z1)
        Z2 = np.dot(W2, A1) + b2
        A2 = relu(Z2)
        Z3 = np.dot(W3, A2) + b3
        A3 = sigmoid(Z3)

        # Cross-entropy cost
        logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
        cost = 1. / m * np.sum(logprobs)

        cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
        return cost, cache

    def backward_propagation_n(X, Y, cache):
        m = X.shape[1]
        (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

        dZ3 = A3 - Y
        dW3 = 1. / m * np.dot(dZ3, A2.T)
        db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

        dA2 = np.dot(W3.T, dZ3)
        dZ2 = np.multiply(dA2, np.int64(A2 > 0))   # ReLU derivative
        dW2 = 1. / m * np.dot(dZ2, A1.T)
        db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

        dA1 = np.dot(W2.T, dZ2)
        dZ1 = np.multiply(dA1, np.int64(A1 > 0))   # ReLU derivative
        dW1 = 1. / m * np.dot(dZ1, X.T)
        db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

        gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                     "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                     "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
        return gradients
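gradient_check_n below relies on three helpers from the assignment's gc_utils.py that flatten the parameter and gradient dictionaries into column vectors and back. They are not reproduced in this post; a minimal sketch of what they do (the hardcoded shapes follow the assignment's 4 -> 5 -> 3 -> 1 test network and are an assumption on my part) might look like:

    def dictionary_to_vector(parameters):
        # Stack all parameters into one (n, 1) column vector
        keys, theta = [], None
        for key in ["W1", "b1", "W2", "b2", "W3", "b3"]:
            new_vector = np.reshape(parameters[key], (-1, 1))
            keys += [key] * new_vector.shape[0]
            theta = new_vector if theta is None else np.concatenate((theta, new_vector), axis=0)
        return theta, keys

    def gradients_to_vector(gradients):
        # Same flattening, applied to the gradient dictionary
        theta = None
        for key in ["dW1", "db1", "dW2", "db2", "dW3", "db3"]:
            new_vector = np.reshape(gradients[key], (-1, 1))
            theta = new_vector if theta is None else np.concatenate((theta, new_vector), axis=0)
        return theta

    def vector_to_dictionary(theta):
        # Unflatten back into the shapes of the (assumed) test network
        parameters = {}
        parameters["W1"] = theta[:20].reshape((5, 4))
        parameters["b1"] = theta[20:25].reshape((5, 1))
        parameters["W2"] = theta[25:40].reshape((3, 5))
        parameters["b2"] = theta[40:43].reshape((3, 1))
        parameters["W3"] = theta[43:46].reshape((1, 3))
        parameters["b3"] = theta[46:47].reshape((1, 1))
        return parameters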

    def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
        # Flatten parameters and gradients into column vectors
        parameters_values, _ = dictionary_to_vector(parameters)
        grad = gradients_to_vector(gradients)
        num_parameters = parameters_values.shape[0]
        J_plus = np.zeros((num_parameters, 1))
        J_minus = np.zeros((num_parameters, 1))
        gradapprox = np.zeros((num_parameters, 1))

        # Two-sided difference for each parameter, one at a time
        for i in range(num_parameters):
            thetaplus = np.copy(parameters_values)
            thetaplus[i, 0] = thetaplus[i, 0] + epsilon
            J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))

            thetaminus = np.copy(parameters_values)
            thetaminus[i, 0] = thetaminus[i, 0] - epsilon
            J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))

            gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

        # Relative difference between analytic and numeric gradients
        numerator = np.linalg.norm(grad - gradapprox)
        denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
        difference = numerator / denominator

        if difference > 1e-7:
            print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
        else:
            print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

        return difference
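A minimal end-to-end usage sketch (the random data and small-scale initialization are my own choices for illustration; the assignment loads its test case from testCases.py instead):

    np.random.seed(1)
    X = np.random.randn(4, 3)          # 3 examples with 4 features each
    Y = np.array([[1, 1, 0]])          # binary labels
    parameters = {
        "W1": np.random.randn(5, 4) * 0.1, "b1": np.zeros((5, 1)),
        "W2": np.random.randn(3, 5) * 0.1, "b2": np.zeros((3, 1)),
        "W3": np.random.randn(1, 3) * 0.1, "b3": np.zeros((1, 1)),
    }

    cost, cache = forward_propagation_n(X, Y, parameters)
    gradients = backward_propagation_n(X, Y, cache)
    # With a correct backward pass, difference should come out below 1e-7
    difference = gradient_check_n(parameters, gradients, X, Y)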

Finally, I attach the score I received for this assignment, showing that the program works correctly. If you find this article useful, feel free to leave a tip; I will keep posting writeups of the Deeplearning.ai assignments!
