
Gradient Descent Intuition

Author: 刷刷人工智能 | Published 2016-12-22 18:07


In this video we explored the scenario where we used one parameter $\theta_1$ and plotted its cost function to implement gradient descent. Our formula for a single parameter was:

Repeat until convergence:

$$\theta_1 := \theta_1 - \alpha \frac{d}{d\theta_1} J(\theta_1)$$

Regardless of the slope's sign for $\frac{d}{d\theta_1} J(\theta_1)$, $\theta_1$ eventually converges to its minimum value. The following graph shows that when the slope is negative, the value of $\theta_1$ increases, and when it is positive, the value of $\theta_1$ decreases.
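To make the sign argument concrete, here is a minimal sketch of the update rule in Python. The convex cost $J(\theta_1) = (\theta_1 - 2)^2$, its derivative, and the `gradient_descent` helper are illustrative assumptions, not part of the original notes:

```python
# A minimal sketch of single-parameter gradient descent.
# J(theta1) = (theta1 - 2)**2 is an assumed toy cost, chosen
# so the minimum sits at theta1 = 2.

def J_prime(theta1):
    """Derivative d/dtheta1 of the assumed cost (theta1 - 2)**2."""
    return 2.0 * (theta1 - 2.0)

def gradient_descent(theta1, alpha=0.1, iters=25):
    """Repeat the update theta1 := theta1 - alpha * J'(theta1)."""
    for _ in range(iters):
        theta1 = theta1 - alpha * J_prime(theta1)
    return theta1

# Negative slope left of the minimum: theta1 increases toward 2.
print(gradient_descent(theta1=0.0))   # ~1.99 (approaches from below)
# Positive slope right of the minimum: theta1 decreases toward 2.
print(gradient_descent(theta1=5.0))   # ~2.01 (approaches from above)
```

Subtracting $\alpha$ times a negative slope pushes $\theta_1$ up; subtracting $\alpha$ times a positive slope pushes it down, which is exactly the behavior the graph describes.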

On a side note, we should adjust our parameter $\alpha$ to ensure that the gradient descent algorithm converges in a reasonable time. Failure to converge, or taking too long to reach the minimum, implies that our step size is wrong.
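As a rough illustration of how the choice of $\alpha$ matters, the sweep below reuses the same assumed toy cost from the sketch above. For $J(\theta_1) = (\theta_1 - 2)^2$ each update multiplies the distance to the minimum by $(1 - 2\alpha)$, so the values chosen here show a slow crawl, quick convergence, and divergence:

```python
# Sweeping the learning rate on the same assumed toy cost.
# Each update multiplies the distance to the minimum by (1 - 2*alpha),
# so the iterates contract only when 0 < alpha < 1.
for alpha in (0.01, 0.1, 1.1):
    theta1 = 0.0
    for _ in range(50):
        theta1 -= alpha * 2.0 * (theta1 - 2.0)
    print(f"alpha={alpha}: theta1={theta1:.4g}")
# alpha=0.01 crawls toward 2, alpha=0.1 is essentially converged,
# and alpha=1.1 overshoots farther on every step and diverges.
```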

How does gradient descent converge with a fixed step size $\alpha$?

The intuition behind the convergence is that $\frac{d}{d\theta_1} J(\theta_1)$ approaches 0 as we approach the bottom of our convex function. At the minimum, the derivative will always be 0, and thus we get:

$$\theta_1 := \theta_1 - \alpha \cdot 0 = \theta_1$$
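One way to see this numerically: with the same assumed toy cost, the actual step taken each iteration is $\alpha \cdot \frac{d}{d\theta_1} J(\theta_1)$, which decays on its own as $\theta_1$ approaches the minimum, even though $\alpha$ itself never changes:

```python
# With a fixed alpha, the step alpha * J'(theta1) still shrinks on
# its own as theta1 nears the minimum of the assumed cost above.
theta1, alpha = 0.0, 0.1
for i in range(10):
    step = alpha * 2.0 * (theta1 - 2.0)   # alpha * d/dtheta1 J(theta1)
    theta1 -= step
    print(f"iter {i}: step={step:+.5f}  theta1={theta1:.5f}")
# The steps decay geometrically (each is 0.8x the last), so the
# algorithm settles at the minimum without ever shrinking alpha.
```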
