17. Large scale machine learning

Author: 玄语梨落 | Published 2021-01-30 10:06


    Learning with large datasets

    Stochastic gradient descent

    Batch gradient descent:

    J_{train}(\theta)=\frac{1}{2m}\sum\limits_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2
    Repeat {
    \theta_j:=\theta_j-\alpha\frac{1}{m}\sum\limits_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}
    }
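    As a concrete illustration, here is a minimal NumPy sketch of batch gradient descent for linear regression. The names (X, y, alpha, num_iters) are illustrative, and X is assumed to already include a bias column.

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, num_iters=100):
    """Batch gradient descent for linear regression (sketch)."""
    m, n = X.shape                                 # m examples, n features (bias column included)
    theta = np.zeros(n)
    for _ in range(num_iters):
        predictions = X @ theta                    # h_theta(x^(i)) for every i
        gradient = X.T @ (predictions - y) / m     # (1/m) * sum_i (h(x^(i)) - y^(i)) * x_j^(i)
        theta -= alpha * gradient                  # simultaneous update of all theta_j
    return theta
```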

    Stochastic gradient descent:

    cost(\theta,(x^{(i)},y^{(i)}))=\frac{1}{2}(h_\theta(x^{(i)})-y^{(i)})^2
    J_{train}(\theta)=\frac{1}{m}\sum\limits_{i=1}^m cost(\theta,(x^{(i)},y^{(i)}))

    1. Randomly shuffle the dataset
    2. Repeat { for i:=1,\dots,m { \theta_j:=\theta_j-\alpha(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)} (for every j=0,\dots,n) } }, as sketched below
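    A minimal sketch of the two steps above, again for linear regression with illustrative names; note that each update uses a single example.

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, num_passes=1):
    """Stochastic gradient descent for linear regression (sketch)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_passes):                 # typically 1 to 10 passes over the data
        for i in np.random.permutation(m):      # step 1: randomly shuffle the dataset
            error = X[i] @ theta - y[i]         # h_theta(x^(i)) - y^(i)
            theta -= alpha * error * X[i]       # step 2: update theta using example i only
    return theta
```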

    Mini-batch gradient descent

    Mini-batch gradient descent: Use b examples in each iteration.

    b = mini-batch size

    E.g. with b=10: \theta_j:=\theta_j-\alpha\frac{1}{10}\sum_{k=i}^{i+9}(h_\theta(x^{(k)})-y^{(k)})x_j^{(k)}
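    A NumPy sketch of the same mini-batch update, assuming examples are taken b at a time in order; the names are illustrative.

```python
import numpy as np

def mini_batch_gradient_descent(X, y, alpha=0.01, b=10, num_passes=1):
    """Mini-batch gradient descent for linear regression (sketch)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_passes):
        for start in range(0, m, b):                       # i = 1, 11, 21, ... in the notes' 1-indexed form
            Xb, yb = X[start:start + b], y[start:start + b]
            gradient = Xb.T @ (Xb @ theta - yb) / len(yb)  # (1/b) * sum over the mini-batch
            theta -= alpha * gradient
    return theta
```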

    Stochastic gradient descent convergence

    Checking for convergence:

    • Batch gradient descent: plot J_{train}(\theta) as a function of the number of iterations of gradient descent.
    • Stochastic gradient descent: every 1000 iterations (say), plot cost(\theta,(x^{(i)},y^{(i)})) averaged over the last 1000 examples processed by the algorithm.

    For stochastic gradient descent: the learning rate \alpha is typically held constant. Can slowly decrease \alpha over time if we want \theta to converge (e.g. \alpha = \frac{const1}{iterationNumber + const2}).
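    A sketch of this convergence check: accumulate cost(\theta,(x^{(i)},y^{(i)})) during one pass of stochastic gradient descent and plot the average over each block of 1000 examples; const1 and const2 in the decaying learning rate are illustrative constants.

```python
import numpy as np
import matplotlib.pyplot as plt

def sgd_with_convergence_plot(X, y, const1=1.0, const2=50.0):
    """One pass of SGD that plots the averaged cost every 1000 examples (sketch)."""
    m, n = X.shape
    theta = np.zeros(n)
    recent_costs, averaged_costs = [], []
    for t, i in enumerate(np.random.permutation(m), start=1):
        error = X[i] @ theta - y[i]
        recent_costs.append(0.5 * error ** 2)   # cost(theta, (x^(i), y^(i))), measured before the update
        alpha = const1 / (t + const2)           # optional: slowly decrease alpha over time
        theta -= alpha * error * X[i]
        if t % 1000 == 0:                       # every 1000 iterations, record the running average
            averaged_costs.append(np.mean(recent_costs))
            recent_costs = []
    plt.plot(averaged_costs)
    plt.xlabel("blocks of 1000 examples")
    plt.ylabel("averaged cost")
    plt.show()
    return theta
```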

    Online learning

    Learn from one example at a time as it arrives, then discard it.

    Example: predicting CTR (click-through rate), i.e. the probability that a user clicks on a shown link or ad.
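    A sketch of online logistic regression for CTR-style prediction; example_stream is a hypothetical iterator of (features, clicked) pairs and is not defined in the notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_ctr_learner(example_stream, n_features, alpha=0.1):
    """Online logistic regression: learn from each example once, then discard it (sketch)."""
    theta = np.zeros(n_features)
    for x, y in example_stream:                # one (ad/user features, clicked 0 or 1) pair at a time
        prediction = sigmoid(x @ theta)        # estimated probability of a click
        theta -= alpha * (prediction - y) * x  # update on this single example
    return theta
```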

    Map-reduce and data parallelism

    Divide the work into many parts and compute them in parallel on different machines.

    Map-reduce and summation over the training set:

    Many learning algorithms can be expressed as computing sums of functions over the training set.

    Multi-core machines: the same idea applies to the cores of a single machine, splitting the training set across cores and combining the partial sums.
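    A sketch of the map-reduce idea applied to the gradient sum: split the training set into chunks, let each worker compute a partial sum, then combine the results. Python's multiprocessing stands in here for separate machines or cores; on some platforms the call needs to run under an `if __name__ == "__main__":` guard.

```python
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Map step: the gradient sum over one chunk of the training set."""
    X_chunk, y_chunk, theta = args
    return X_chunk.T @ (X_chunk @ theta - y_chunk)

def mapreduce_gradient(X, y, theta, num_workers=4):
    """Reduce step: combine the partial sums from all workers (sketch)."""
    chunks = list(zip(np.array_split(X, num_workers),
                      np.array_split(y, num_workers),
                      [theta] * num_workers))
    with Pool(num_workers) as pool:
        partial_sums = pool.map(partial_gradient, chunks)
    return sum(partial_sums) / len(y)          # (1/m) * full gradient sum
```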
