Different regularization

Author: 阿o醒 | Published 2016-12-11 16:32, read 24 times

    Different regularization methods have different effects on the learning process.

    For example,

    L2 regularization penalizes large weight values: the penalty grows with the square of each weight.
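A minimal numpy sketch of the L2 penalty described above; the weights, loss value, and regularization strength `lam` are illustrative assumptions, not from the source.

```python
import numpy as np

# Hypothetical weights and data loss (illustrative values).
w = np.array([0.5, -2.0, 3.0])
data_loss = 1.0
lam = 0.01  # regularization strength (assumed)

# L2 regularization adds lambda * sum(w^2) to the loss, so large
# weights are penalized quadratically.
l2_penalty = lam * np.sum(w ** 2)
total_loss = data_loss + l2_penalty

# The gradient contribution is 2 * lambda * w: each weight is pulled
# toward zero in proportion to its own magnitude ("weight decay").
l2_grad = 2 * lam * w
```

Because the pull is proportional to the weight itself, L2 shrinks all weights toward zero but rarely makes them exactly zero.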

    L1 regularization penalizes weights that are not exactly zero, which encourages sparse solutions.
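The contrast with L2 can be sketched the same way; again the weights and `lam` are assumed example values.

```python
import numpy as np

w = np.array([0.5, -2.0, 0.0])
lam = 0.01  # regularization strength (assumed)

# L1 regularization adds lambda * sum(|w|): every nonzero weight pays
# a penalty, so the optimum tends to set many weights exactly to zero.
l1_penalty = lam * np.sum(np.abs(w))

# The (sub)gradient is lambda * sign(w): a constant pull toward zero
# regardless of magnitude, unlike L2's proportional pull.
l1_grad = lam * np.sign(w)
```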

    Adding noise to the weights during learning encourages the learned hidden representations to take extreme values.
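A sketch of the noisy forward pass for a single logistic layer; the shapes and noise scale `sigma` are illustrative assumptions. The intuition is that to keep a unit's output stable under weight noise, learning must drive its input far from zero, pushing the activation toward 0 or 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer: batch of 4 inputs, 3 -> 5 logistic units.
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 5))
sigma = 0.1  # weight-noise standard deviation (assumed)

# On each training forward pass, Gaussian noise is added to the
# weights before computing the activations.
W_noisy = W + sigma * rng.normal(size=W.shape)
h = sigmoid(x @ W_noisy)
```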

    Sampling the hidden representations regularizes the network: forcing the hidden representation to be binary during the forward pass limits the modeling capacity of the network.
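The sampling step above can be sketched as treating each logistic unit as a Bernoulli variable; the pre-activation values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical hidden pre-activations (illustrative values).
logits = np.array([-3.0, -0.5, 0.5, 3.0])
p = sigmoid(logits)  # each unit's probability of being "on"

# Forward pass: sample each unit as a Bernoulli variable, so the
# representation handed to the next layer is strictly binary and
# carries at most one bit per unit.
h = (rng.random(p.shape) < p).astype(float)
```

Limiting each unit to a single bit is what caps the layer's modeling capacity and acts as the regularizer.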

Source: https://www.haomeiwen.com/subject/zvjpmttx.html