The Dropout Method

Author: LonnieQ | Published 2019-12-31 21:26

    During the training of a deep network, we can mitigate overfitting by randomly setting a fraction of the entries of the hidden-layer matrix h to 0 and rescaling the surviving entries. Let \xi_i be a random variable that equals 0 with probability p (the dropout probability) and 1 otherwise. Dropout then computes:
    h'_i = \frac{\xi_i}{1-p}h_i
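    Because E[\xi_i] = 1 - p, dividing by 1 - p keeps the expected activation unchanged, so no extra rescaling is needed at test time:
    E[h'_i] = \frac{E[\xi_i]}{1-p}h_i = \frac{1-p}{1-p}h_i = h_i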
    An implementation in MXNet:

    from mxnet import nd

    def dropout(X, dropout_prob):
        assert 0 <= dropout_prob <= 1
        keep_prob = 1 - dropout_prob
        # Drop everything when the dropout probability is 1.
        if keep_prob == 0:
            return X.zeros_like()
        # Keep each element independently with probability keep_prob.
        mask = nd.random.uniform(0, 1, X.shape) < keep_prob
        # Rescale the kept elements so the expectation is unchanged.
        return mask * X / keep_prob
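
    As a quick sanity check (a minimal sketch; the exact value varies between runs because the mask is random), the mean of a large all-ones matrix should stay close to 1 after dropout, since the surviving entries are scaled up by 1/keep_prob:

    Y = nd.ones((1000, 1000))
    print(dropout(Y, 0.5).mean().asscalar())  # approximately 1.0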
    

    Next, let's test the dropout function:

    X = nd.arange(16).reshape((2, 8))
    print(X) 
    

    Output:

     [[ 0.  1.  2.  3.  4.  5.  6.  7.]
     [ 8.  9. 10. 11. 12. 13. 14. 15.]]
    <NDArray 2x8 @cpu(0)>
    

    No dropout (dropout_prob = 0):

    print(dropout(X, 0))
    

    Output:

    [[ 0.  1.  2.  3.  4.  5.  6.  7.]
     [ 8.  9. 10. 11. 12. 13. 14. 15.]]
    <NDArray 2x8 @cpu(0)>
    

    Partial dropout (dropout_prob = 0.5):

    print(dropout(X, 0.5))
    

    Output (the mask is random, so results vary between runs; note that the surviving entries are doubled):

    [[ 0.  2.  4.  6.  0.  0.  0. 14.]
     [ 0. 18.  0.  0. 24. 26. 28.  0.]]
    <NDArray 2x8 @cpu(0)>
    

    Full dropout (dropout_prob = 1):

    print(dropout(X, 1))
    

    Output:

    [[0. 0. 0. 0. 0. 0. 0. 0.]
     [0. 0. 0. 0. 0. 0. 0. 0.]]
    <NDArray 2x8 @cpu(0)>
    

    A custom Dropout layer

    The code:

    from mxnet import nd
    from mxnet.gluon import nn

    class Dropout(nn.Block):
        def __init__(self, rate, **kwargs):
            super(Dropout, self).__init__(**kwargs)
            assert 0 <= rate <= 1
            self.rate = rate

        def forward(self, x):
            keep_rate = 1 - self.rate
            # Drop everything when the dropout rate is 1.
            if keep_rate == 0:
                return x.zeros_like()
            # Keep each element independently with probability keep_rate,
            # then rescale so the expectation is unchanged.
            mask = nd.random.uniform(0, 1, x.shape) < keep_rate
            return mask * x / keep_rate
    

    Usage:

    X = nd.arange(16).reshape((2, 8))
    dropout = Dropout(rate=1)
    print(dropout(X))
    dropout = Dropout(rate=0.5)
    print(dropout(X))
    dropout = Dropout(rate=0)
    print(dropout(X))
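
    The block can be dropped into a Gluon network like any built-in layer. A minimal sketch (the layer sizes and dropout rate below are arbitrary choices for illustration, not part of the original post):

    net = nn.Sequential()
    net.add(nn.Dense(256, activation='relu'),
            Dropout(rate=0.5),        # custom layer defined above
            nn.Dense(10))
    net.initialize()
    print(net(X))  # output shape: (2, 10)

    Note that, unlike the built-in mxnet.gluon.nn.Dropout, this block applies dropout unconditionally; the built-in layer only drops units during training (i.e. inside autograd.record()), which is usually what you want at inference time.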
    

    For more details, see the full code at:
    https://github.com/LoniQin/AwsomeNeuralNetworks/blob/master/layers/Dropout.py
