Deep Learning Study Notes (4): Batch Norm

Author: Edwin_dl | Published 2017-10-24 00:43

    Batch Normalization (hereafter BN) was proposed in the Google Inception Net V2 paper. The method eases the headache caused by the tricky problem of how to properly initialize a neural network.

    Another blog post approaches the topic from the angles of why Batch Normalization is needed, how it is performed, and what it actually does; the two posts can be read together to understand Batch Normalization.

    1. Principle

    BN is a very effective regularization method: it can speed up the training of large convolutional networks many times over, while also substantially improving classification accuracy after convergence. When BN is applied to a layer of a neural network, it standardizes the data within each mini-batch, normalizing the layer's output toward an N(0, 1) normal distribution and thereby reducing the change in the distribution of internal activations (Internal Covariate Shift). The BN paper points out that when training a traditional deep neural network, the input distribution of every layer keeps shifting, which makes training difficult and forces us to use a very small learning rate. Applying BN to each layer effectively resolves this problem.
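The per-feature standardization plus the learnable scale and shift described above can be sketched in a few lines of NumPy (the batch shape, eps value, and gamma/beta initialization here are illustrative, not from the original post):

```python
import numpy as np

np.random.seed(0)
x = 3.0 + 4.0 * np.random.randn(64, 10)  # mini-batch: 64 samples, 10 features

# Standardize each feature over the batch (eps guards against division by zero)
eps = 1e-5
x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# Learnable scale (gamma) and shift (beta) restore representational power;
# initialized here to the identity transform
gamma, beta = np.ones(10), np.zeros(10)
y = gamma * x_hat + beta
```

After this transform, each feature of `y` has approximately zero mean and unit variance over the batch, regardless of the scale of the incoming data.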

    2. Practical Details

    At the implementation level, applying this trick usually means inserting a BN layer between the fully connected layer (or convolutional layer) and the activation function, processing the data so that it follows a standard Gaussian distribution. This works because normalization is a simple, differentiable operation.
    Fully connected layer (fc) / convolutional layer (conv) ---> Batch Normalization ---> activation function
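The ordering above can be sketched as a toy forward pass in NumPy (train-mode normalization only; the layer sizes and helper names such as `affine` are illustrative, not from the original post):

```python
import numpy as np

def affine(x, W, b):
    # Fully connected layer
    return x @ W + b

def batchnorm(a, gamma, beta, eps=1e-5):
    # Train-mode BN: standardize each feature over the mini-batch,
    # then apply the learnable scale (gamma) and shift (beta)
    a_hat = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)
    return gamma * a_hat + beta

def relu(a):
    return np.maximum(0.0, a)

np.random.seed(0)
x = np.random.randn(32, 8)                         # mini-batch of 32 samples
W, b = 0.1 * np.random.randn(8, 16), np.zeros(16)
gamma, beta = np.ones(16), np.zeros(16)

h = relu(batchnorm(affine(x, W, b), gamma, beta))  # fc -> BN -> activation
```

The key point is that BN sees the pre-activation values, so the input to the nonlinearity is normalized per feature before gamma and beta rescale it.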

    Using BN alone does not yield much gain by itself; a few accompanying adjustments are needed:

    • Increase the learning rate and speed up learning-rate decay, to suit the BN-normalized data;
    • Remove Dropout and lighten L2 regularization (BN itself already has a regularizing effect);
    • Shuffle the training samples more thoroughly, and reduce the photometric distortions applied during data augmentation (since BN trains faster, each sample is seen fewer times, so more realistic samples help training more).

    3. Formula Derivation

    Forward pass
    [figure: 前向传播过程.png — forward-pass equations, image not reproduced]
    Backward pass
    [figure: 反向传播过程.png — backward-pass equations, image not reproduced]
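For reference, these are the standard formulas from the BN paper over a mini-batch $\mathcal{B} = \{x_1, \dots, x_m\}$, which the code in the next section implements:

```latex
% Forward pass
\mu_{\mathcal{B}} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_{\mathcal{B}}^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_{\mathcal{B}})^2
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta

% Backward pass (chain rule through the normalization)
\frac{\partial L}{\partial \hat{x}_i} = \frac{\partial L}{\partial y_i} \cdot \gamma
\frac{\partial L}{\partial \sigma_{\mathcal{B}}^2}
  = -\frac{1}{2}\sum_{i=1}^{m} \frac{\partial L}{\partial \hat{x}_i}
    \, (x_i - \mu_{\mathcal{B}}) \, (\sigma_{\mathcal{B}}^2 + \epsilon)^{-3/2}
\frac{\partial L}{\partial \mu_{\mathcal{B}}}
  = -\frac{1}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}
    \sum_{i=1}^{m} \frac{\partial L}{\partial \hat{x}_i}
\frac{\partial L}{\partial x_i}
  = \frac{\partial L}{\partial \hat{x}_i} \frac{1}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}
  + \frac{\partial L}{\partial \sigma_{\mathcal{B}}^2} \frac{2 (x_i - \mu_{\mathcal{B}})}{m}
  + \frac{\partial L}{\partial \mu_{\mathcal{B}}} \frac{1}{m}
\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial L}{\partial y_i} \, \hat{x}_i, \qquad
\frac{\partial L}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial L}{\partial y_i}
```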

    4. Implementation

    Forward pass
    import numpy as np

    def batchnorm_forward(x, gamma, beta, bn_param):
      """
      Forward pass for batch normalization.
      
      During training the sample mean and (uncorrected) sample variance are
      computed from minibatch statistics and used to normalize the incoming data.
      During training we also keep an exponentially decaying running mean of the mean
      and variance of each feature, and these averages are used to normalize data
      at test-time.
    
      At each timestep we update the running averages for mean and variance using
      an exponential decay based on the momentum parameter:
    
      running_mean = momentum * running_mean + (1 - momentum) * sample_mean
      running_var = momentum * running_var + (1 - momentum) * sample_var
    
      Note that the batch normalization paper suggests a different test-time
      behavior: they compute sample mean and variance for each feature using a
      large number of training images rather than using a running average. For
      this implementation we have chosen to use running averages instead since
      they do not require an additional estimation step; the torch7 implementation
      of batch normalization also uses running averages.
    
      Input:
      - x: Data of shape (N, D)
      - gamma: Scale parameter of shape (D,)
      - beta: Shift parameter of shape (D,)
      - bn_param: Dictionary with the following keys:
        - mode: 'train' or 'test'; required
        - eps: Constant for numeric stability
        - momentum: Constant for running mean / variance.
        - running_mean: Array of shape (D,) giving running mean of features
        - running_var: Array of shape (D,) giving running variance of features
    
      Returns a tuple of:
      - out: of shape (N, D)
      - cache: A tuple of values needed in the backward pass
      """
     
    
     mode = bn_param['mode']
      eps = bn_param.get('eps', 1e-5)
      momentum = bn_param.get('momentum', 0.9)
    
      N, D = x.shape
      running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
      running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))
    
      out, cache = None, None
      if mode == 'train':
        # Compute output
        mu = x.mean(axis=0)
        xc = x - mu
        var = np.mean(xc ** 2, axis=0)
        std = np.sqrt(var + eps)
        xn = xc / std
        out = gamma * xn + beta
    
        cache = (mode, x, gamma, xc, std, xn, out)
    
        # Update running average of mean
        running_mean *= momentum
        running_mean += (1 - momentum) * mu
    
        # Update running average of variance
        running_var *= momentum
        running_var += (1 - momentum) * var
      elif mode == 'test':
        # Using running mean and variance to normalize
        std = np.sqrt(running_var + eps)
        xn = (x - running_mean) / std
        out = gamma * xn + beta
        cache = (mode, x, xn, gamma, beta, std)
      else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)
    
      # Store the updated running means back into bn_param
      bn_param['running_mean'] = running_mean
      bn_param['running_var'] = running_var
    
      return out, cache
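To sanity-check the behavior described in the docstring, the following condensed re-implementation of the same forward pass (same normalization and running-average updates, simplified caching) can be run on synthetic data; the data distribution, batch size, and step count are all illustrative:

```python
import numpy as np

# Condensed version of batchnorm_forward above: same normalization and
# running-average updates, no cache.
def bn_forward(x, gamma, beta, bn_param):
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)
    D = x.shape[1]
    running_mean = bn_param.get('running_mean', np.zeros(D))
    running_var = bn_param.get('running_var', np.zeros(D))
    if mode == 'train':
        mu, var = x.mean(axis=0), x.var(axis=0)
        xn = (x - mu) / np.sqrt(var + eps)
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        xn = (x - running_mean) / np.sqrt(running_var + eps)
    bn_param['running_mean'], bn_param['running_var'] = running_mean, running_var
    return gamma * xn + beta

np.random.seed(1)
bn_param = {'mode': 'train'}
gamma, beta = np.ones(3), np.zeros(3)
for _ in range(200):                            # enough steps for the running
    x = 5.0 + 2.0 * np.random.randn(50, 3)      # averages to converge
    out = bn_forward(x, gamma, beta, bn_param)

# Train mode: each output feature of the last batch is ~N(0, 1).
# Test mode: the converged running statistics are used instead of batch stats.
bn_param['mode'] = 'test'
out_test = bn_forward(5.0 + 2.0 * np.random.randn(50, 3), gamma, beta, bn_param)
```

After training, `running_mean` and `running_var` approach the true feature mean (5) and variance (4), so test-mode outputs are approximately standardized even though no batch statistics are computed at test time.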
    
    Backward pass
    def batchnorm_backward(dout, cache):
      """
      Backward pass for batch normalization.
      
      For this implementation, you should write out a computation graph for
      batch normalization on paper and propagate gradients backward through
      intermediate nodes.
      
      Inputs:
      - dout: Upstream derivatives, of shape (N, D)
      - cache: Variable of intermediates from batchnorm_forward.
      
      Returns a tuple of:
      - dx: Gradient with respect to inputs x, of shape (N, D)
      - dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
      - dbeta: Gradient with respect to shift parameter beta, of shape (D,)
      """
     
    
     mode = cache[0]
      if mode == 'train':
        mode, x, gamma, xc, std, xn, out = cache
    
        N = x.shape[0]
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(xn * dout, axis=0)
        dxn = gamma * dout
        dxc = dxn / std
        dstd = -np.sum((dxn * xc) / (std * std), axis=0)
        dvar = 0.5 * dstd / std
        dxc += (2.0 / N) * xc * dvar
        dmu = np.sum(dxc, axis=0)
        dx = dxc - dmu / N
      elif mode == 'test':
        mode, x, xn, gamma, beta, std = cache
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(xn * dout, axis=0)
        dxn = gamma * dout
        dx = dxn / std
      else:
        raise ValueError(mode)
    
      return dx, dgamma, dbeta
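A standard way to verify a backward pass like the one above is a numerical gradient check with central differences. The sketch below condenses the forward/backward math into self-contained helpers and compares analytic against numeric gradients; the shapes, seed, and tolerance are illustrative choices:

```python
import numpy as np

# Train-mode forward pass, condensed from batchnorm_forward above
def bn_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)
    xc = x - mu
    var = np.mean(xc ** 2, axis=0)
    std = np.sqrt(var + eps)
    xn = xc / std
    return gamma * xn + beta, (gamma, xc, std, xn)

# Train-mode backward pass, condensed from batchnorm_backward above
def bn_backward(dout, cache):
    gamma, xc, std, xn = cache
    N = xc.shape[0]
    dbeta = dout.sum(axis=0)
    dgamma = np.sum(xn * dout, axis=0)
    dxn = gamma * dout
    dxc = dxn / std
    dstd = -np.sum(dxn * xc / (std * std), axis=0)
    dvar = 0.5 * dstd / std
    dxc += (2.0 / N) * xc * dvar
    dmu = np.sum(dxc, axis=0)
    dx = dxc - dmu / N
    return dx, dgamma, dbeta

def num_grad(f, w, dout, h=1e-5):
    # Central differences: gradient of sum(f(w) * dout) w.r.t. each entry of w
    grad = np.zeros_like(w)
    it = np.nditer(w, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        old = w[i]
        w[i] = old + h; pos = f(w)
        w[i] = old - h; neg = f(w)
        w[i] = old
        grad[i] = np.sum((pos - neg) * dout) / (2.0 * h)
        it.iternext()
    return grad

def rel_err(a, b):
    return np.max(np.abs(a - b) / np.maximum(1e-8, np.abs(a) + np.abs(b)))

np.random.seed(0)
x = np.random.randn(4, 5)
gamma, beta = np.random.randn(5), np.random.randn(5)
dout = np.random.randn(4, 5)

out, cache = bn_forward(x, gamma, beta)
dx, dgamma, dbeta = bn_backward(dout, cache)
err_dx = rel_err(dx, num_grad(lambda w: bn_forward(w, gamma, beta)[0], x, dout))
err_dgamma = rel_err(dgamma, num_grad(lambda w: bn_forward(x, w, beta)[0], gamma, dout))
err_dbeta = rel_err(dbeta, num_grad(lambda w: bn_forward(x, gamma, w)[0], beta, dout))
```

All three relative errors should be tiny (well below 1e-4); a large `err_dx` usually points to a missing mean/variance term in the backward pass.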
    

    In practice, networks that use batch normalization are considerably more robust to poor initial values. In summary, batch normalization can be understood as performing preprocessing before every layer of the network, except that this operation is integrated into the network itself in a differentiable way.
