5 Optimizer - Dissecting pytorch

Author: readilen | Published 2018-10-23 00:09

    The optimizer is a very important part of machine learning, yet across many machine learning and deep learning applications the optimizer we see used most often is Adam. Why is that? How many optimizers does pytorch provide, and when should we reach for one of the others? This post goes through them in detail.
    The torch.optim package contains the following optimizers:

    torch.optim.adam.Adam
    torch.optim.adadelta.Adadelta
    torch.optim.adagrad.Adagrad
    torch.optim.sparse_adam.SparseAdam
    torch.optim.adamax.Adamax
    torch.optim.asgd.ASGD
    torch.optim.sgd.SGD
    torch.optim.rprop.Rprop
    torch.optim.rmsprop.RMSprop
    torch.optim.optimizer.Optimizer
    torch.optim.lbfgs.LBFGS
    torch.optim.lr_scheduler.ReduceLROnPlateau
    

    All of these optimizers derive from Optimizer, the base class of every optimizer. Let's look at that base class first:

    class Optimizer(object):
        def __init__(self, params, defaults):
            self.defaults = defaults
            self.state = defaultdict(dict)
            self.param_groups = []

            # `params` may be an iterable of Tensors or an iterable of dicts;
            # a plain iterable is wrapped into a single group
            param_groups = list(params)
            if not isinstance(param_groups[0], dict):
                param_groups = [{'params': param_groups}]

            for param_group in param_groups:
                self.add_param_group(param_group)
    
    • params is the network's parameters, an iterable object such as net.parameters().
    • The second argument, defaults, is a dict holding default values such as the learning rate.
      The constructor's main job is to wrap params into parameter groups and register each one in param_groups (see the sketch below).
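
    As a quick illustration of param_groups, here is a minimal sketch (the toy two-layer model and the learning rates are just placeholders, not from the post): params can be a plain iterable or a list of dicts, and each dict becomes one entry of param_groups with its own options, falling back to defaults for anything it does not set.

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

        # Two parameter groups: the first overrides lr, the second inherits the default.
        optimizer = torch.optim.SGD(
            [
                {'params': model[0].parameters(), 'lr': 1e-3},
                {'params': model[2].parameters()},
            ],
            lr=1e-2, momentum=0.9,
        )

        print(len(optimizer.param_groups))      # 2
        print(optimizer.param_groups[0]['lr'])  # 0.001
        print(optimizer.param_groups[1]['lr'])  # 0.01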

    zero_grad

        def zero_grad(self):
            r"""Clears the gradients of all optimized :class:`torch.Tensor` s."""
            for group in self.param_groups:
                for p in group['params']:
                    if p.grad is not None:
                        p.grad.detach_()
                        p.grad.zero_()
    

    It walks over param_groups and, for every parameter that already has a gradient, detaches the gradient from the graph and zeroes it in place.
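
    Here is a minimal training-loop sketch showing where zero_grad fits (the model, loss, and fake batch are placeholders). Gradients accumulate across backward() calls, so they have to be cleared before every step:

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)                          # placeholder model
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.MSELoss()
        x, y = torch.randn(32, 10), torch.randn(32, 1)    # fake batch

        for step in range(100):
            optimizer.zero_grad()         # clear the old gradients (they accumulate otherwise)
            loss = loss_fn(model(x), y)
            loss.backward()               # fill p.grad for every parameter
            optimizer.step()              # apply the chosen optimizer's update rule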

    state_dict

        def state_dict(self):
           ......
            param_groups = [pack_group(g) for g in self.param_groups]
            # Remap state to use ids as keys
            packed_state = {(id(k) if isinstance(k, torch.Tensor) else k): v
                            for k, v in self.state.items()}
            return {
                'state': packed_state,
                'param_groups': param_groups,
            }
    

    state is the optimizer's current state and param_groups are the parameter groups; state_dict() packs both into a plain dict (tensors remapped to their ids) so the whole thing can be serialized.

        def load_state_dict(self, state_dict):
            ......
            state = defaultdict(dict)
            for k, v in state_dict['state'].items():
                if k in id_map:
                    param = id_map[k]
                    state[param] = cast(param, v)
                else:
                    state[k] = v
    
            # Update parameter groups, setting their 'params' value
            param_groups = [
                update_group(g, ng) for g, ng in zip(groups, saved_groups)]
            self.__setstate__({'state': state, 'param_groups': param_groups})
    

    It unpacks the saved format and restores state and param_groups.
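
    In practice these two methods are what makes checkpointing work. A rough sketch (the file name and the toy model are placeholders):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        # save both the model weights and the optimizer state (e.g. Adam's moment buffers)
        torch.save({'model': model.state_dict(),
                    'optim': optimizer.state_dict()}, 'checkpoint.pt')

        # later: resume training from the checkpoint
        ckpt = torch.load('checkpoint.pt')
        model.load_state_dict(ckpt['model'])
        optimizer.load_state_dict(ckpt['optim'])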

    Adam

    The name Adam comes from adaptive moment estimation. In probability, if a random variable X follows some distribution, its first moment is E(X), the mean, and its second moment is E(X^2), the mean of the squared values. Adam uses estimates of the first and second moments of the gradient of each parameter to adapt that parameter's learning rate individually. It is still a gradient-descent method, but every update is confined to a bounded step size, so a very large gradient does not produce a very large step and the parameter values stay comparatively stable.

                    exp_avg.mul_(beta1).add_(1 - beta1, grad)
                    exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                    if amsgrad:
                        torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                        denom = max_exp_avg_sq.sqrt().add_(group['eps'])
                    else:
                        denom = exp_avg_sq.sqrt().add_(group['eps'])
                    bias_correction1 = 1 - beta1 ** state['step']
                    bias_correction2 = 1 - beta2 ** state['step']
                    step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1 # bias-corrected, dynamically adjusted step size
                    p.data.addcdiv_(-step_size, exp_avg, denom) # parameter update: p -= step_size * exp_avg / denom
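
    The hyperparameters in that excerpt map directly onto the constructor arguments. A usage sketch (the values shown are the usual defaults, the toy model is a placeholder):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)         # placeholder
        optimizer = torch.optim.Adam(
            model.parameters(),
            lr=1e-3,                     # group['lr']
            betas=(0.9, 0.999),          # beta1, beta2 in the excerpt
            eps=1e-8,                    # group['eps']
            amsgrad=False,               # True enables the max_exp_avg_sq branch
        )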
    

    Adagrad

                        state['sum'].addcmul_(1, grad, grad)
                        std = state['sum'].sqrt().add_(1e-10)
                        p.data.addcdiv_(-clr, grad, std)
    

    The step size really is adaptive: the squared gradients are accumulated with addcmul_, the square root is taken, and a 1e-10 smoothing term guards against division by zero. The code matches the textbook formula.
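
    The same update written out by hand, as a rough sketch on toy tensors (not the library code, which also supports lr_decay and weight_decay):

        import torch

        lr = 0.01
        p = torch.randn(5)             # a parameter
        state_sum = torch.zeros(5)     # accumulated squared gradients

        for _ in range(3):
            grad = torch.randn(5)                          # pretend gradient
            state_sum += grad * grad                       # only ever grows
            p -= lr * grad / (state_sum.sqrt() + 1e-10)    # per-coordinate adaptive step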

    Adadelta

    Adagrad's accumulated sum of squared gradients only ever grows, so its effective learning rate keeps shrinking. Adadelta restricts that history to a decaying window, an exponential moving average, so the step size keeps adapting throughout training. Can you read that from the code below?

                    square_avg.mul_(rho).addcmul_(1 - rho, grad, grad)
                    std = square_avg.add(eps).sqrt_()
                    delta = acc_delta.add(eps).sqrt_().div_(std).mul_(grad)
                    p.data.add_(-group['lr'], delta)
                    acc_delta.mul_(rho).addcmul_(1 - rho, delta, delta)
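
    A usage sketch (the toy model is a placeholder, the values are PyTorch's defaults). Because Adadelta derives its own effective step size from the two running averages, its default lr is 1.0 and only scales the computed delta:

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)      # placeholder
        optimizer = torch.optim.Adadelta(
            model.parameters(),
            lr=1.0,       # scales the computed delta; 1.0 is the default
            rho=0.9,      # decay rate of both running averages (rho in the excerpt)
            eps=1e-6,     # the eps added before the square roots
        )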
    

    SparseAdam

    A lazy variant of Adam intended for sparse tensors. In this variant, the moment estimates are updated only for the entries that appear in the gradient, and only those entries of the parameter are updated.

                    exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                    beta1, beta2 = group['betas']
    
                    # Decay the first and second moment running average coefficient
                    #      old <- b * old + (1 - b) * new
                    # <==> old += (1 - b) * (new - old)
                    old_exp_avg_values = exp_avg._sparse_mask(grad)._values()
                    exp_avg_update_values = grad_values.sub(old_exp_avg_values).mul_(1 - beta1)
                    exp_avg.add_(make_sparse(exp_avg_update_values))
                    old_exp_avg_sq_values = exp_avg_sq._sparse_mask(grad)._values()
                    exp_avg_sq_update_values = grad_values.pow(2).sub_(old_exp_avg_sq_values).mul_(1 - beta2)
                    exp_avg_sq.add_(make_sparse(exp_avg_sq_update_values))
    
                    # Dense addition again is intended, avoiding another _sparse_mask
                    numer = exp_avg_update_values.add_(old_exp_avg_values)
                    exp_avg_sq_update_values.add_(old_exp_avg_sq_values)
                    denom = exp_avg_sq_update_values.sqrt_().add_(group['eps'])
                    del exp_avg_update_values, exp_avg_sq_update_values
    
                    bias_correction1 = 1 - beta1 ** state['step']
                    bias_correction2 = 1 - beta2 ** state['step']
                    step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
    
                    p.data.add_(make_sparse(-step_size * numer.div_(denom)))
    

    The formula looks involved, but the structure is plain Adam restricted to the non-zero entries: update the sparse first and second moments, then apply the bias-corrected step only to the affected rows.
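
    A usage sketch: SparseAdam is meant for layers that actually produce sparse gradients, typically nn.Embedding with sparse=True (the sizes and the toy loss below are arbitrary placeholders):

        import torch
        import torch.nn as nn

        embedding = nn.Embedding(10000, 64, sparse=True)    # produces sparse gradients
        optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

        ids = torch.randint(0, 10000, (32,))
        optimizer.zero_grad()
        loss = embedding(ids).sum()
        loss.backward()        # only the rows indexed by `ids` appear in the gradient
        optimizer.step()       # and only those rows of the embedding get updated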

    Adamax

                    torch.max(norm_buf, 0, keepdim=False, out=(exp_inf, exp_inf.new().long()))
    
                    bias_correction = 1 - beta1 ** state['step']
                    clr = group['lr'] / bias_correction
    
                    p.data.addcdiv_(-clr, exp_avg, exp_inf)
    

    Seeing torch.max makes the name clear: Adamax replaces the second-moment average with an exponentially weighted infinity norm, which bounds the denominator of the update and hence the step size.
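
    The idea in isolation, as a rough sketch on toy tensors (not the library code): instead of an average of squared gradients, Adamax keeps an exponentially weighted infinity norm, so the denominator can only shrink by a factor of beta2 per step:

        import torch

        beta2, eps = 0.999, 1e-8
        exp_inf = torch.zeros(5)       # running infinity norm u_t

        for _ in range(3):
            grad = torch.randn(5)
            # u_t = max(beta2 * u_{t-1}, |g_t| + eps)
            exp_inf = torch.max(beta2 * exp_inf, grad.abs() + eps)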

    ASGD

                    state['step'] += 1
    
                    if group['weight_decay'] != 0:
                        grad = grad.add(group['weight_decay'], p.data)
    
                    # decay term
                    p.data.mul_(1 - group['lambd'] * state['eta'])
    
                    # update parameter
                    p.data.add_(-state['eta'], grad)
    
                    # averaging
                    if state['mu'] != 1:
                        state['ax'].add_(p.data.sub(state['ax']).mul(state['mu']))
                    else:
                        state['ax'].copy_(p.data)
    
                    # update eta and mu
                    state['eta'] = (group['lr'] /
                                    math.pow((1 + group['lambd'] * group['lr'] * state['step']), group['alpha']))
                    state['mu'] = 1 / max(1, state['step'] - group['t0'])
    

    Look closely: the averaging lives in the ax buffer, whose schedule is set by eta and mu. mu stays at 1 (ax simply copies p) until the step count passes t0, after which ax becomes a running average of the iterates.
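
    The averaging itself is a single recurrence, ax <- ax + mu * (p - ax), with mu = 1 / max(1, step - t0). A rough plain-Python sketch (the numbers are placeholders): while mu == 1 the buffer ax just tracks p, and once the step count passes t0 it turns into a running average of the iterates.

        t0 = 5
        p, ax = 0.0, 0.0
        for step in range(1, 11):
            p -= 0.1                         # pretend parameter update
            mu = 1.0 / max(1, step - t0)     # 1 until step t0 is passed, then it shrinks
            ax = ax + mu * (p - ax)          # running (Polyak-style) average of p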

    SGD

                    d_p = p.grad.data # the gradient
                    ...
                    p.data.add_(-group['lr'], d_p) # update: just lr times the gradient
    

    Subtract the learning rate times the gradient. It really is that simple.
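
    The excerpt is plain SGD; in practice it is almost always used with momentum and weight decay, which add a velocity buffer and an L2 term to d_p. A usage sketch (the toy model and the values are placeholders):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)     # placeholder
        optimizer = torch.optim.SGD(
            model.parameters(),
            lr=0.1,
            momentum=0.9,            # keeps a running velocity buffer per parameter
            weight_decay=5e-4,       # adds weight_decay * p to the gradient (L2 penalty)
        )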

    Rprop

                    # update stepsizes with step size updates
                    step_size.mul_(sign).clamp_(step_size_min, step_size_max)
    
                    # for dir<0, dfdx=0
                    # for dir>=0 dfdx=dfdx
                    grad = grad.clone()
                    grad[sign.eq(etaminus)] = 0
    
                    # update parameters
                    p.data.addcmul_(-1, grad.sign(), step_size)
    

    Rprop keeps a per-parameter step size clamped to [step_size_min, step_size_max] and adapts it using only the sign of the gradient, not its magnitude.
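
    The sign logic in isolation, as a rough one-dimensional sketch (the gradient sequence and constants are placeholders): the step size grows by etaplus while the gradient keeps its sign and shrinks by etaminus when the sign flips, and the flipped gradient is skipped, just like grad[sign.eq(etaminus)] = 0 above.

        etaminus, etaplus = 0.5, 1.2
        step_min, step_max = 1e-6, 50.0

        step_size, prev_grad, p = 0.01, 0.0, 1.0
        for grad in [0.3, 0.2, -0.1, -0.4, 0.2]:         # pretend gradient sequence
            if grad * prev_grad > 0:                     # same sign: speed up
                step_size = min(step_size * etaplus, step_max)
            elif grad * prev_grad < 0:                   # sign flip: slow down, skip this gradient
                step_size = max(step_size * etaminus, step_min)
                grad = 0.0
            sign = (grad > 0) - (grad < 0)               # -1, 0 or 1
            p -= step_size * sign
            prev_grad = grad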

    RMSprop

                    square_avg = state['square_avg']
                    alpha = group['alpha']
    
                    state['step'] += 1
    
                    if group['weight_decay'] != 0:
                        grad = grad.add(group['weight_decay'], p.data)
    
                    square_avg.mul_(alpha).addcmul_(1 - alpha, grad, grad)
    
                    if group['centered']:
                        grad_avg = state['grad_avg']
                        grad_avg.mul_(alpha).add_(1 - alpha, grad)
                        avg = square_avg.addcmul(-1, grad_avg, grad_avg).sqrt().add_(group['eps'])
                    else:
                        avg = square_avg.sqrt().add_(group['eps'])
    
                    if group['momentum'] > 0:
                        buf = state['momentum_buffer']
                        buf.mul_(group['momentum']).addcdiv_(grad, avg)
                        p.data.add_(-group['lr'], buf)
                    else:
                        p.data.addcdiv_(-group['lr'], grad, avg)
    

    An exponential moving average of the squared gradients sets the scale of each update; the centered variant also subtracts a running mean of the gradient, and an optional momentum buffer smooths the steps.
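
    A usage sketch showing how the constructor flags map onto the branches above (the toy model is a placeholder, the values are the common defaults plus momentum):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)     # placeholder
        optimizer = torch.optim.RMSprop(
            model.parameters(),
            lr=1e-2,
            alpha=0.99,              # decay rate of square_avg
            eps=1e-8,
            momentum=0.9,            # > 0 enables the momentum_buffer branch
            centered=False,          # True enables the grad_avg (centered) branch
        )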

    LBFGS

    L-BFGS approximates the inverse Hessian with a short history of vectors instead of building the full matrix, and then takes Newton-like steps. The code is fairly dense; if you are interested, see
    https://www.cnblogs.com/ljy2013/p/5129294.html
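
    Unlike the other optimizers, LBFGS may re-evaluate the objective several times per update, so step() takes a closure. A minimal sketch (the model, data, and loss are placeholders):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)
        x, y = torch.randn(64, 10), torch.randn(64, 1)
        loss_fn = nn.MSELoss()
        optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0, max_iter=20)

        def closure():
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            return loss

        for _ in range(10):
            optimizer.step(closure)    # LBFGS may call closure() more than once here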

    ReduceLROnPlateau

    Strictly speaking this is a learning-rate scheduler rather than an optimizer: it watches a metric and, when the metric stops improving for a number of epochs, multiplies the learning rate of every parameter group by a factor.
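
    A usage sketch (the optimizer and the fake validation loss are placeholders): wrap an existing optimizer and call scheduler.step(metric) once per epoch; when the metric has not improved for `patience` epochs, every group's lr is multiplied by `factor`.

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode='min', factor=0.1, patience=10)

        for epoch in range(100):
            val_loss = torch.rand(1).item()    # stand-in for a real validation loss
            scheduler.step(val_loss)           # lowers the lr when val_loss plateaus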
