TensorFlow Computes a Gradient of NoneType: ValueError


By nowherespyfly | Published 2019-12-03 21:08

    I've been writing TensorFlow code lately. A few days ago, while working on a bidirectional LSTM, a NoneType-gradient problem appeared out of nowhere; I was pressed for time, so I simply dropped the bidirectional LSTM. Today, while writing a Transformer, it showed up again, this time wearing a different disguise, and it nearly made me give up on the Transformer too. After half a day of digging it turned out to be the very same problem. How silly.

    When running TensorFlow code you will occasionally get a gradient of None. It doesn't stop execution and doesn't even produce a warning, so the NoneType gradient arrives quietly and leaves quietly; while training the model you may never know it was there.

    So, you may ask, how did I discover that a gradient was NoneType?

    Through this piece of code:

    grads_and_vars = optimizer.compute_gradients(self.cost, var_list=tvars)
    var_lr_mult = {}
    for var in tvars:
      if var.op.name.find(r'biases') > 0:
        var_lr_mult[var] = 2.0
      elif var.name.startswith('bert'):
        var_lr_mult[var] = 0.1
      else:
        var_lr_mult[var] = 1.0
    
    grads_and_vars = [((g if var_lr_mult[v] == 1 else tf.multiply(var_lr_mult[v], g)), v)
                      for g, v in grads_and_vars]
    

    I wanted to fine-tune the Transformer, so I set its learning rate to 0.1 times that of the rest of the network. It promptly blew up with the error below.

    Traceback (most recent call last):
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 528, in _apply_op_helper
        preferred_dtype=default_dtype)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1297, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 286, in _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
        allow_broadcast=True)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 265, in _constant_impl
        allow_broadcast=allow_broadcast))
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_util.py", line 437, in make_tensor_proto
        raise ValueError("None values not supported.")
    ValueError: None values not supported.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 542, in _apply_op_helper
        values, as_ref=input_arg.is_ref).dtype.name
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1297, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 286, in _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
        allow_broadcast=True)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 265, in _constant_impl
        allow_broadcast=allow_broadcast))
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_util.py", line 437, in make_tensor_proto
        raise ValueError("None values not supported.")
    ValueError: None values not supported.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "trainval_trans.py", line 378, in <module>
        pre_emb=args.emb)
      File "trainval_trans.py", line 44, in train
        batch_size=bs, conv5=conv5)
      File "/mnt/data1/htr/projects/CVPR2020/codes/hsf_refseg_py36/get_model.py", line 7, in get_segmentation_model
        model = eval(name).LSTM_model(**kwargs)
      File "/mnt/data1/htr/projects/CVPR2020/codes/hsf_refseg_py36/bert_models/Mutan_RAGR_transformer_p5.py", line 93, in __init__
        self.train_op()
      File "/mnt/data1/htr/projects/CVPR2020/codes/hsf_refseg_py36/bert_models/Mutan_RAGR_transformer_p5.py", line 350, in train_op
        grads_and_vars]
      File "/mnt/data1/htr/projects/CVPR2020/codes/hsf_refseg_py36/bert_models/Mutan_RAGR_transformer_p5.py", line 349, in <listcomp>
        grads_and_vars = [((g if var_lr_mult[v] == 1 else tf.multiply(var_lr_mult[v], g)), v) for g, v in
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
        return target(*args, **kwargs)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py", line 331, in multiply
        return gen_math_ops.mul(x, y, name)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6701, in mul
        "Mul", x=x, y=y, name=name)
      File "/home/htr/anaconda3/envs/tf36/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 546, in _apply_op_helper
        (input_name, err))
    ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported.
    

    The problem is in the learning-rate multiplication. The traceback says that the second operand of the multiply, i.e. that variable's gradient, is NoneType. So I changed the Transformer's multiplier to 1 and, magically, it ran. Then I tried other multipliers such as 2, 10, and so on; none of them worked.

    So I tried a workaround: set the Transformer's multiplier to 1, everything else to 10, and cut the base learning rate by a factor of 10. It errored again. In all my years I had never seen such a temperamental network: it refused to move unless the multiplier was exactly 1. For a moment I questioned my whole life.
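What the author does next boils down to scanning grads_and_vars for None entries. A minimal stand-in sketch (plain numbers in place of gradient tensors, and made-up variable names except for the one the post identifies):

```python
# Sketch: find every variable whose gradient came back as None.
# Floats stand in for gradient tensors; names stand in for variables.
def find_none_grads(grads_and_vars):
    return [name for grad, name in grads_and_vars if grad is None]

pairs = [
    (0.3, "bert/encoder/layer_0/attention/self/query/kernel"),
    (None, "bert/pooler/dense/bias"),  # the culprit in this story
]
print(find_none_grads(pairs))
```

With real TF1 code, the same scan can be run over the output of optimizer.compute_gradients before any multiplier is applied.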

    Thoroughly annoyed, I printed out the variable names together with their gradients and found the troublemaker: a variable named bert/pooler/dense/bias. This is the bias of the final fully connected layer in the Transformer, the layer that transforms the embedding of the [CLS] token, i.e. the bias variable of the tf.layers.dense layer below.

    with tf.variable_scope("pooler"):
      # We "pool" the model by simply taking the hidden state corresponding
      # to the first token. We assume that this has been pre-trained
      first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1)
      self.pooled_output = tf.layers.dense(
        first_token_tensor,
        config.hidden_size,
        activation=tf.tanh,
        kernel_initializer=create_initializer(config.initializer_range))
    

    This reminded me of the earlier Bi-LSTM episode, where I tried to add a fully connected layer on top of the sentence feature, failed repeatedly, and in the end resorted to a hand-rolled fully connected layer with no bias. Could it be that applying a fully connected layer to a [b, c] shaped tensor is doomed to give the bias a None gradient? That would be absurd.

    After more fiddling I noticed that the error goes away whenever the Transformer's multiplier equals the value tested in the condition guarding the multiply. Which makes sense: the traceback points at the last line, the tf.multiply call. In other words, even if a gradient comes back as NoneType, graph construction happily carries on; only when tf.multiply forces every argument to be converted to a tensor does everything finally blow up.
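The behavior can be mimicked with a tiny stand-in (plain Python, no TensorFlow): a multiplier of exactly 1 takes the branch that skips tf.multiply, so a None gradient slips straight through, while any other multiplier forces a tensor conversion that fails.

```python
def scale_gradient(grad, mult):
    """Mimic the branch in the list comprehension above
    (floats stand in for gradient tensors)."""
    if mult == 1:
        return grad  # None passes through silently, as in the author's run
    if grad is None:
        # tf.multiply would try to convert None to a tensor and raise this
        raise ValueError("None values not supported.")
    return mult * grad

print(scale_gradient(None, 1))    # None, and no complaint
print(scale_gradient(4.0, 0.1))   # 0.4
```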

    In the end, my solution was to set the multiplier to 0.1 for the Transformer's embedding and encoder layers and to 1 for the pooler layer. It's just one bias; even if it trains badly the impact is small. Whatever.
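A more general fix (a sketch, not what the original code does) is to guard the multiply on g is not None, so gradientless variables pass through untouched; TF1's apply_gradients already ignores (None, var) pairs. Plain numbers stand in for tensors and strings for variables here:

```python
# Sketch of the same list comprehension with a None guard.
# `var_lr_mult[v] * g` stands in for tf.multiply(var_lr_mult[v], g).
var_lr_mult = {"conv/weights": 0.1, "bert/pooler/dense/bias": 0.1}
grads_and_vars = [(2.0, "conv/weights"), (None, "bert/pooler/dense/bias")]

guarded = [
    (g if (g is None or var_lr_mult[v] == 1) else var_lr_mult[v] * g, v)
    for g, v in grads_and_vars
]
print(guarded)  # the None pair is passed through unchanged
```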

    With that, the problem seemed solved. Still, I felt a little uneasy: what if a bias with no gradient hurts training? What if a bias with no gradient hurts training? What if... oh, never mind. I never use pooled_output anyway; something that is never used gets no gradient propagated back to it, so even if it trains badly nothing is affected.

    And then it hit me: that is exactly the point. Because I never use pooled_output, this layer's output is not on the computation path of the loss gradient at all, so there is no gradient, and the gradient is NoneType. The earlier case was the same: I never used the sentence feature anywhere, so its gradient was NoneType too.
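This is easy to reproduce in isolation. A sketch, written against tf.compat.v1 so it also runs under TF 2.x (the post itself uses TF 1.x graph code): a variable that never touches the loss gets a None gradient, with no warning.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.Variable(1.0, name="used")
y = tf.Variable(1.0, name="unused")  # never appears in the loss graph
loss = x * x

grads = tf.gradients(loss, [x, y])
print(grads[1])  # None: `unused` is not on the loss's computation path
```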

    Ah, mystery solved. I'm a genius after all.

Original post: https://www.haomeiwen.com/subject/ilehgctx.html