TensorFlow Tutorial (13): tf.Variable() and tf.get_variable()

Author: 致Great | Published 2018-06-20 20:24

    1 Introduction

    tf.Variable()

    tf.Variable(initial_value=None, trainable=True, collections=None, validate_shape=True, 
    caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, 
    import_scope=None)
    

    tf.get_variable()

    tf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, 
    trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, 
    custom_getter=None)
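
    The key difference is already visible in the two signatures: tf.Variable() takes a concrete initial value, whereas tf.get_variable() takes a name together with a shape and an initializer. A minimal TF 1.x sketch of the two calling conventions (the variable names v1 and v2 are arbitrary, chosen only for illustration):

    import tensorflow as tf

    # tf.Variable: pass the initial value directly
    v1 = tf.Variable(tf.zeros([2, 3]), name='v1')
    # tf.get_variable: pass a name plus a shape and an initializer
    v2 = tf.get_variable('v2', shape=[2, 3], initializer=tf.zeros_initializer())

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(v1.name, v2.name)  # v1:0 v2:0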
    

    2 Differences

    1. When tf.Variable() detects a naming conflict, it resolves the conflict itself by uniquifying the name. tf.get_variable() does not resolve the conflict and raises an error instead.

    import tensorflow as tf

    w_1 = tf.Variable(3, name="w_1")
    w_2 = tf.Variable(1, name="w_1")
    print(w_1.name)
    print(w_2.name)
    # Output:
    # w_1:0
    # w_1_1:0
    
    import tensorflow as tf
    
    w_1 = tf.get_variable(name="w_1",initializer=1)
    w_2 = tf.get_variable(name="w_1",initializer=2)
    # Error message:
    # ValueError: Variable w_1 already exists, disallowed. Did
    # you mean to set reuse=True in VarScope?
    

    2. Because of these behaviours, tf.get_variable() must be used when variables need to be shared. In every other situation the two are used in the same way.

    import tensorflow as tf
    
    with tf.variable_scope("scope1"):
        w1 = tf.get_variable("w1", shape=[])
        w2 = tf.Variable(0.0, name="w2")
    with tf.variable_scope("scope1", reuse=True):
        w1_p = tf.get_variable("w1", shape=[])
        w2_p = tf.Variable(1.0, name="w2")
    
    print(w1 is w1_p, w2 is w2_p)
    # Output:
    # True False
    

    Since tf.Variable() creates a new object on every call, reuse=True has no effect on it. tf.get_variable(), on the other hand, returns the existing variable object if one has already been created under that name, and creates a new one otherwise.
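
    The same sharing behaviour can also be obtained without opening the scope a second time, by calling scope.reuse_variables() part-way through. A minimal TF 1.x sketch (the scope name "scope2" is an illustrative choice, not part of the original code):

    import tensorflow as tf

    with tf.variable_scope("scope2") as scope:
        v = tf.get_variable("v", shape=[], initializer=tf.zeros_initializer())
        scope.reuse_variables()          # from here on, get_variable reuses existing variables
        v_reused = tf.get_variable("v")  # returns the object created above

    print(v is v_reused)
    # Output:
    # True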

    The content above is taken from: tensorflow学习笔记(二十三):variable与get_variable

    3 Example

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    
    import tensorflow as tf
    
    x1 = tf.truncated_normal([200, 100], name='x1')
    x2 = tf.truncated_normal([200, 100], name='x2')
    
    def two_hidden_layers_1(x):
        assert x.shape.as_list() == [200, 100]
        w1 = tf.Variable(tf.random_normal([100, 50]), name='h1_weights')
        b1 = tf.Variable(tf.zeros([50]), name='h1_biases')
        h1 = tf.matmul(x, w1) + b1
        assert h1.shape.as_list() == [200, 50]
        w2 = tf.Variable(tf.random_normal([50, 10]), name='h2_weights')
        b2 = tf.Variable(tf.zeros([10]), name='h2_biases')
        logits = tf.matmul(h1, w2) + b2
        return logits
    
    def two_hidden_layers_2(x):
        assert x.shape.as_list() == [200, 100]
        w1 = tf.get_variable('h1_weights', [100, 50], initializer=tf.random_normal_initializer())
        b1 = tf.get_variable('h1_biases', [50], initializer=tf.constant_initializer(0.0))
        h1 = tf.matmul(x, w1) + b1
        assert h1.shape.as_list() == [200, 50]
        w2 = tf.get_variable('h2_weights', [50, 10], initializer=tf.random_normal_initializer())
        b2 = tf.get_variable('h2_biases', [10], initializer=tf.constant_initializer(0.0))
        logits = tf.matmul(h1, w2) + b2
        return logits
    
    
    def fully_connected(x, output_dim, scope):
        with tf.variable_scope(scope, reuse=tf.AUTO_REUSE) as scope:
            w = tf.get_variable('weights', [x.shape[1], output_dim], initializer=tf.random_normal_initializer())
            b = tf.get_variable('biases', [output_dim], initializer=tf.constant_initializer(0.0))
            return tf.matmul(x, w) + b
    
    def two_hidden_layers_3(x):
        h1 = fully_connected(x, 50, 'h1')
        h2 = fully_connected(h1, 10, 'h2')
        return h2
    # with tf.variable_scope('two_layers') as scope:
    #     logits1 = two_hidden_layers_1(x1) 
    #     # scope.reuse_variables()
    #     logits2 = two_hidden_layers_1(x2)
    # Does not raise an error: tf.Variable simply creates a second, independent set of variables
    # ---------------
    
    # with tf.variable_scope('two_layers') as scope:
    #     logits1 = two_hidden_layers_2(x1)
    #     # scope.reuse_variables()
    #     logits2 = two_hidden_layers_2(x2)
    # Raises an error: the get_variable names already exist in the scope and reuse was not set
    # ---------------
    
    with tf.variable_scope('two_layers') as scope:
        logits1 = two_hidden_layers_3(x1)
        # scope.reuse_variables()
        logits2 = two_hidden_layers_3(x2)
    # Does not raise an error: reuse=tf.AUTO_REUSE shares the variables between the two calls
    # -------
    writer = tf.summary.FileWriter('./graphs/cool_variables', tf.get_default_graph())
    writer.close()
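
    After running the script, the graph written by tf.summary.FileWriter can be inspected in TensorBoard, for example with:

    tensorboard --logdir ./graphs/cool_variables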
    
    
    
