tensorflow pb, meta, ckpt: what's the difference?

Author: 小姐姐催我改备注 | Published 2019-05-21 09:26

    This is mainly about deployment. A quick recap: tf.train.Saver writes a checkpoint (the .meta file stores the graph structure, while the .data/.index files store the variable values), whereas freezing produces a single .pb file with the graph and the weights baked in as constants. Let's look at the differences in practice.

    import tensorflow as tf
    
    v1 = tf.Variable(tf.constant(1.,shape=[1]),name='v1')
    v2 = tf.Variable(tf.constant(2.,shape=[1]),name='v2')
    result = v1 + v2
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        graph_def = tf.get_default_graph().as_graph_def()
        # Freeze: replace the two Variables with Const nodes, keeping only the
        # subgraph needed to compute the output node 'add'
        output_graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['add'])

        with tf.gfile.GFile('Model/freeze_model.pb', 'wb') as f:
            f.write(output_graph_def.SerializeToString())
    

    The code above saves the model as a frozen .pb file. Let's look at the corresponding graph structure:


    [Figure: graph of the frozen .pb file]

    Here we can see there is only one node, corresponding to the + operation, and both variables have been frozen into constants.
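
    To make the round trip concrete, here is a minimal sketch (TF 1.x, assuming the freeze_model.pb written above) of loading the frozen graph back and running it:

    import tensorflow as tf

    with tf.gfile.GFile('Model/freeze_model.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        # return_elements pulls out the output tensor; name='' avoids the
        # default 'import/' prefix on node names
        result, = tf.import_graph_def(graph_def, return_elements=['add:0'], name='')

    with tf.Session(graph=graph) as sess:
        print(sess.run(result))  # [3.]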

    Now let's look at the graph stored in the corresponding .meta file:

    import tensorflow as tf
    
    v1 = tf.Variable(tf.constant(1.0,shape=[1]),name='v1')
    v2 = tf.Variable(tf.constant(2.0,shape=[1]),name='v2')
    result = v1 + v2
    
    saver = tf.train.Saver()
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(result))
        saver.save(sess, 'Model/model.ckpt')  # writes the .meta, .data, .index and checkpoint files
    

    The code above saves a checkpoint (including the .meta file). The corresponding graph:


    [Figure: graph restored from the .meta file]

    We can see a big tangle of data flow here, while all we need is that one Add node. That is exactly the problem: for deployment we don't need such a complex graph; we only need the input, the output, and the forward computation in between. How do we trim the graph down to just what we need?
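
    For reference, a minimal sketch (assuming the Model/model.ckpt saved above) of how this tangle is normally consumed: import_meta_graph rebuilds the full graph from the .meta file, then restore loads the weights:

    import tensorflow as tf

    # rebuild the graph structure from the .meta file
    saver = tf.train.import_meta_graph('Model/model.ckpt.meta')
    with tf.Session() as sess:
        saver.restore(sess, 'Model/model.ckpt')  # load the variable values
        add_op = tf.get_default_graph().get_tensor_by_name('add:0')
        print(sess.run(add_op))  # [3.]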

    2. What does freezing to pb actually do?

    First, let's look at what happens when the model is saved to pb. The official explanation is that the variables are frozen into constants. To understand what a variable is here: in y = wx, w is the variable, x is the input, and y is the output; during training we adjust w based on different (x, y) pairs. At deployment time, however, we want w fixed: given x, we just want the corresponding y. That settles the first question.

    Second question: what are those Assign and Identity ops?
    https://www.jianshu.com/p/bebcdfb74fb1
    The article above covers this in detail. In short, Assign and Identity ops exist to serve variables; they are colocated with them. Once the variables are frozen into constants, their colocated ops should disappear as well.
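
    A quick way to verify this (a sketch, reusing the freeze_model.pb from above) is to list the op types in the frozen GraphDef: the Variable/Assign/Identity nodes are gone, replaced by Const:

    import tensorflow as tf

    with tf.gfile.GFile('Model/freeze_model.pb', 'rb') as f:
        gd = tf.GraphDef()
        gd.ParseFromString(f.read())

    for node in gd.node:
        print(node.op, node.name)
    # expected output:
    #   Const v1
    #   Const v2
    #   Add add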

    3. Placeholders: how should we handle tf.placeholder, and what does it become after freezing?

    from __future__ import print_function
    
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('MNIST_data/',one_hot=True)
    
    import tensorflow as tf
    
    learning_rate = 0.001
    batch_size = 100
    displaystep = 1
    model_path = './model/model.ckpt'

    n_hidden_1 = 256
    n_hidden_2 = 256
    n_input = 784
    n_classes = 10
    
    x = tf.placeholder('float',[None,n_input])
    y = tf.placeholder('float',[None,n_classes])
    
    def multilayer_predict(x, weights, bias):
        # two ReLU hidden layers followed by a linear output layer
        layer1 = tf.add(tf.matmul(x,weights['h1']),bias['b1'])
        layer1 = tf.nn.relu(layer1)
    
        layer2 = tf.add(tf.matmul(layer1,weights['h2']),bias['b2'])
        layer2 = tf.nn.relu(layer2)
    
        outlayer = tf.matmul(layer2,weights['out']) + bias['out']
        return outlayer
    
    weights = {
        'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
        'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
    }
    
    bias = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1])),
        'b2': tf.Variable(tf.random_normal([n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_classes]))
    }
    
    pred = multilayer_predict(x, weights, bias)
    
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=pred))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
    
    init_op = tf.global_variables_initializer()
    
    saver = tf.train.Saver()
    print("Starting 1st session")
    with tf.Session() as sess:
        sess.run(init_op)
    
        for epoch in range(3):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples/batch_size)
    
            for i in range(total_batch):
                batch_x,batch_y = mnist.train.next_batch(batch_size)
                _,c = sess.run( [optimizer,cost],feed_dict={x:batch_x,y:batch_y})
    
                avg_cost += c/total_batch
    
            if epoch % displaystep == 0:
                print('Epoch:', '%04d' % (epoch + 1), 'cost=', '{:.9f}'.format(avg_cost))
    
        print('First optimizer  finished!')
    
        correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_pred, 'float'))
        print('Accuracy:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
    
        save_path = saver.save(sess,model_path)
        print('Model saved in file: %s  '%save_path)
    
    print('Starting 2nd session......')
    with tf.Session() as sess:
        sess.run(init_op)  # redundant here: the restore below overwrites these values
    
        saver.restore(sess,model_path)
        print('Model restored from file :%s'%model_path)
    
        for epoch in range(7):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples / batch_size)
    
            for i in range(total_batch):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
    
                avg_cost += c / total_batch
    
            if epoch % displaystep == 0:
                print('Epoch:', '%04d' % (epoch + 1), 'cost=', '{:.9f}'.format(avg_cost))
    
        print('Second optimizer finished!')
    
    
        correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_pred, 'float'))
        print('Accuracy:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
    
    
    
        graph_def = tf.get_default_graph().as_graph_def()
        # print every op name to locate the output node for freezing
        for key in tf.get_default_graph().get_operations():
            print(key.name)
        # 'add_2' is the final Add, i.e. the linear output layer of the MLP
        output_graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['add_2'])
        with tf.gfile.GFile('Model/freeze_model.pb', 'wb') as f:
            f.write(output_graph_def.SerializeToString())
    

    First, two graphs:

    [Figures: the meta graph vs. the pb graph]

    Comparing the two, the meta graph is a huge tangle and cannot be deployed as is, while the pb file contains only the forward pass: a single input and a single output.
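
    A minimal sketch of exercising that single input/output pair (assuming the default node names TF assigned in the script above: the x placeholder is 'Placeholder', the output is 'add_2'):

    import numpy as np
    import tensorflow as tf

    with tf.gfile.GFile('Model/freeze_model.pb', 'rb') as f:
        gd = tf.GraphDef()
        gd.ParseFromString(f.read())

    with tf.Graph().as_default() as g:
        x_in, logits = tf.import_graph_def(
            gd, return_elements=['Placeholder:0', 'add_2:0'], name='')

    with tf.Session(graph=g) as sess:
        fake = np.random.rand(1, 784).astype(np.float32)
        print(sess.run(logits, feed_dict={x_in: fake}).shape)  # (1, 10)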

    4. Calling the model with OpenCV DNN

    #include <opencv2/opencv.hpp>
    #include <opencv2/dnn.hpp>
    #include <iostream>
    using namespace std;
    using namespace cv;

    int main()
    {
        String weights = "D:/github/id_demo/id_demo/id_demo/model_lib/frozen_model.pb";
        // String prototxt = "D:/github/id_demo/id_demo/id_demo/model_lib/frozen_model.pbtxt";  // not needed when loading a frozen .pb

        // load the frozen TensorFlow graph
        dnn::Net net = cv::dnn::readNetFromTensorflow(weights);

        cout << "Network loaded." << endl;  // smoke test: we got here without throwing
        return 0;
    }
    

    The program runs fine here. Now let's feed it an actual input. For that we'll switch to a different model: MNIST samples are flat 784-dimensional vectors, which are less convenient than images.
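
    For reference, a sketch of feeding an input through the same OpenCV DNN API, shown here via the Python bindings (the model path and test image are placeholders):

    import cv2

    net = cv2.dnn.readNetFromTensorflow('model_lib/frozen_model.pb')
    img = cv2.imread('test.jpg')  # any 3-channel test image
    # blobFromImage resizes the image and packs it as an NCHW float blob
    blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255, size=(224, 224))
    net.setInput(blob)
    out = net.forward()  # runs the forward pass
    print(out.shape)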

    5. Cat vs. dog classification

    import tensorflow as tf
    import alexnet
    import os
    import numpy as np
    import input_data
    n_classes = 2
    image_w =224
    image_h =224
    batch_size =32
    capacity =  128
    max_step= 3000
    learning_rate= 0.01
    
    def run_training():
        traindir = 'F:/msra/kaggle/train/'
        logs_train_dir = './logs_dir'
    
        train_list,label_list = input_data.get_files(traindir)
        train_batch,label_batch = input_data.get_batch(train_list,label_list,image_w,image_h,batch_size,capacity)
    
        train_logits = alexnet.inference(train_batch, batch_size, n_classes)
        train_loss = alexnet.losses(train_logits, label_batch)
        # bug fix: training() minimizes the loss, not the logits
        train_op = alexnet.training(train_loss, learning_rate)
        train_acc = alexnet.evaluation(train_logits, label_batch)
    
        summary_merge_op = tf.summary.merge_all()
        init_op = tf.global_variables_initializer()
        saver = tf.train.Saver()
    
    
        with tf.Session() as sess:
            sess.run(init_op)
            train_writer = tf.summary.FileWriter(logs_train_dir, sess.graph)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess,coord=coord)
    
            try:
                for step in np.arange(max_step):
                    if coord.should_stop():
                        break
                    _,tra_loss,tra_acc = sess.run([train_op,train_loss,train_acc])
    
                    if step % 100 ==0:
                        print("Step %d,------,train loss: %1.5f,-----train acc:%1.5f"%(step,tra_loss,tra_acc))
                        summary_str = sess.run(summary_merge_op)
                        train_writer.add_summary(summary_str,global_step=step)
                    if step % 2000 == 0 or (step + 1) == max_step:
                        checkpoint_path = os.path.join(logs_train_dir, "model.ckpt")
                        saver.save(sess, checkpoint_path, global_step=step)
    
                graph_def = tf.get_default_graph().as_graph_def()
                # print op names to locate the output node
                for key in tf.get_default_graph().get_operations():
                    print(key.name)
                # freeze on the final softmax node of the AlexNet graph
                output_graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['softmax/Softmax'])
                with tf.gfile.GFile(os.path.join(logs_train_dir, 'freeze_model.pb'), 'wb') as f:
                    f.write(output_graph_def.SerializeToString())
            finally:
                coord.request_stop()
            coord.join(threads)
    
    run_training()
    

    The network structure is AlexNet:

    '''
    alex net
    '''
    import  tensorflow as tf
    
    def print_conv_name(conv):
       # helper: print an op's name and its output shape
       print(conv.op.name, ' ', conv.get_shape().as_list())
    
    def inference(images,batch_size,n_classes):
       parameters = []
       with tf.name_scope('conv1') as scope:
           kernel = tf.Variable(tf.truncated_normal([11,11,3,64],mean=0,stddev=0.1,dtype=tf.float32),name='weights')
           conv = tf.nn.conv2d(images,kernel,[1,4,4,1],padding='SAME')
           biases = tf.Variable(tf.constant(0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
           bias = tf.nn.bias_add(conv,biases)
           conv1 = tf.nn.relu(bias,name=scope)
    
           parameters += [kernel, biases]
           print_conv_name(conv1)
    
           pool1 = tf.nn.max_pool(conv1,ksize=[1,3,3,1],strides=[1,2,2,1],padding='VALID',name='pool1')
           print_conv_name(pool1)
    
       with tf.name_scope('conv2') as scope:
           kernel = tf.Variable(tf.truncated_normal([5,5,64,192],stddev=0.1,dtype=tf.float32),name='weights')
           conv = tf.nn.conv2d(pool1,kernel,[1,1,1,1],padding='SAME')
           biases = tf.Variable(tf.constant(0,shape=[192],dtype=tf.float32),trainable=True,name='biases')
           bias = tf.nn.bias_add(conv, biases)
           conv2 = tf.nn.relu(bias, name=scope)
    
           parameters += [kernel, biases]
           print_conv_name(conv2)
    
           pool2 = tf.nn.max_pool(conv2,ksize=[1,3,3,1],strides=[1,2,2,1],padding='VALID',name='pool2')
           print_conv_name(pool2)
    
       with tf.name_scope('conv3') as scope:
           kernel = tf.Variable(tf.truncated_normal([3,3,192,384],stddev=0.1,dtype=tf.float32),name='weights')
           conv = tf.nn.conv2d(pool2,kernel,[1,1,1,1],padding='SAME')
           biases = tf.Variable(tf.constant(0,shape=[384],dtype=tf.float32),trainable=True ,name='biases')
           bias = tf.nn.bias_add(conv,biases)
           conv3 = tf.nn.relu(bias,name=scope)
           print_conv_name(conv3)
           parameters += [kernel, biases]
    
       with tf.name_scope('conv4') as scope:
           kernel = tf.Variable(tf.truncated_normal([3,3,384,256],stddev=0.1,dtype=tf.float32),name='weights')
           conv = tf.nn.conv2d(conv3,kernel,[1,1,1,1],padding='SAME')
           biases = tf.Variable(tf.constant(0,shape=[256],dtype=tf.float32),trainable=True,name='biases')
           bias = tf.nn.bias_add(conv,biases)
           conv4 = tf.nn.relu(bias,name=scope)
           print_conv_name(conv4)
           parameters += [kernel, biases]
    
    
       with tf.name_scope('conv5') as scope:
           kernel = tf.Variable(tf.truncated_normal([3,3,256,256],stddev=0.1,dtype=tf.float32),name='weights')
           conv = tf.nn.conv2d(conv4,kernel,[1,1,1,1],padding='SAME')
           biases = tf.Variable(tf.constant(0,shape=[256],dtype=tf.float32),name='biases')
           bias = tf.nn.bias_add(conv,biases)
           conv5 = tf.nn.relu(bias,name=scope)
           print_conv_name(conv5)
           parameters += [kernel, biases]
    
           pool5 = tf.nn.max_pool(conv5,ksize=[1,3,3,1],strides=[1,2,2,1],padding='VALID',name='pool5')
           print_conv_name(pool5)
    
       with tf.name_scope('fc1') as scope:
           reshaped = tf.reshape(pool5, (batch_size, -1))
           dim = reshaped.get_shape()[1].value
           print(dim)  # flattened feature size feeding the first FC layer
           weight6 =tf.Variable(tf.truncated_normal([dim,4096],stddev=0.1,dtype=tf.float32),name='weights6')
           ful_bias1 = tf.Variable(tf.constant(0,shape=[4096],dtype=tf.float32),name='ful_bias1')
           ful_con1 = tf.nn.relu(tf.add(tf.matmul(reshaped, weight6), ful_bias1))
    
       with tf.name_scope('fc2') as scope:
           weights = tf.Variable(tf.truncated_normal([4096,4096],stddev=0.1,dtype=tf.float32),name='weights')
           ful_bias2 = tf.Variable(tf.constant(0,shape=[4096],dtype=tf.float32),name='biases')
           ful_con2 = tf.nn.relu(tf.add(tf.matmul(ful_con1,weights),ful_bias2))
    
       with tf.name_scope('softmax') as scope:
           weights = tf.Variable(tf.truncated_normal([4096, n_classes],stddev=0.1, dtype=tf.float32), name='weights')
           ful_bias3 = tf.Variable(tf.constant(0, shape=[n_classes], dtype=tf.float32), name='biases')
           ful_con3 = tf.add(tf.matmul(ful_con2,weights),ful_bias3)
           softmax_out = tf.nn.softmax(ful_con3)
    
       return softmax_out
    
    # image_size =224
    # images = tf.Variable(tf.random_normal([128,image_size,image_size,3],dtype=tf.float32,stddev=0.1))
    # output = inference(images,128,10)
    # init_op = tf.global_variables_initializer()
    # with tf.Session() as sees:
    #     sees.run(init_op)
    #     out = sees.run(output)
    #     for key in tf.get_default_graph().get_operations():
    #         print(key.name)
    #     print(out)
    
    def losses(logits,labels):
       with tf.name_scope('loss') as scope:
           cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels, name='cross_entropy')
           loss = tf.reduce_mean(cross_entropy, name='loss')
           tf.summary.scalar('loss',loss)
       return loss
    
    def training(loss,learning_rate):
       with tf.name_scope('optimizer'):
           optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
           global_step= tf.Variable(0,name='global_step',trainable=False)
           train_op = optimizer.minimize(loss,global_step=global_step)
       return train_op
    
    def evaluation(logits,labels):
       with tf.variable_scope('accuracy') as scope:
           correct = tf.nn.in_top_k(logits,labels,1)
           correct = tf.cast(correct,tf.float16)
           accuracy = tf.reduce_mean(correct)
           tf.summary.scalar('accuracy',accuracy)
       return accuracy
    

    Again, compare the two graphs directly:

    [Figures: the pb graph vs. the meta graph]

    Now a C++ test:


    [Figure: C++ test output]

    This throws an error. My guess was an input problem; the next step is to sort out the input.
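
    One way to diagnose it (a sketch, assuming the freeze_model.pb written by the training script) is to print the graph's entry points, i.e. nodes that consume no other node. In this case they are the training input pipeline (file-name constants and queue ops) rather than a clean placeholder, which OpenCV cannot handle:

    import tensorflow as tf

    with tf.gfile.GFile('logs_dir/freeze_model.pb', 'rb') as f:
        gd = tf.GraphDef()
        gd.ParseFromString(f.read())

    # entry points of the graph: every node with no inputs
    for node in gd.node:
        if not node.input:
            print(node.op, node.name)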


    [Figure: pb graph]
    My guess was that the input had to be turned into this form before the call would go through.

    Turning the input into a placeholder form solves it.
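
    A sketch of that fix (hedged: alexnet and ./logs_dir come from the training script above, and 'softmax/Softmax' is the node name printed during freezing): rebuild the forward graph with a placeholder input instead of the training queue batch, restore the trained weights, and freeze again:

    import tensorflow as tf
    import alexnet

    # a clean inference graph: a placeholder replaces the queue-based input
    images = tf.placeholder(tf.float32, [1, 224, 224, 3], name='input')
    logits = alexnet.inference(images, 1, 2)

    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('./logs_dir'))
        graph_def = tf.get_default_graph().as_graph_def()
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, graph_def, ['softmax/Softmax'])
        with tf.gfile.GFile('./logs_dir/frozen_model.pb', 'wb') as f:
            f.write(output_graph_def.SerializeToString())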

    6. Trimming the input
