Building a Neural Network for a Classification Problem with TensorFlow

Author: volition4_4 | Published 2018-11-24 15:25

    To do a good job, you must first sharpen your tools, so let's start with setting up the environment.

    There are countless ways to install everything, but I recommend the setup below: it is simple, convenient, and avoids a lot of puzzling problems.
    Anaconda + Jupyter + tensorflow

    The installation walkthrough is in this video:
    https://www.youtube.com/watch?v=G2GqLWOERjQ (YouTube; may require a VPN in mainland China)
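
    Once everything is installed, a quick sanity check in a Jupyter cell (a minimal sketch; your version numbers will differ) confirms the packages import. Note that the code in this post uses the TensorFlow 1.x API (tf.placeholder, tf.contrib), so it will not run unmodified on TensorFlow 2.x.

    import tensorflow as tf
    import skimage
    import matplotlib
    import numpy as np
    
    # the rest of this post relies on the TF 1.x API (tf.placeholder, tf.contrib.layers)
    print(tf.__version__)
    print(skimage.__version__, matplotlib.__version__, np.__version__)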

    The dataset

    The dataset consists of Belgian traffic signs. It can be downloaded from https://btsd.ethz.ch/shareddata/ ; BelgiumTSC_Training (171.3 MB) and BelgiumTSC_Testing (76.5 MB) are our training data and test data, respectively.

    About the dataset

    The Training folder contains 62 subfolders, each holding a number of images. The images are our features, and the label is the name of the subfolder they sit in.
    The training goal is: given an image, decide which subfolder (class) it belongs to, i.e. a classification problem.
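
    Before writing any TensorFlow code it is worth sanity-checking this layout on disk. A minimal sketch (assuming the training archive was unpacked under the same ROOT_PATH used later in this post) that counts the classes and the images per class:

    import os
    
    # point this at wherever you unpacked BelgiumTSC_Training
    train_dir = "E:/machineLearning/tensorflow/data/BelgiumTSC_Training/Training"
    
    # one subfolder per class; the folder name is the label
    class_dirs = [d for d in sorted(os.listdir(train_dir))
                  if os.path.isdir(os.path.join(train_dir, d))]
    print(len(class_dirs))  # expect 62
    
    # number of .ppm images in each class folder
    counts = {d: len([f for f in os.listdir(os.path.join(train_dir, d)) if f.endswith(".ppm")])
              for d in class_dirs}
    print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])  # the five largest classes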

    Now, the good stuff: code!

    Import the required packages

    import tensorflow as tf
    from skimage import transform
    from skimage import io
    import matplotlib.pyplot as plt
    import os
    import numpy as np
    from skimage.color import rgb2gray
    import random
    

    Load the data and build the training features and labels

    def load_data(data_dir):
        # Get all subdirectories of data_dir. Each represents a label.
        directories = [d for d in os.listdir(data_dir) 
                       if os.path.isdir(os.path.join(data_dir, d))]
    #     print(directories)
        # Loop through the label directories and collect the data in
        # two lists, labels and images.
        labels = []
        images = []
        for d in directories:
            label_dir = os.path.join(data_dir, d)
            file_names = [os.path.join(label_dir, f) 
                          for f in os.listdir(label_dir) 
                          if f.endswith(".ppm")]
            for f in file_names:
                images.append(io.imread(f))
                labels.append(int(d))
        return images, labels
    
    ROOT_PATH = "E:/machineLearning/tensorflow/data/"  # change this to the path where you stored the data
    train_data_dir = os.path.join(ROOT_PATH, "BelgiumTSC_Training/Training")
    test_data_dir = os.path.join(ROOT_PATH, "BelgiumTSC_Testing/Testing")
    
    images, labels = load_data(train_data_dir)
    
    images_array = np.array(images)
    labels_array = np.array(labels)
    
    # Print the `images` dimensions
    print(images_array.ndim)
    
    # Print the number of elements in `images`
    print(images_array.size)
    
    # Print the first instance of `images`
    # print(images_array[0])
    
    # Print the `labels` dimensions
    print(labels_array.ndim)
    
    # Print the number of elements in `labels`
    print(labels_array.size)
    
    # Count the number of unique labels
    print(len(set(labels_array)))
    

    Feature extraction

    Resize the images:

    # Resize images
    images32 = [transform.resize(image, (28, 28)) for image in images]
    images32 = np.array(images32)
    print(images32[0])
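
    Note that transform.resize also converts the pixel values to floating point in the [0, 1] range (the shape/min/max printout in the next step confirms this). A quick check:

    print(images32.dtype)                   # float64 after transform.resize
    print(images32.min(), images32.max())   # values rescaled into [0, 1]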
    

    View the resized images

    # Import `matplotlib`
    import matplotlib.pyplot as plt
    
    # Pick a few example images to inspect
    traffic_signs = [300, 2250, 3650, 4000]
    
    # Fill the subplots with these images and print their shape, min and max values
    for i in range(len(traffic_signs)):
        plt.subplot(1, 4, i+1)
        plt.axis('off')
        plt.imshow(images32[traffic_signs[i]])
        plt.subplots_adjust(wspace=0.5)
        print("shape: {0}, min: {1}, max: {2}".format(images32[traffic_signs[i]].shape, 
                                                      images32[traffic_signs[i]].min(), 
                                                      images32[traffic_signs[i]].max()))
    plt.show()
    

    Convert the color images to grayscale

    # Convert `images32` to grayscale; the placeholder below expects shape [None, 28, 28]
    images32 = rgb2gray(np.array(images32))
    
    for i in range(len(traffic_signs)):
        plt.subplot(1, 4, i+1)
        plt.axis('off')
        plt.imshow(images32[traffic_signs[i]], cmap="gray")
        plt.subplots_adjust(wspace=0.5)
        
    plt.show()
    
    print(images32.shape)
    

    Train a neural network with TensorFlow (this was my first contact with it, so I did not fully understand all of the code below at the time)

    • First, set up placeholders for the inputs and the labels, because at this point we do not feed in the real data yet. The placeholders are only filled with values from the real dataset when we run the session.
    • Next, start building the network. We first flatten the input with flatten(), turning [None, 28, 28] into [None, 784]: each 28x28 grayscale image becomes one 784-dimensional vector.
    • With the flattened input we can add a fully connected layer that produces logits of shape [None, 62]. The logits are the raw, unscaled outputs of that layer; the softmax that turns them into probabilities is applied later, inside the loss.
    • Once the multilayer perceptron is built, we define the loss function. The choice of loss depends on the task; here we use sparse_softmax_cross_entropy_with_logits(), which computes the sparse softmax cross entropy between logits and labels. In other words, it measures the probability error in a discrete classification task in which the classes are mutually exclusive, so every sample belongs to exactly one class. reduce_mean() then averages this loss over the batch. (A small numeric sketch of this loss follows the code below.)
    • Then define a training optimizer. Popular choices are SGD (stochastic gradient descent), Adam, and RMSprop, each with its own hyperparameters to tune. In this example we use the Adam optimizer with a learning rate of 0.001.
    • Finally, before actually training on our data, we initialize the operations to be executed.
    import tensorflow as tf
    x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
    y = tf.placeholder(dtype = tf.int32, shape = [None])
    images_flat = tf.contrib.layers.flatten(x)
    logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels = y, logits = logits))
    train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
    correct_pred = tf.argmax(logits, 1)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.cast(correct_pred, tf.int32), y), tf.float32))
    
    print("images_flat: ", images_flat)
    print("logits: ", logits) 
    print("loss: ", loss)
    print("predicted_labels: ", correct_pred)
    
    

    Run the neural network

    sess = tf.Session()
    
    sess.run(tf.global_variables_initializer())
    
    for i in range(201):
        print('EPOCH', i)
        _, loss_value, accuracy_val = sess.run([train_op, loss, accuracy],
                                               feed_dict={x: images32, y: labels})
        if i % 10 == 0:
            print("Loss: ", loss_value, " Accuracy: ", accuracy_val)
        print('DONE WITH EPOCH')
    

    Evaluate the neural network

    # Pick 10 random images
    sample_indexes = random.sample(range(len(images32)), 10)
    sample_images = [images32[i] for i in sample_indexes]
    sample_labels = [labels[i] for i in sample_indexes]
    
    # Run the "predicted_labels" op.
    predicted = sess.run([correct_pred], feed_dict={x: sample_images})[0]
                            
    # Print the real and predicted labels
    print(sample_labels)
    print(predicted)
    

    Display the predictions

    # Display the predictions and the ground truth visually.
    fig = plt.figure(figsize=(10, 10))
    for i in range(len(sample_images)):
        truth = sample_labels[i]
        prediction = predicted[i]
        plt.subplot(5, 2,1+i)
        plt.axis('off')
        color='green' if truth == prediction else 'red'
        plt.text(40, 10, "Truth:        {0}\nPrediction: {1}".format(truth, prediction), 
                 fontsize=12, color=color)
        plt.imshow(sample_images[i])
    
    plt.show()
    

    Predict on the test set

    # Load the test data
    test_images, test_labels = load_data(test_data_dir)
    
    # Transform the images to 28 by 28 pixels
    test_images28 = [transform.resize(image, (28, 28)) for image in test_images]
    
    # Convert to grayscale
    from skimage.color import rgb2gray
    test_images28 = rgb2gray(np.array(test_images28))
    
    # Run predictions against the full test set.
    predicted = sess.run([correct_pred], feed_dict={x: test_images28})[0]
    
    # Calculate correct matches 
    match_count = sum([int(y == y_) for y, y_ in zip(test_labels, predicted)])
    
    # Calculate the accuracy
    accuracy = match_count / len(test_labels)
    
    # Print the accuracy
    print("Accuracy: {:.3f}".format(accuracy))
    
    Prediction result (output screenshot in the original post)
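
    As an extra check (not in the original post), you can reuse `test_labels` and `predicted` from the cell above to break the accuracy down per class and see which signs the model struggles with:

    from collections import Counter
    
    # totals and correct predictions for each true label
    totals = Counter(test_labels)
    correct = Counter(t for t, p in zip(test_labels, predicted) if t == p)
    
    # per-class accuracy, worst classes first
    per_class = sorted((correct[c] / totals[c], c) for c in totals)
    for acc, c in per_class[:5]:
        print("class {:2d}: accuracy {:.2f} ({} images)".format(c, acc, totals[c]))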

    Close the session

    sess.close()
    

    The prediction accuracy is roughly 57.8%.

    Reference:
    https://www.datacamp.com/community/tutorials/tensorflow-tutorial#comment-2999
