TensorFlow in Practice: Training a Classifier with a Neural Network

Author: lyy0905 | Published 2017-07-14 22:09

    Task:

    Train a neural network classifier with TensorFlow. The data points to be classified are shown below:


    [Figure: spiral data points]

    Approach:

    The data fall into three classes whose points spiral around one another, so they are clearly not linearly separable; a nonlinear classifier is needed, and a neural network is used here.
    The input points are two-dimensional, so each point's only raw features are its x and y coordinates. The network designed here has two hidden layers with 50 neurons each, which is enough to capture the higher-order structure of the data (in fact, 10 neurons per layer would suffice). The output layer is a softmax (multinomial logistic) regression that predicts each point's class (red, yellow, or blue) from the 50 features computed by the second hidden layer.
    With more training data one would normally train the network with stochastic (mini-batch) gradient descent; since the training set here is small (300 points), plain full-batch gradient descent is used instead. A mini-batch variant is sketched below for comparison.
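
    For reference, here is a minimal mini-batch sketch (my addition, not part of the original post). It assumes the TF 1.x API used throughout this article and reuses the X and labels arrays built in the data-generation code below; a single softmax layer stands in for the full two-hidden-layer network, since only the feeding mechanism changes:

    import numpy as np
    import tensorflow as tf
    
    batch_size = 32  # illustrative value
    x_ph = tf.placeholder(tf.float32, shape=(None, 2))       # input batch
    labels_ph = tf.placeholder(tf.float32, shape=(None, 3))  # one-hot label batch
    
    w = tf.Variable(tf.truncated_normal([2, 3], stddev=0.1))
    b = tf.Variable(tf.zeros([3]))
    loss_mb = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        labels=labels_ph, logits=tf.matmul(x_ph, w) + b))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss_mb)
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(1000):
            # sample a random mini-batch instead of feeding the full set
            idx = np.random.choice(len(X), batch_size)
            sess.run(train_op, feed_dict={x_ph: X[idx], labels_ph: labels[idx]})

    The original full-batch code follows.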

    # Import packages and initialize
    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    
    %matplotlib inline
    plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'
    
    
    # Generate spiral-shaped, linearly non-separable data points
    np.random.seed(0)
    N = 100 # number of points per class
    D = 2 # input dimensionality
    K = 3 # number of classes
    X = np.zeros((N*K,D))
    num_train_examples = X.shape[0]
    y = np.zeros(N*K, dtype='uint8')
    for j in range(K): # xrange is Python 2 only; use range
      ix = range(N*j,N*(j+1))
      r = np.linspace(0.0,1,N) # radius
      t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
      X[ix] = np.c_[r*np.sin(t), r*np.cos(t)] # polar (r, t) -> Cartesian (x, y)
      y[ix] = j
    fig = plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
    plt.xlim([-1,1])
    plt.ylim([-1,1])
    
    [Figure: spiral data points, colored by class]

    Print the shapes of the input X and the one-hot label matrix:

    num_label = 3
    labels = (np.arange(num_label) == y[:,None]).astype(np.float32)
    labels.shape
    
    (300, 3)
    
    X.shape
    
    (300, 2)
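
    The broadcast comparison (np.arange(num_label) == y[:,None]) is a compact way to one-hot encode the labels. A quick illustrative check (my addition):

    y_small = np.array([0, 2, 1])
    one_hot = (np.arange(3) == y_small[:, None]).astype(np.float32)
    print(one_hot)
    # [[1. 0. 0.]   <- class 0
    #  [0. 0. 1.]   <- class 2
    #  [0. 1. 0.]]  <- class 1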
    

    Build the neural network with TensorFlow

    import math
    
    N = 100 # number of points per class
    D = 2 # input dimensionality
    num_label = 3 # number of classes
    num_data = N * num_label
    hidden_size_1 = 50
    hidden_size_2 = 50
    
    beta = 0.001 # L2 regularization coefficient
    learning_rate = 0.1 # learning rate
    
    labels = (np.arange(num_label) == y[:,None]).astype(np.float32)
    
    graph = tf.Graph()
    with graph.as_default():
        x = tf.constant(X.astype(np.float32))
        tf_labels = tf.constant(labels)
        
        # Hidden layer 1
        hidden_layer_weights_1 = tf.Variable(
        tf.truncated_normal([D, hidden_size_1], stddev=math.sqrt(2.0/num_data)))
        hidden_layer_bias_1 = tf.Variable(tf.zeros([hidden_size_1]))
        
        # Hidden layer 2
        hidden_layer_weights_2 = tf.Variable(
        tf.truncated_normal([hidden_size_1, hidden_size_2], stddev=math.sqrt(2.0/hidden_size_1)))
        hidden_layer_bias_2 = tf.Variable(tf.zeros([hidden_size_2]))
        
        # Output layer
        out_weights = tf.Variable(
        tf.truncated_normal([hidden_size_2, num_label], stddev=math.sqrt(2.0/hidden_size_2)))
        out_bias = tf.Variable(tf.zeros([num_label]))
        
        z1 = tf.matmul(x, hidden_layer_weights_1) + hidden_layer_bias_1
        h1 = tf.nn.relu(z1)
        
        z2 = tf.matmul(h1, hidden_layer_weights_2) + hidden_layer_bias_2
        h2 = tf.nn.relu(z2)
        
        logits = tf.matmul(h2, out_weights) + out_bias
        
        # L2 regularization on the weight matrices (biases excluded)
        regularization = tf.nn.l2_loss(hidden_layer_weights_1) + tf.nn.l2_loss(hidden_layer_weights_2) + tf.nn.l2_loss(out_weights)
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels, logits=logits) + beta * regularization)
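        # Note: beta * regularization is a scalar broadcast over the
        # per-example cross-entropy, so this loss is equivalent to
        # tf.reduce_mean(cross_entropy) + beta * regularization.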
        
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        
        train_prediction = tf.nn.softmax(logits)
    
        # keep handles to all variables so their trained values can be read out later
        weights = [hidden_layer_weights_1, hidden_layer_bias_1, hidden_layer_weights_2, hidden_layer_bias_2, out_weights, out_bias]
            
        
    

    The previous step only built the skeleton of the network; it still has to be trained. Every 1000 training steps, the cross-entropy loss and the training accuracy are printed.

    num_steps = 50000
    
    def accuracy(predictions, labels):
        # percentage of points whose argmax prediction matches the one-hot label
        return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
              / predictions.shape[0])
    
    def relu(x):
        # numpy ReLU, mirroring tf.nn.relu for the plotting code below
        return np.maximum(0,x)
              
    
    with tf.Session(graph=graph) as session:
        tf.global_variables_initializer().run()
        print('Initialized')
        for step in range(num_steps):
            _, l, predictions = session.run([optimizer, loss, train_prediction])
        
            if (step % 1000 == 0):
                print('Loss at step %d: %f' % (step, l))
                print('Training accuracy: %.1f%%' % accuracy(
                    predictions, labels))
            
        w1, b1, w2, b2, w3, b3 = weights
        # Plot the classifier's decision regions
        h = 0.02  # grid step size
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                             np.arange(y_min, y_max, h))
    
        # Forward pass over the grid in numpy, replicating the TF graph
        grid = np.c_[xx.ravel(), yy.ravel()]
        h1_np = relu(np.dot(grid, w1.eval()) + b1.eval())
        h2_np = relu(np.dot(h1_np, w2.eval()) + b2.eval())
        Z = np.argmax(np.dot(h2_np, w3.eval()) + b3.eval(), axis=1).reshape(xx.shape)
        fig = plt.figure()
        plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
        plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
        plt.xlim(xx.min(), xx.max())
        plt.ylim(yy.min(), yy.max())
    
    
    Initialized
    Loss at step 0: 1.132545
    Training accuracy: 43.7%
    Loss at step 1000: 0.257016
    Training accuracy: 94.0%
    Loss at step 2000: 0.165511
    Training accuracy: 98.0%
    Loss at step 3000: 0.149266
    Training accuracy: 99.0%
    Loss at step 4000: 0.142311
    Training accuracy: 99.3%
    Loss at step 5000: 0.137762
    Training accuracy: 99.3%
    Loss at step 6000: 0.134356
    Training accuracy: 99.3%
    Loss at step 7000: 0.131588
    Training accuracy: 99.3%
    Loss at step 8000: 0.129299
    Training accuracy: 99.3%
    Loss at step 9000: 0.127340
    Training accuracy: 99.3%
    Loss at step 10000: 0.125686
    Training accuracy: 99.3%
    Loss at step 11000: 0.124293
    Training accuracy: 99.3%
    Loss at step 12000: 0.123130
    Training accuracy: 99.3%
    Loss at step 13000: 0.122149
    Training accuracy: 99.3%
    Loss at step 14000: 0.121309
    Training accuracy: 99.3%
    Loss at step 15000: 0.120542
    Training accuracy: 99.3%
    Loss at step 16000: 0.119895
    Training accuracy: 99.3%
    Loss at step 17000: 0.119335
    Training accuracy: 99.3%
    Loss at step 18000: 0.118836
    Training accuracy: 99.3%
    Loss at step 19000: 0.118376
    Training accuracy: 99.3%
    Loss at step 20000: 0.117974
    Training accuracy: 99.3%
    Loss at step 21000: 0.117601
    Training accuracy: 99.3%
    Loss at step 22000: 0.117253
    Training accuracy: 99.3%
    Loss at step 23000: 0.116887
    Training accuracy: 99.3%
    Loss at step 24000: 0.116561
    Training accuracy: 99.3%
    Loss at step 25000: 0.116265
    Training accuracy: 99.3%
    Loss at step 26000: 0.115995
    Training accuracy: 99.3%
    Loss at step 27000: 0.115750
    Training accuracy: 99.3%
    Loss at step 28000: 0.115521
    Training accuracy: 99.3%
    Loss at step 29000: 0.115310
    Training accuracy: 99.3%
    Loss at step 30000: 0.115111
    Training accuracy: 99.3%
    Loss at step 31000: 0.114922
    Training accuracy: 99.3%
    Loss at step 32000: 0.114743
    Training accuracy: 99.3%
    Loss at step 33000: 0.114567
    Training accuracy: 99.3%
    Loss at step 34000: 0.114401
    Training accuracy: 99.3%
    Loss at step 35000: 0.114242
    Training accuracy: 99.3%
    Loss at step 36000: 0.114086
    Training accuracy: 99.3%
    Loss at step 37000: 0.113933
    Training accuracy: 99.3%
    Loss at step 38000: 0.113785
    Training accuracy: 99.3%
    Loss at step 39000: 0.113644
    Training accuracy: 99.3%
    Loss at step 40000: 0.113504
    Training accuracy: 99.3%
    Loss at step 41000: 0.113366
    Training accuracy: 99.3%
    Loss at step 42000: 0.113229
    Training accuracy: 99.3%
    Loss at step 43000: 0.113096
    Training accuracy: 99.3%
    Loss at step 44000: 0.112966
    Training accuracy: 99.3%
    Loss at step 45000: 0.112838
    Training accuracy: 99.3%
    Loss at step 46000: 0.112711
    Training accuracy: 99.3%
    Loss at step 47000: 0.112590
    Training accuracy: 99.3%
    Loss at step 48000: 0.112472
    Training accuracy: 99.3%
    Loss at step 49000: 0.112358
    Training accuracy: 99.3%
    
    [Figure: decision regions of the trained classifier (分类器.png)]
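
    To reuse the trained network once the session has closed, the variable values can be read out as numpy arrays while the session is still open. A minimal sketch (my addition; the helper name predict is hypothetical):

    # Inside the `with tf.Session(graph=graph) as session:` block, after training:
    W1, B1, W2, B2, W3, B3 = [v.eval() for v in weights]
    
    def predict(points):
        # Classify an [n, 2] array of points with the extracted numpy weights,
        # using the same forward pass as the plotting code above.
        h1 = relu(np.dot(points, W1) + B1)
        h2 = relu(np.dot(h1, W2) + B2)
        return np.argmax(np.dot(h2, W3) + B3, axis=1)
    
    # Example: predict(np.array([[0.5, -0.2]])) returns the predicted class index.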
