TensorFlow Simple Data Fitting

Author: ac619467fef3 | Published 2019-02-02 14:21

    1. Inspect the raw data with a plot

    Distribution of the source data:
    import numpy as np
    import matplotlib.pyplot as plt

    # Load the sample data: X holds 2-D points, d holds their labels
    files = np.load("/TensorFlow作业/homework.npz")
    X = files['X']
    label = files['d']
    n_samples = X.shape[0]  # number of points; avoids shadowing built-in len()

    # Scatter plot of the raw data, colored by label
    plt.scatter(X[:, 0], X[:, 1], c=label)
    plt.show()
    

    2. Fitting with a three-layer network

    import numpy as np
    import matplotlib.pyplot as plt

    # Reload the same data as in part 1
    files = np.load("/TensorFlow作业/homework.npz")
    X = files['X']
    label = files['d']
    n_samples = X.shape[0]
    
    # Build one-hot labels: points in quadrants 1 and 3 form one class,
    # all other points the other class (an XOR-like pattern)
    label_one_hot = []
    for x1, x2 in X:
        if x1 > 0 and x2 > 0:
            label_one_hot.append([1, 0])
        elif x1 < 0 and x2 < 0:
            label_one_hot.append([1, 0])
        else:
            label_one_hot.append([0, 1])
    label_one_hot = np.array(label_one_hot)
    
    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    x = tf.placeholder(tf.float32, [None, 2], name="input_x")
    d = tf.placeholder(tf.float32, [None, 2], name="input_y")

    # Two hidden layers with ReLU; with a sigmoid activation here,
    # the results may not be as good
    net = slim.fully_connected(x, 4, activation_fn=tf.nn.relu,
                               scope='full1', reuse=False)
    net = slim.fully_connected(net, 4, activation_fn=tf.nn.relu,
                               scope='full4', reuse=False)
    # Output layer: raw logits, no activation
    y = slim.fully_connected(net, 2, activation_fn=None,
                             scope='full5', reuse=False)

    # Squared-error (L2) alternative:
    # loss = tf.reduce_mean(tf.square(y - d))
    # Hand-rolled softmax cross-entropy (see the discussion below for a
    # numerically stable variant)
    loss = tf.reduce_mean(-d * tf.log(tf.nn.softmax(y)))

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(d, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Plain gradient descent with learning rate 0.01
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    gradient = optimizer.compute_gradients(loss, var_list=tf.trainable_variables())
    train_step = optimizer.apply_gradients(gradient)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    losses = []
    accuracies = []
    for itr in range(20000):
        # Sample a mini-batch of 20 random points
        idx = np.random.randint(0, n_samples, 20)
        inx = X[idx]
        ind = label_one_hot[idx]
        if itr % 10 == 0:
            _accuracy = sess.run(accuracy, feed_dict={d: label_one_hot, x: X})
            print("Iteration: {}  Accuracy: {}".format(itr, _accuracy))
            _loss = sess.run(loss, feed_dict={d: ind, x: inx})
            losses.append(_loss)
            accuracies.append(_accuracy)
        sess.run(train_step, feed_dict={d: ind, x: inx})

    # Predict the class of the point (0.2, 0.2)
    predict = sess.run(tf.argmax(y, 1), feed_dict={x: [[0.2, 0.2]]})
    print("[0.2, 0.2] predicted class: %d" % predict[0])

    # Plot the recorded loss and accuracy curves
    plt.plot(losses, label="loss")
    plt.plot(accuracies, label="accuracy")
    plt.legend()
    plt.show()
    

    讨论

    1. Learning rate (see the sketch below)


    (Figures: loss/accuracy curves with GradientDescentOptimizer at learning rate 0.01 and at 0.1 — GradientDescentOptimizer_0.01.png, GradientDescentOptimizer_0.1.png)
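
    The two figures differ only in the constant passed to GradientDescentOptimizer. To see why that constant matters, here is a self-contained sketch in plain Python (not the network above) of gradient descent on f(w) = w² with the same two rates:

    # Gradient descent on f(w) = w**2 (gradient 2*w): the same update
    # rule the optimizer applies to each network weight
    for lr in (0.01, 0.1):
        w = 1.0
        for step in range(100):
            w -= lr * 2 * w  # w <- w - lr * df/dw
        print("lr={}: w after 100 steps = {:.6f}".format(lr, w))

    The larger rate drives w toward the minimum far faster per step, which matches the faster convergence seen in the 0.1 figure.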
    2. Loss function: L2 norm vs. cross-entropy (see the sketch below)


    (Figures: cross-entropy loss at learning rate 0.1 and at learning rate 0.01)
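
    For reference, a sketch of the two loss formulations in TensorFlow 1.x, to be plugged into the script above. The cross-entropy line uses tf.nn.softmax_cross_entropy_with_logits_v2, which is numerically more stable than the hand-rolled tf.log(tf.nn.softmax(y)):

    import tensorflow as tf

    # y: raw logits from the last layer, d: one-hot targets (as above)
    # Squared error (L2 norm) between logits and targets:
    loss_l2 = tf.reduce_mean(tf.square(y - d))
    # Numerically stable softmax cross-entropy:
    loss_ce = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=d, logits=y))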

    Results

    With the L2 norm as the loss function, a learning rate of 0.1 is already enough to reach good prediction results.
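
    Concretely, reproducing that configuration only requires changing two lines of the training script above:

    # Use the squared-error loss and raise the learning rate to 0.1:
    loss = tf.reduce_mean(tf.square(y - d))
    optimizer = tf.train.GradientDescentOptimizer(0.1)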
