
2018-04-18 Week 4

Author: hobxzzy | Published 2018-06-14 17:03

With the parameters designed, we need to understand how TensorFlow feeds in data: through placeholders (see the English TensorFlow documentation).

    # x y placeholder

    x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])

    y = tf.placeholder(tf.float32, [None, n_classes])
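
These placeholders, and the weight shapes below, depend on hyperparameters defined earlier in the series and not shown in this excerpt. A minimal sketch, with values inferred from the shape comments (28, 128, 10); the learning rate, iteration cap, and display interval are assumptions:

    # Hypothetical hyperparameter values, inferred from the shape
    # comments and the training loop below -- adjust to your data
    import tensorflow as tf

    lr = 0.001            # learning rate (assumed)
    batch_size = 128      # samples per mini-batch (assumed)
    n_inputs = 28         # features fed to the cell at each time step
    n_steps = 28          # time steps per sample
    n_hidden_units = 128  # size of the LSTM hidden state
    n_classes = 10        # output classes (3 for the tag.txt labels below)
    max_samples = 400000  # stop after this many training samples (assumed)
    display_step = 10     # log every N steps (assumed)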

Set the initial weights and biases:

    # Define the initial values of weights and biases
    weights = {
        # shape (28, 128)
        'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
        # shape (128, 10)
        'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
    }

    biases = {
        # shape (128, )
        'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
        # shape (10, )
        'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ]))
    }

Next, load the data prepared over the previous two weeks:

    def init_Vec():
        # Read one feature vector per line from vectors.txt,
        # splitting on whitespace and converting each field to float
        vec_list = []
        with open('vectors.txt') as data_vec:
            for s in data_vec:
                num = s.split()  # split() also drops the trailing '\n'
                num = list(map(float, num))
                vec_list.append(num)
        return vec_list

    def init_Tag():
        # Read one integer label per line from tag.txt and
        # one-hot encode it into a 3-element vector
        tag_list = []
        with open('tag.txt') as data_tag:
            for s in data_tag:
                num = int(s)
                if num == 0:
                    tag_list.append([0, 0, 1])
                elif num == 1:
                    tag_list.append([0, 1, 0])
                elif num == 2:
                    tag_list.append([1, 0, 0])
        return tag_list
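
The excerpt never shows how these two lists reach the placeholders. A minimal sketch of the glue code, assuming vectors.txt holds space-separated floats of length n_steps * n_inputs per line and tag.txt one integer label per line (both assumptions, as is the next_batch helper itself):

    # Hypothetical glue code: slice the loaded lists into mini-batches
    # shaped to match the x and y placeholders
    import numpy as np

    vec_list = init_Vec()
    tag_list = init_Tag()

    def next_batch(i):
        # i-th slice of batch_size samples, reshaped from flat vectors
        # of length n_steps * n_inputs to (batch_size, n_steps, n_inputs)
        start = i * batch_size
        batch_x = np.asarray(vec_list[start:start + batch_size], dtype=np.float32)
        batch_x = batch_x.reshape((batch_size, n_steps, n_inputs))
        batch_y = np.asarray(tag_list[start:start + batch_size], dtype=np.float32)
        return batch_x, batch_y

Note that the 3-element one-hot tags only match the y placeholder if n_classes is set to 3; the MNIST-style values above come from the tutorial this post follows.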

Defining the RNN:

First, why is the matrix reshaping needed? Each sample is a vector of length n_steps * n_inputs, but the cell only receives 1/n_steps of it at each time step. To compute over the whole vector with the trained input weights and biases in a single matrix multiplication, we flatten the batch and time dimensions into one, multiply, and then restore the 3-D shape the LSTM expects, so fitting can continue on top of that result; this follows from the structure of the LSTM.

    def RNN(X, weights, biases):
        # X arrives as 3-D (batch, n_steps, n_inputs); flatten it to 2-D
        # so the 'in' weight matrix can be applied in one multiplication
        X = tf.reshape(X, [-1, n_inputs])
        # X_in = X*W + b
        X_in = tf.matmul(X, weights['in']) + biases['in']
        # X_in ==> (batch, n_steps, n_hidden_units): back to 3-D
        X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])
        # basic LSTM cell
        lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
        # all-zero initial state; using the dynamic batch size here lets
        # evaluation batches differ in size from training batches
        init_state = lstm_cell.zero_state(tf.shape(X_in)[0], dtype=tf.float32)
        outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)
        # final_state is a (c, h) tuple, so final_state[1] is the last hidden state h
        results = tf.matmul(final_state[1], weights['out']) + biases['out']
        return results
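
The session below runs cost, optimizer, accuracy, and init ops that the excerpt never builds. A minimal sketch, assuming a softmax cross-entropy loss and the Adam optimizer (both choices are assumptions, not necessarily what the author used):

    # Hypothetical graph construction: loss, optimizer, and metrics (assumed)
    pred = RNN(x, weights, biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    init = tf.global_variables_initializer()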

Finally, define the main routine and start training (note the loop below still draws batches from the MNIST example rather than from vectors.txt):

    with tf.Session() as sess:
        sess.run(init)
        step = 1
        while step * batch_size < max_samples:
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            batch_x = batch_x.reshape((batch_size, n_steps, n_inputs))
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            if step % display_step == 0:
                acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
                loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
                print("Iter " + str(step * batch_size) + ", Minibatch Loss = " +
                      "{:.6f}".format(loss) + ", Training Accuracy = " +
                      "{:.5f}".format(acc))
            step += 1
        print("Optimization Finished!")

        test_len = 10000
        test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_inputs))
        test_label = mnist.test.labels[:test_len]
        print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_data, y: test_label}))

Now we can look at the training results:

As the log shows, training performs quite well, and testing on our own data set gives satisfactory results.
