
TensorFlow 2 Logistic Regression (3)

Author: 小小树苗儿 | Published 2020-07-14 10:55

    Dataset

    MNIST contains handwritten images of the 10 digits 0–9, with 7,000 images per digit collected from real handwriting in a variety of styles, for a total of 70,000 images.

    Since a handwritten digit image carries relatively simple information, each image is scaled to 28 × 28 and only the grayscale channel is kept.
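
    As a quick sanity check, the snippet below (a minimal sketch using the standard tf.keras.datasets.mnist loader; the print statements are only illustrative) confirms the 60,000/10,000 train/test split of 28 × 28 grayscale images:

    import tensorflow as tf
    
    # Load the raw arrays: uint8 images in [0, 255] and integer labels 0-9
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    
    print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
    print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
    print(x_train.min(), x_train.max())  # 0 255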

    Single-layer network implementation

    import numpy as np
    import tensorflow as tf
    
    # Parameters
    learning_rate = 0.001
    training_epochs = 6
    batch_size = 600
    
    # Import MNIST data
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    
    # tf.reshape(x_train, [-1, 784]) flattens each image: [b, 28, 28] => [b, 784]
    train_dataset = (
        tf.data.Dataset.from_tensor_slices((tf.reshape(x_train, [-1, 784]), y_train))
        .shuffle(1000)
        .batch(batch_size)
    )
    
    # Scale x to [0, 1]; one-hot encode y
    train_dataset = (
        train_dataset.map(lambda x, y:
                          (tf.divide(tf.cast(x, tf.float32), 255.0),
                           tf.reshape(tf.one_hot(y, 10), (-1, 10))))
    )
    
    # Set model weights
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    
    # Construct model
    model = lambda x: tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
    # Minimize error using cross entropy
    compute_loss = lambda true, pred: tf.reduce_mean(tf.losses.categorical_crossentropy(true, pred))
    # Calculate accuracy
    compute_accuracy = lambda true, pred: tf.reduce_mean(tf.keras.metrics.categorical_accuracy(true, pred))
    # Gradient Descent
    optimizer = tf.optimizers.Adam(learning_rate)
    
    for epoch in range(training_epochs):
        for i, (x_, y_) in enumerate(train_dataset):
            with tf.GradientTape() as tape:
                pred = model(x_)
                loss = compute_loss(y_, pred)
            acc = compute_accuracy(y_, pred)
            grads = tape.gradient(loss, [W, b])
            optimizer.apply_gradients(zip(grads, [W, b]))
        print("=> loss %.2f acc %.2f" %(loss.numpy(), acc.numpy()))
    

    Three-layer network with the high-level API

    import numpy as np
    import tensorflow as tf
    
    # Import MNIST data and scale pixel values to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    
    model2 = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dense(128, activation='relu'),
        #tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax")
    ])
    
    model2.compile(optimizer="adam",
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    
    model2.fit(x_train, y_train, epochs=10)
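
    As a follow-up (a minimal sketch using the standard Keras evaluate API on the test split loaded above), the held-out accuracy can be checked directly:

    # Evaluate on the held-out test set; returns [loss, accuracy] per the compiled metrics
    test_loss, test_acc = model2.evaluate(x_test, y_test, verbose=2)
    print("test acc %.4f" % test_acc)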
    
