NTU Hung-yi Lee's handwritten digit recognition classroom demo

Author: 上行彩虹人 | Published 2018-08-02 18:29

Import the required packages:

# Workaround for a TensorFlow logging issue on my machine; not strictly required
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import numpy as np
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.datasets import mnist

Load the data. Since the MNIST download seems to be blocked, the dataset was downloaded in advance and is read from a local mnist.npz file.

def load_data():
    # The network download fails:
    # (x_train, y_train), (x_test, y_test) = mnist.load_data('mnist.npz')
    # Read the local copy instead
    path = './mnist.npz'
    f = np.load(path)
    x_train, y_train = f['x_train'], f['y_train']
    x_test, y_test = f['x_test'], f['y_test']
    f.close()
    number = 10000
    x_train = x_train[0: number]
    y_train = y_train[0: number]
    x_train = x_train.reshape(number, 28 * 28)
    x_test = x_test.reshape(x_test.shape[0], 28 * 28)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    # convert class vectors to binary class matrices
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)
    # x_test = np.random.normal(x_test)
    # scale pixel values from [0, 255] to [0, 1]
    x_train = x_train / 255
    x_test = x_test / 255

    return (x_train, y_train), (x_test, y_test)
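The `np_utils.to_categorical` calls above turn integer class labels into one-hot rows. A minimal numpy sketch of the same transformation (the helper name `to_one_hot` is mine, not part of Keras):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """One-hot encode integer labels, mirroring np_utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = np.array([3, 0, 9])
print(to_one_hot(y, 10))  # each row has a single 1 at the label's index
```

This is why the network's final layer has `units=10` with softmax: it predicts one probability per class, matched against these one-hot targets.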

First run:

(x_train, y_train), (x_test, y_test) = load_data()

# Verify that the data loaded correctly
print(x_train.shape)

# Declare a model
model = Sequential()
model.add(Dense(input_dim=28 * 28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

model.compile(loss='mse', optimizer=SGD(lr=0.1), metrics=['accuracy'])

# Start training
model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])

Result:


(result screenshot omitted)

Second run:
The model performed poorly even on the training set, i.e. it failed to train at all, so the loss function is changed from mse to categorical_crossentropy.

# model.compile(loss='mse', optimizer=SGD(lr=0.1), metrics=['accuracy'])
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.1), metrics=['accuracy'])
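To see why cross-entropy is the better fit for a softmax classifier: MSE stays small even when the model is confidently wrong, while cross-entropy penalizes it heavily. A numpy sketch on a single example (the helper names and probabilities are mine, for illustration only):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over the 10 class outputs."""
    return np.mean((y_true - y_pred) ** 2)

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy: -log of the probability assigned to the true class."""
    return -np.sum(y_true * np.log(y_pred + 1e-12))

y_true = np.zeros(10)
y_true[7] = 1.0                       # the true digit is 7
confident_wrong = np.full(10, 0.001)
confident_wrong[3] = 0.991            # model is ~99% sure it's a 3

print(mse(y_true, confident_wrong))                       # small, weak signal
print(categorical_crossentropy(y_true, confident_wrong))  # large penalty
```

The larger loss also means larger gradients, which is what gets the training moving again.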

Result: the model now trains, but it overfits.


(result screenshot omitted)

Third run: tune batch_size.
Result: bigger is not always better, and with a batch_size of 1 the GPU's parallelism goes unused.
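One way to see the trade-off: batch size sets how many parameter updates one epoch performs, and how much work each update gives the GPU to parallelize. A quick sketch (the numbers assume the 10,000-example training set used above):

```python
import math

n_examples = 10000
for batch_size in (1, 100, 10000):
    updates_per_epoch = math.ceil(n_examples / batch_size)
    print(batch_size, updates_per_epoch)
# batch_size=1:     10,000 sequential updates, nothing for the GPU to batch
# batch_size=10000: one update per epoch, so convergence is very slow
# batch_size=100:   a middle ground, as used in the fit() call above
```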

Fourth run: go deeper by adding 10 hidden layers.

for i in range(10):
    model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

Result: both training and testing accuracy collapsed:


(result screenshot omitted)

Fifth run: change the activation to relu.
ReLU avoids the vanishing gradients that keep the later layers of a deep sigmoid network from learning.
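The vanishing-gradient problem can be seen from the derivatives: sigmoid's gradient never exceeds 0.25, so back-propagating through 10 sigmoid layers shrinks the signal by at least a factor of 0.25**10, while ReLU's gradient is exactly 1 for any positive input. A numpy sketch (helper names are mine):

```python
import numpy as np

def sigmoid_grad(x):
    """Derivative of sigmoid: s * (1 - s), maximized at x = 0 with value 0.25."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return np.where(x > 0, 1.0, 0.0)

# Even in sigmoid's best case, 10 layers multiply the gradient by ~1e-6;
# ReLU passes positive gradients through intact.
print(sigmoid_grad(0.0) ** 10)   # ≈ 9.5e-07
print(relu_grad(2.0) ** 10)      # 1.0
```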

model.add(Dense(input_dim=28 * 28, units=689, activation='relu'))
for i in range(10):
    model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

Result:


(result screenshot omitted)

Sixth run: add dropout.

# Declare a model
model = Sequential()
model.add(Dense(input_dim=28 * 28, units=689, activation='relu'))
model.add(Dropout(0.7))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.7))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.7))
# for i in range(10):
#  model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

Result: about the same as before. Unlike the teacher's demo, performance on the testing set did not improve. Can anyone explain this to me?!! Thanks!!
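For reference, what `Dropout(0.7)` does at training time can be sketched as inverted dropout: zero each unit with probability 0.7 and scale the survivors by 1/(1-0.7) so the expected activation is unchanged (Keras disables dropout automatically at evaluation time). A rate of 0.7 is quite aggressive; a lower rate such as 0.5 may be worth trying. A numpy sketch, with names of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(activations, rate):
    """Training-time dropout: drop units with probability `rate`,
    scale survivors by 1/(1-rate) so the expected value is preserved."""
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

a = np.ones(100000)
dropped = inverted_dropout(a, 0.7)
print(dropped.mean())  # ≈ 1.0: the expectation is preserved
```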


Source: https://www.haomeiwen.com/subject/saysvftx.html