Chapter 6: Neural Networks with Keras
Introduction to Keras
In this section we introduce Keras, a high-level machine learning library. Keras is an open-source framework built on top of TensorFlow; it implements many machine learning methods, particularly for deep neural networks, with computational efficiency in mind. Since our goal is a quick start, we will not dig into the internal details of the framework. TensorFlow itself is a fairly low-level library, so Keras wraps it and exposes a set of convenient functions for the reusable parts of a machine learning workflow, which greatly reduces the amount of code we have to write for a neural network. The heavy computation, including backpropagation, is still delegated to the TensorFlow backend, so a Keras model can also be trained on a GPU when one is available.
To get started with Keras, we will revisit the example from the previous chapter, where a neural network was trained on the Iris dataset, and implement it again using Keras. First, import the relevant libraries:
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
import matplotlib.pyplot as plt
import numpy as np
import random
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
Import the Iris dataset:
from sklearn.datasets import load_iris
iris = load_iris()
data, labels = iris.data[:,0:3], iris.data[:,3]
In the previous chapter we configured and trained a neural network by hand to predict the petal width of an iris from its other measurements. This time we implement it with Keras (with the number of input variables increased to 3). First we need to shuffle the data (to remove any ordering present in the original dataset) and preprocess it; for this dataset, preprocessing mainly means normalizing the values and converting them to the right format.
num_samples = len(labels) # size of our dataset
shuffle_order = np.random.permutation(num_samples)
data = data[shuffle_order, :]
labels = labels[shuffle_order]
# normalize data and labels to between 0 and 1 and make sure it's float32
data = data / np.amax(data, axis=0)
data = data.astype('float32')
labels = labels / np.amax(labels, axis=0)
labels = labels.astype('float32')
# print out the data
print("shape of X", data.shape)
print("first 5 rows of X\n", data[0:5, :])
print("first 5 labels\n", labels[0:5])
shape of X (150, 3)
first 5 rows of X
[[0.79746836 0.77272725 0.8115942 ]
[0.6962025 0.5681818 0.5797101 ]
[0.96202534 0.6818182 0.95652175]
[0.8101266 0.70454544 0.79710144]
[0.5949367 0.72727275 0.23188406]]
first 5 labels
[0.96 0.52 0.84 0.72 0.08]
Overfitting and testing
In previous chapters we always evaluated the network's performance on the training set. This is not a sound approach, because the network can "cheat" us by overfitting the training data (essentially memorizing it) and thereby earning a high score, while generalizing poorly to unseen samples.
In machine learning this is called "overfitting", and there are several ways to guard against it. The first is to split the dataset into a "training set", on which the network is trained with gradient descent, and a "test set", on which we do the final evaluation to get an honest estimate of how well the network handles unseen samples.
Let's split the data now, using the first 30% of the samples as the test set and the rest as the training set:
# let's rename the data and labels to X, y
X, y = data, labels
test_split = 0.3 # percent split
n_test = int(test_split * num_samples)
x_train, x_test = X[n_test:, :], X[:n_test, :]
y_train, y_test = y[n_test:], y[:n_test]
print('%d training samples, %d test samples' % (x_train.shape[0], x_test.shape[0]))
105 training samples, 45 test samples
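As an aside, the same 70/30 split can be obtained with scikit-learn's train_test_split helper. The following is only a minimal sketch of an alternative to the manual slicing above (it shuffles internally, and the x_train2/x_test2 names are just for illustration):
from sklearn.model_selection import train_test_split
# equivalent 70/30 split; train_test_split shuffles the data itself
x_train2, x_test2, y_train2, y_test2 = train_test_split(X, y, test_size=0.3)
print('%d training samples, %d test samples' % (x_train2.shape[0], x_test2.shape[0]))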
In Keras, a neural network model is instantiated with the Sequential class, which represents a simple model whose layers are stacked so that data flows straight from the input layer to the output layer.
model = Sequential()
We now have an empty neural network model, model. Let's add the first layer, which will receive our input, using Keras's Dense class.
A Dense layer is a fully-connected layer: every neuron in it is connected to every neuron in the previous layer, hence the name "dense". This distinction may seem puzzling, since we have not yet seen a layer that is not fully connected; don't worry, we will meet one when we get to convolutional neural networks in a later chapter.
To create a Dense layer we specify two arguments: the number of neurons and the activation function. For the first layer we also have to specify the dimensionality of the input data.
model.add(Dense(8, activation='sigmoid', input_dim=3))
We can read out the current state of the network with model.summary():
model.summary()
The summary shows that the network currently has a single layer with 32 parameters: the 3 input neurons times the 8 neurons of this layer (3x8=24), plus 8 bias terms (24+8=32).
Next we add the output layer, another fully-connected layer but with just one neuron, which holds our final output. This time the activation is not a sigmoid but the "linear" activation:
model.add(Dense(1, activation='linear'))
model.summary()
This adds 9 more parameters: the 8x1 weights from the hidden layer to the output layer plus 1 bias for the output neuron, giving 41 parameters in total.
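As a quick sanity check, the total can also be read off programmatically; count_params is a standard method on Keras models:
# total number of trainable parameters in the model so far
print(model.count_params())  # 3*8 + 8 + 8*1 + 1 = 41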
The basic structure of the model is now in place. Next we need to choose a loss function and an optimizer, and then compile the model.
First, the loss function. The standard losses for a regression problem are the sum of squared errors (SSE) and the mean squared error (MSE). They are essentially the same quantity, differing only by a scaling factor: MSE is the SSE divided by the number of samples. Keras favors MSE, so that is what we will use.
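To make the relationship concrete, here is a tiny numpy illustration with made-up numbers (not part of the Iris example):
y_true = np.array([0.2, 0.5, 0.9])
y_hat = np.array([0.3, 0.4, 1.0])
sse = np.sum((y_true - y_hat) ** 2)   # sum of squared errors
mse = np.mean((y_true - y_hat) ** 2)  # mean squared error
print(sse, mse, sse / len(y_true))    # MSE equals SSE divided by the number of samples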
The optimizer is the flavor of gradient descent we choose. The most basic optimizer is "stochastic gradient descent" (SGD). Up to now we have mostly used batch gradient descent, which computes the gradient over the entire dataset; computing it over a small subset of the training set at a time is called mini-batch gradient descent (these distinctions will become clearer as you dig deeper into machine learning algorithms).
Once the loss function and optimizer are specified, the model can be compiled. Compiling is the step in which Keras allocates and optimizes the computation graph of the model we have built, in preparation for the computations to come.
model.compile(loss='mean_squared_error', optimizer='sgd')
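Passing the string 'sgd' uses the optimizer's default settings. If you want control over, say, the learning rate, you can pass an optimizer object instead; a minimal sketch (the learning rate 0.01 here is only an illustrative value, not the setting used in this chapter):
from keras.optimizers import SGD
# same compilation, but with an explicitly configured optimizer
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01))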
Finally, we are ready to train the model. The fit method starts the training process, and it has several important arguments. First come the training data (x_train and y_train) and, as validation_data, our test data (x_test and y_test). We also have to set batch_size, the number of training samples drawn at a time to compute each gradient step (mini-batch SGD), and epochs, the number of passes over the training data. More epochs generally give the network more opportunity to improve, although training for too long can eventually lead to overfitting.
Because our dataset is small (105 training samples), we use a small batch_size and a large number of epochs:
history = model.fit(x_train, y_train,
                    batch_size=4,
                    epochs=200,
                    verbose=1,
                    validation_data=(x_test, y_test))
Train on 105 samples, validate on 45 samples
Epoch 1/200
105/105 [==============================] - 0s 2ms/step - loss: 0.1224 - val_loss: 0.1100
Epoch 2/200
105/105 [==============================] - 0s 310us/step - loss: 0.0996 - val_loss: 0.1084
Epoch 3/200
105/105 [==============================] - 0s 406us/step - loss: 0.0991 - val_loss: 0.1059
Epoch 4/200
105/105 [==============================] - 0s 307us/step - loss: 0.0966 - val_loss: 0.1050
Epoch 5/200
105/105 [==============================] - 0s 305us/step - loss: 0.0956 - val_loss: 0.1038
Epoch 6/200
105/105 [==============================] - 0s 323us/step - loss: 0.0945 - val_loss: 0.1023
Epoch 7/200
105/105 [==============================] - 0s 307us/step - loss: 0.0938 - val_loss: 0.1010
Epoch 8/200
105/105 [==============================] - 0s 314us/step - loss: 0.0922 - val_loss: 0.1000
Epoch 9/200
105/105 [==============================] - 0s 324us/step - loss: 0.0908 - val_loss: 0.0990
Epoch 10/200
105/105 [==============================] - 0s 310us/step - loss: 0.0900 - val_loss: 0.0975
Epoch 11/200
105/105 [==============================] - 0s 314us/step - loss: 0.0888 - val_loss: 0.0966
Epoch 12/200
105/105 [==============================] - 0s 416us/step - loss: 0.0880 - val_loss: 0.0957
Epoch 13/200
105/105 [==============================] - 0s 416us/step - loss: 0.0869 - val_loss: 0.0942
Epoch 14/200
105/105 [==============================] - 0s 351us/step - loss: 0.0857 - val_loss: 0.0930
Epoch 15/200
105/105 [==============================] - 0s 327us/step - loss: 0.0850 - val_loss: 0.0919
Epoch 16/200
105/105 [==============================] - 0s 327us/step - loss: 0.0841 - val_loss: 0.0916
Epoch 17/200
105/105 [==============================] - 0s 336us/step - loss: 0.0832 - val_loss: 0.0898
Epoch 18/200
105/105 [==============================] - 0s 335us/step - loss: 0.0818 - val_loss: 0.0891
Epoch 19/200
105/105 [==============================] - 0s 324us/step - loss: 0.0813 - val_loss: 0.0876
Epoch 20/200
105/105 [==============================] - 0s 332us/step - loss: 0.0797 - val_loss: 0.0874
Epoch 21/200
105/105 [==============================] - 0s 334us/step - loss: 0.0796 - val_loss: 0.0863
Epoch 22/200
105/105 [==============================] - 0s 364us/step - loss: 0.0783 - val_loss: 0.0854
Epoch 23/200
105/105 [==============================] - 0s 339us/step - loss: 0.0776 - val_loss: 0.0835
Epoch 24/200
105/105 [==============================] - 0s 360us/step - loss: 0.0761 - val_loss: 0.0825
Epoch 25/200
105/105 [==============================] - 0s 359us/step - loss: 0.0753 - val_loss: 0.0816
Epoch 26/200
105/105 [==============================] - 0s 340us/step - loss: 0.0741 - val_loss: 0.0810
Epoch 27/200
105/105 [==============================] - 0s 322us/step - loss: 0.0734 - val_loss: 0.0796
Epoch 28/200
105/105 [==============================] - 0s 364us/step - loss: 0.0725 - val_loss: 0.0787
Epoch 29/200
105/105 [==============================] - 0s 330us/step - loss: 0.0715 - val_loss: 0.0778
Epoch 30/200
105/105 [==============================] - 0s 339us/step - loss: 0.0712 - val_loss: 0.0768
Epoch 31/200
105/105 [==============================] - 0s 355us/step - loss: 0.0698 - val_loss: 0.0759
Epoch 32/200
105/105 [==============================] - 0s 333us/step - loss: 0.0693 - val_loss: 0.0752
Epoch 33/200
105/105 [==============================] - 0s 341us/step - loss: 0.0683 - val_loss: 0.0743
Epoch 34/200
105/105 [==============================] - 0s 349us/step - loss: 0.0674 - val_loss: 0.0731
Epoch 35/200
105/105 [==============================] - 0s 334us/step - loss: 0.0665 - val_loss: 0.0722
Epoch 36/200
105/105 [==============================] - 0s 350us/step - loss: 0.0655 - val_loss: 0.0714
Epoch 37/200
105/105 [==============================] - 0s 339us/step - loss: 0.0650 - val_loss: 0.0712
Epoch 38/200
105/105 [==============================] - 0s 362us/step - loss: 0.0641 - val_loss: 0.0698
Epoch 39/200
105/105 [==============================] - 0s 381us/step - loss: 0.0631 - val_loss: 0.0688
Epoch 40/200
105/105 [==============================] - 0s 414us/step - loss: 0.0627 - val_loss: 0.0679
Epoch 41/200
105/105 [==============================] - 0s 332us/step - loss: 0.0616 - val_loss: 0.0671
Epoch 42/200
105/105 [==============================] - 0s 336us/step - loss: 0.0611 - val_loss: 0.0665
Epoch 43/200
105/105 [==============================] - 0s 350us/step - loss: 0.0601 - val_loss: 0.0654
Epoch 44/200
105/105 [==============================] - 0s 397us/step - loss: 0.0596 - val_loss: 0.0646
Epoch 45/200
105/105 [==============================] - 0s 404us/step - loss: 0.0586 - val_loss: 0.0638
Epoch 46/200
105/105 [==============================] - 0s 375us/step - loss: 0.0582 - val_loss: 0.0635
Epoch 47/200
105/105 [==============================] - 0s 348us/step - loss: 0.0577 - val_loss: 0.0621
Epoch 48/200
105/105 [==============================] - 0s 333us/step - loss: 0.0568 - val_loss: 0.0614
Epoch 49/200
105/105 [==============================] - 0s 347us/step - loss: 0.0556 - val_loss: 0.0610
Epoch 50/200
105/105 [==============================] - 0s 337us/step - loss: 0.0547 - val_loss: 0.0603
Epoch 51/200
105/105 [==============================] - 0s 373us/step - loss: 0.0545 - val_loss: 0.0592
Epoch 52/200
105/105 [==============================] - 0s 350us/step - loss: 0.0536 - val_loss: 0.0581
Epoch 53/200
105/105 [==============================] - 0s 338us/step - loss: 0.0529 - val_loss: 0.0574
Epoch 54/200
105/105 [==============================] - 0s 345us/step - loss: 0.0520 - val_loss: 0.0574
Epoch 55/200
105/105 [==============================] - 0s 334us/step - loss: 0.0518 - val_loss: 0.0560
Epoch 56/200
105/105 [==============================] - 0s 330us/step - loss: 0.0508 - val_loss: 0.0551
Epoch 57/200
105/105 [==============================] - 0s 340us/step - loss: 0.0499 - val_loss: 0.0547
Epoch 58/200
105/105 [==============================] - 0s 351us/step - loss: 0.0495 - val_loss: 0.0538
Epoch 59/200
105/105 [==============================] - 0s 341us/step - loss: 0.0487 - val_loss: 0.0529
Epoch 60/200
105/105 [==============================] - 0s 335us/step - loss: 0.0475 - val_loss: 0.0527
Epoch 61/200
105/105 [==============================] - 0s 346us/step - loss: 0.0475 - val_loss: 0.0518
Epoch 62/200
105/105 [==============================] - 0s 317us/step - loss: 0.0467 - val_loss: 0.0508
Epoch 63/200
105/105 [==============================] - 0s 323us/step - loss: 0.0460 - val_loss: 0.0509
Epoch 64/200
105/105 [==============================] - 0s 312us/step - loss: 0.0458 - val_loss: 0.0494
Epoch 65/200
105/105 [==============================] - 0s 316us/step - loss: 0.0447 - val_loss: 0.0487
Epoch 66/200
105/105 [==============================] - 0s 310us/step - loss: 0.0442 - val_loss: 0.0482
Epoch 67/200
105/105 [==============================] - 0s 339us/step - loss: 0.0435 - val_loss: 0.0478
Epoch 68/200
105/105 [==============================] - 0s 376us/step - loss: 0.0435 - val_loss: 0.0468
Epoch 69/200
105/105 [==============================] - 0s 329us/step - loss: 0.0424 - val_loss: 0.0462
Epoch 70/200
105/105 [==============================] - 0s 321us/step - loss: 0.0417 - val_loss: 0.0454
Epoch 71/200
105/105 [==============================] - 0s 413us/step - loss: 0.0410 - val_loss: 0.0451
Epoch 72/200
105/105 [==============================] - 0s 308us/step - loss: 0.0406 - val_loss: 0.0441
Epoch 73/200
105/105 [==============================] - 0s 343us/step - loss: 0.0400 - val_loss: 0.0435
Epoch 74/200
105/105 [==============================] - 0s 327us/step - loss: 0.0390 - val_loss: 0.0429
Epoch 75/200
105/105 [==============================] - 0s 354us/step - loss: 0.0386 - val_loss: 0.0422
Epoch 76/200
105/105 [==============================] - 0s 329us/step - loss: 0.0382 - val_loss: 0.0417
Epoch 77/200
105/105 [==============================] - 0s 325us/step - loss: 0.0375 - val_loss: 0.0410
Epoch 78/200
105/105 [==============================] - 0s 321us/step - loss: 0.0370 - val_loss: 0.0404
Epoch 79/200
105/105 [==============================] - 0s 338us/step - loss: 0.0362 - val_loss: 0.0398
Epoch 80/200
105/105 [==============================] - 0s 322us/step - loss: 0.0358 - val_loss: 0.0394
Epoch 81/200
105/105 [==============================] - 0s 327us/step - loss: 0.0354 - val_loss: 0.0386
Epoch 82/200
105/105 [==============================] - 0s 336us/step - loss: 0.0348 - val_loss: 0.0380
Epoch 83/200
105/105 [==============================] - 0s 341us/step - loss: 0.0343 - val_loss: 0.0378
Epoch 84/200
105/105 [==============================] - 0s 325us/step - loss: 0.0338 - val_loss: 0.0369
Epoch 85/200
105/105 [==============================] - 0s 344us/step - loss: 0.0334 - val_loss: 0.0364
Epoch 86/200
105/105 [==============================] - 0s 331us/step - loss: 0.0328 - val_loss: 0.0361
Epoch 87/200
105/105 [==============================] - 0s 347us/step - loss: 0.0323 - val_loss: 0.0356
Epoch 88/200
105/105 [==============================] - 0s 361us/step - loss: 0.0319 - val_loss: 0.0348
Epoch 89/200
105/105 [==============================] - 0s 335us/step - loss: 0.0315 - val_loss: 0.0342
Epoch 90/200
105/105 [==============================] - 0s 353us/step - loss: 0.0309 - val_loss: 0.0337
Epoch 91/200
105/105 [==============================] - 0s 329us/step - loss: 0.0305 - val_loss: 0.0332
Epoch 92/200
105/105 [==============================] - 0s 351us/step - loss: 0.0304 - val_loss: 0.0326
Epoch 93/200
105/105 [==============================] - 0s 313us/step - loss: 0.0294 - val_loss: 0.0322
Epoch 94/200
105/105 [==============================] - 0s 318us/step - loss: 0.0290 - val_loss: 0.0317
Epoch 95/200
105/105 [==============================] - 0s 350us/step - loss: 0.0287 - val_loss: 0.0312
Epoch 96/200
105/105 [==============================] - 0s 445us/step - loss: 0.0282 - val_loss: 0.0307
Epoch 97/200
105/105 [==============================] - 0s 337us/step - loss: 0.0277 - val_loss: 0.0303
Epoch 98/200
105/105 [==============================] - 0s 338us/step - loss: 0.0272 - val_loss: 0.0299
Epoch 99/200
105/105 [==============================] - 0s 334us/step - loss: 0.0270 - val_loss: 0.0293
Epoch 100/200
105/105 [==============================] - 0s 309us/step - loss: 0.0261 - val_loss: 0.0291
Epoch 101/200
105/105 [==============================] - 0s 336us/step - loss: 0.0260 - val_loss: 0.0287
Epoch 102/200
105/105 [==============================] - 0s 339us/step - loss: 0.0258 - val_loss: 0.0280
Epoch 103/200
105/105 [==============================] - 0s 324us/step - loss: 0.0253 - val_loss: 0.0277
Epoch 104/200
105/105 [==============================] - 0s 335us/step - loss: 0.0250 - val_loss: 0.0279
Epoch 105/200
105/105 [==============================] - 0s 387us/step - loss: 0.0247 - val_loss: 0.0268
Epoch 106/200
105/105 [==============================] - 0s 330us/step - loss: 0.0241 - val_loss: 0.0264
Epoch 107/200
105/105 [==============================] - 0s 311us/step - loss: 0.0237 - val_loss: 0.0260
Epoch 108/200
105/105 [==============================] - 0s 299us/step - loss: 0.0235 - val_loss: 0.0256
Epoch 109/200
105/105 [==============================] - 0s 349us/step - loss: 0.0230 - val_loss: 0.0252
Epoch 110/200
105/105 [==============================] - 0s 340us/step - loss: 0.0228 - val_loss: 0.0248
Epoch 111/200
105/105 [==============================] - 0s 326us/step - loss: 0.0224 - val_loss: 0.0244
Epoch 112/200
105/105 [==============================] - 0s 378us/step - loss: 0.0221 - val_loss: 0.0242
Epoch 113/200
105/105 [==============================] - 0s 316us/step - loss: 0.0218 - val_loss: 0.0242
Epoch 114/200
105/105 [==============================] - 0s 316us/step - loss: 0.0214 - val_loss: 0.0235
Epoch 115/200
105/105 [==============================] - 0s 312us/step - loss: 0.0212 - val_loss: 0.0229
Epoch 116/200
105/105 [==============================] - 0s 313us/step - loss: 0.0207 - val_loss: 0.0229
Epoch 117/200
105/105 [==============================] - 0s 304us/step - loss: 0.0204 - val_loss: 0.0222
Epoch 118/200
105/105 [==============================] - 0s 333us/step - loss: 0.0202 - val_loss: 0.0219
Epoch 119/200
105/105 [==============================] - 0s 406us/step - loss: 0.0197 - val_loss: 0.0219
Epoch 120/200
105/105 [==============================] - 0s 416us/step - loss: 0.0197 - val_loss: 0.0213
Epoch 121/200
105/105 [==============================] - 0s 374us/step - loss: 0.0192 - val_loss: 0.0209
Epoch 122/200
105/105 [==============================] - 0s 362us/step - loss: 0.0191 - val_loss: 0.0207
Epoch 123/200
105/105 [==============================] - 0s 338us/step - loss: 0.0189 - val_loss: 0.0203
Epoch 124/200
105/105 [==============================] - 0s 345us/step - loss: 0.0185 - val_loss: 0.0200
Epoch 125/200
105/105 [==============================] - 0s 352us/step - loss: 0.0183 - val_loss: 0.0198
Epoch 126/200
105/105 [==============================] - 0s 360us/step - loss: 0.0178 - val_loss: 0.0194
Epoch 127/200
105/105 [==============================] - 0s 339us/step - loss: 0.0177 - val_loss: 0.0192
Epoch 128/200
105/105 [==============================] - 0s 330us/step - loss: 0.0174 - val_loss: 0.0190
Epoch 129/200
105/105 [==============================] - 0s 333us/step - loss: 0.0171 - val_loss: 0.0186
Epoch 130/200
105/105 [==============================] - 0s 337us/step - loss: 0.0170 - val_loss: 0.0184
Epoch 131/200
105/105 [==============================] - 0s 353us/step - loss: 0.0166 - val_loss: 0.0181
Epoch 132/200
105/105 [==============================] - 0s 349us/step - loss: 0.0165 - val_loss: 0.0178
Epoch 133/200
105/105 [==============================] - 0s 360us/step - loss: 0.0161 - val_loss: 0.0176
Epoch 134/200
105/105 [==============================] - 0s 332us/step - loss: 0.0160 - val_loss: 0.0175
Epoch 135/200
105/105 [==============================] - 0s 307us/step - loss: 0.0158 - val_loss: 0.0171
Epoch 136/200
105/105 [==============================] - 0s 328us/step - loss: 0.0154 - val_loss: 0.0171
Epoch 137/200
105/105 [==============================] - 0s 325us/step - loss: 0.0152 - val_loss: 0.0166
Epoch 138/200
105/105 [==============================] - 0s 357us/step - loss: 0.0151 - val_loss: 0.0165
Epoch 139/200
105/105 [==============================] - 0s 363us/step - loss: 0.0149 - val_loss: 0.0163
Epoch 140/200
105/105 [==============================] - 0s 325us/step - loss: 0.0147 - val_loss: 0.0166
Epoch 141/200
105/105 [==============================] - 0s 336us/step - loss: 0.0146 - val_loss: 0.0168
Epoch 142/200
105/105 [==============================] - 0s 328us/step - loss: 0.0147 - val_loss: 0.0160
Epoch 143/200
105/105 [==============================] - 0s 336us/step - loss: 0.0144 - val_loss: 0.0154
Epoch 144/200
105/105 [==============================] - 0s 339us/step - loss: 0.0140 - val_loss: 0.0152
Epoch 145/200
105/105 [==============================] - 0s 326us/step - loss: 0.0138 - val_loss: 0.0151
Epoch 146/200
105/105 [==============================] - 0s 316us/step - loss: 0.0137 - val_loss: 0.0154
Epoch 147/200
105/105 [==============================] - 0s 318us/step - loss: 0.0136 - val_loss: 0.0148
Epoch 148/200
105/105 [==============================] - 0s 309us/step - loss: 0.0133 - val_loss: 0.0152
Epoch 149/200
105/105 [==============================] - 0s 305us/step - loss: 0.0132 - val_loss: 0.0145
Epoch 150/200
105/105 [==============================] - 0s 304us/step - loss: 0.0130 - val_loss: 0.0145
Epoch 151/200
105/105 [==============================] - 0s 323us/step - loss: 0.0128 - val_loss: 0.0143
Epoch 152/200
105/105 [==============================] - 0s 352us/step - loss: 0.0128 - val_loss: 0.0142
Epoch 153/200
105/105 [==============================] - 0s 307us/step - loss: 0.0125 - val_loss: 0.0136
Epoch 154/200
105/105 [==============================] - 0s 312us/step - loss: 0.0124 - val_loss: 0.0134
Epoch 155/200
105/105 [==============================] - 0s 300us/step - loss: 0.0123 - val_loss: 0.0133
Epoch 156/200
105/105 [==============================] - 0s 314us/step - loss: 0.0122 - val_loss: 0.0133
Epoch 157/200
105/105 [==============================] - 0s 315us/step - loss: 0.0120 - val_loss: 0.0129
Epoch 158/200
105/105 [==============================] - 0s 303us/step - loss: 0.0119 - val_loss: 0.0132
Epoch 159/200
105/105 [==============================] - 0s 313us/step - loss: 0.0118 - val_loss: 0.0127
Epoch 160/200
105/105 [==============================] - 0s 317us/step - loss: 0.0117 - val_loss: 0.0126
Epoch 161/200
105/105 [==============================] - 0s 321us/step - loss: 0.0116 - val_loss: 0.0131
Epoch 162/200
105/105 [==============================] - 0s 302us/step - loss: 0.0115 - val_loss: 0.0127
Epoch 163/200
105/105 [==============================] - 0s 307us/step - loss: 0.0113 - val_loss: 0.0122
Epoch 164/200
105/105 [==============================] - 0s 319us/step - loss: 0.0112 - val_loss: 0.0120
Epoch 165/200
105/105 [==============================] - 0s 311us/step - loss: 0.0111 - val_loss: 0.0120
Epoch 166/200
105/105 [==============================] - 0s 304us/step - loss: 0.0110 - val_loss: 0.0118
Epoch 167/200
105/105 [==============================] - 0s 329us/step - loss: 0.0108 - val_loss: 0.0116
Epoch 168/200
105/105 [==============================] - 0s 305us/step - loss: 0.0108 - val_loss: 0.0116
Epoch 169/200
105/105 [==============================] - 0s 310us/step - loss: 0.0107 - val_loss: 0.0118
Epoch 170/200
105/105 [==============================] - 0s 324us/step - loss: 0.0107 - val_loss: 0.0114
Epoch 171/200
105/105 [==============================] - 0s 308us/step - loss: 0.0106 - val_loss: 0.0112
Epoch 172/200
105/105 [==============================] - 0s 308us/step - loss: 0.0105 - val_loss: 0.0111
Epoch 173/200
105/105 [==============================] - 0s 314us/step - loss: 0.0104 - val_loss: 0.0111
Epoch 174/200
105/105 [==============================] - 0s 309us/step - loss: 0.0103 - val_loss: 0.0111
Epoch 175/200
105/105 [==============================] - 0s 314us/step - loss: 0.0102 - val_loss: 0.0110
Epoch 176/200
105/105 [==============================] - 0s 309us/step - loss: 0.0102 - val_loss: 0.0109
Epoch 177/200
105/105 [==============================] - 0s 313us/step - loss: 0.0101 - val_loss: 0.0108
Epoch 178/200
105/105 [==============================] - 0s 314us/step - loss: 0.0100 - val_loss: 0.0112
Epoch 179/200
105/105 [==============================] - 0s 302us/step - loss: 0.0100 - val_loss: 0.0107
Epoch 180/200
105/105 [==============================] - 0s 316us/step - loss: 0.0098 - val_loss: 0.0104
Epoch 181/200
105/105 [==============================] - 0s 315us/step - loss: 0.0098 - val_loss: 0.0107
Epoch 182/200
105/105 [==============================] - 0s 310us/step - loss: 0.0097 - val_loss: 0.0103
Epoch 183/200
105/105 [==============================] - 0s 317us/step - loss: 0.0096 - val_loss: 0.0104
Epoch 184/200
105/105 [==============================] - 0s 331us/step - loss: 0.0095 - val_loss: 0.0101
Epoch 185/200
105/105 [==============================] - 0s 299us/step - loss: 0.0094 - val_loss: 0.0104
Epoch 186/200
105/105 [==============================] - 0s 301us/step - loss: 0.0094 - val_loss: 0.0100
Epoch 187/200
105/105 [==============================] - 0s 328us/step - loss: 0.0094 - val_loss: 0.0102
Epoch 188/200
105/105 [==============================] - 0s 306us/step - loss: 0.0093 - val_loss: 0.0100
Epoch 189/200
105/105 [==============================] - 0s 302us/step - loss: 0.0093 - val_loss: 0.0099
Epoch 190/200
105/105 [==============================] - 0s 322us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 191/200
105/105 [==============================] - 0s 315us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 192/200
105/105 [==============================] - 0s 303us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 193/200
105/105 [==============================] - 0s 307us/step - loss: 0.0091 - val_loss: 0.0098
Epoch 194/200
105/105 [==============================] - 0s 352us/step - loss: 0.0090 - val_loss: 0.0096
Epoch 195/200
105/105 [==============================] - 0s 313us/step - loss: 0.0089 - val_loss: 0.0100
Epoch 196/200
105/105 [==============================] - 0s 359us/step - loss: 0.0090 - val_loss: 0.0103
Epoch 197/200
105/105 [==============================] - 0s 341us/step - loss: 0.0089 - val_loss: 0.0096
Epoch 198/200
105/105 [==============================] - 0s 322us/step - loss: 0.0088 - val_loss: 0.0094
Epoch 199/200
105/105 [==============================] - 0s 318us/step - loss: 0.0088 - val_loss: 0.0093
Epoch 200/200
105/105 [==============================] - 0s 295us/step - loss: 0.0088 - val_loss: 0.0092
From the output above we can see that the network's MSE on the test set has been driven below 0.01. Note that the log reports the error on both the training samples (loss) and the test samples (val_loss). It is normal for the training error to be somewhat lower than the test error, since the model is fit to the training samples; but if the training error is much lower than the test error, that is a sign of overfitting.
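Since fit returns a History object, we can also plot how the training and test losses evolve over the epochs; 'loss' and 'val_loss' are the keys Keras records when validation_data is supplied (matplotlib was already imported at the top of this chapter):
# visualize the loss curves recorded during training
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='test loss')
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()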
We can use evaluate to measure the model's loss on the test set:
score = model.evaluate(x_test, y_test)
print('Test loss:', score)
45/45 [==============================] - 0s 97us/step
Test loss: 0.00922720053543647
To see the raw predictions themselves:
y_pred = model.predict(x_test)
for yp, ya in list(zip(y_pred, y_test))[0:10]:
print("predicted %0.2f, actual %0.2f" % (yp, ya))
predicted 0.72, actual 0.96
predicted 0.53, actual 0.52
predicted 0.87, actual 0.84
predicted 0.72, actual 0.72
predicted 0.16, actual 0.08
predicted 0.13, actual 0.08
predicted 0.13, actual 0.08
predicted 0.15, actual 0.08
predicted 0.62, actual 0.60
predicted 0.54, actual 0.52
We can also compute the MSE by hand:
def MSE(y_pred, y_test):
    return (1.0/len(y_test)) * np.sum([((y1[0]-y2)**2) for y1, y2 in list(zip(y_pred, y_test))])
print("MSE is %0.4f" % MSE(y_pred, y_test))
MSE is 0.0092
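The same number can be cross-checked with scikit-learn's mean_squared_error, if you prefer not to write the formula out yourself:
from sklearn.metrics import mean_squared_error
# flatten the (n, 1) predictions so both arrays have the same shape
print("MSE is %0.4f" % mean_squared_error(y_test, y_pred.ravel()))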
We can also predict the output for a single sample like this:
x_sample = x_test[0].reshape(1, 3) # shape must be (num_samples, 3), even if num_samples = 1
y_prob = model.predict(x_sample)
print("predicted %0.3f, actual %0.3f" % (y_prob[0][0], y_test[0]))
predicted 0.723, actual 0.960
So far we have seen Keras applied to a regression problem. The real power of the library will become apparent in later chapters, when we move on to classification, convolutional neural networks, and various other refinements. Stay tuned!