
【Tool】Keras Basics I: Sequential Model

Author: ItchyHiker | Published 2018-09-08 19:59

    Tags: DeepLearning Tool


    Introduction to Keras

    Keras is a high-level deep learning API implemented in Python, with TensorFlow, Theano, and CNTK supported as backends. Keras has also recently become TensorFlow's official high-level API, so it integrates especially well with TensorFlow. It supports concise, fast prototyping, both CNNs and RNNs, and seamless switching between CPU and GPU. In addition, Keras models can be converted directly to CoreML models for use on iOS devices.
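
    As an illustration of that CoreML path, here is a minimal sketch assuming the coremltools package (which provides a Keras converter) is installed; the model file name and the input/output names are hypothetical placeholders.

    import coremltools

    # Convert a saved Keras model (HDF5) to CoreML; file and tensor names are placeholders
    mlmodel = coremltools.converters.keras.convert('mnist_model.h5',
                                                   input_names=['image'],
                                                   output_names=['class_probs'])
    mlmodel.save('mnist_model.mlmodel')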

    If you are familiar with the basic concepts of deep learning, Keras is easy to pick up for quick reproduction work: you rarely need to implement layers yourself, so the barrier to entry is low. It is well suited to quickly training models and deploying them to production. For research, TensorFlow or Caffe2/PyTorch are still the usual choices, although TensorFlow honestly feels far too complex, with a rather redundant API.

    Sequential Model

    A Sequential model is a linear stack of layers. You can either pass a list of layers to the constructor or add layers one by one with the add() method.

    from keras.models import Sequential
    from keras.layers import Dense, Activation
    # pass a list of layers to the constructor
    model1 = Sequential([Dense(32, input_dim=784), Activation('relu'), Dense(10), Activation('softmax')])
    # use the add() method
    model2 = Sequential()
    model2.add(Dense(32, input_shape=(784,)))
    model2.add(Activation('relu'))
    model2.add(Dense(10))
    model2.add(Activation('softmax'))
    

    After the model is defined, call compile() to configure it, passing three arguments: an optimizer, a loss, and metrics (such as accuracy). Both the loss and the metrics can be custom functions.

    # For a multi-class classification problem
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    
    # For a binary classification problem
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    
    # For a mean squared error regression problem
    model.compile(optimizer='rmsprop',
                  loss='mse')
    
    # For custom metrics
    import keras.backend as K
    
    def mean_pred(y_true, y_pred):
        return K.mean(y_pred)
    
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy', mean_pred])
    
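    A custom loss follows the same pattern as a custom metric: a function of (y_true, y_pred) that returns a tensor. A minimal sketch, using a hand-written mean squared error as the example:

    import keras.backend as K

    # A custom loss has the same (y_true, y_pred) signature as a custom metric
    def my_mse(y_true, y_pred):
        return K.mean(K.square(y_pred - y_true), axis=-1)

    model.compile(optimizer='rmsprop', loss=my_mse)
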

    Once the model is defined and configured, pass the training data to fit() or fit_generator() to train it.

    model.fit(x_train, y_train,
              epochs=20,
              batch_size=128)
    
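    fit() above takes whole numpy arrays held in memory; fit_generator() instead accepts a Python generator, which is useful when the data does not fit in memory. A minimal sketch, assuming x_train and y_train are the same arrays used above:

    import numpy as np

    def batch_generator(x, y, batch_size=128):
        # yield random mini-batches forever; Keras draws steps_per_epoch batches per epoch
        while True:
            idx = np.random.randint(0, len(x), batch_size)
            yield x[idx], y[idx]

    model.fit_generator(batch_generator(x_train, y_train, 128),
                        steps_per_epoch=len(x_train) // 128,
                        epochs=20)
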

    After training, call evaluate() to assess the trained model on test data.

    score = model.evaluate(x_test, y_test, batch_size=128)
    
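    When metrics were passed to compile(), score is a list containing the loss followed by one value per metric, so with metrics=['accuracy'] it can be unpacked like this:

    test_loss, test_acc = score
    print('test loss:', test_loss)
    print('test accuracy:', test_acc)
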

    It really is that simple :) user-friendly, made for humans, unlike TensorFlow...

    Now let's walk through a Sequential model example: MNIST handwritten digit classification, the "hello world" of deep learning.
    Keras ships with loaders for several datasets, including CIFAR-10 image classification, CIFAR-100, IMDB movie review sentiment, Reuters newswire topic classification, MNIST handwritten digits, Fashion-MNIST, and Boston housing prices (a regression problem). They come in handy for practicing or for testing your own model.
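
    All of these built-in datasets follow the same load_data() pattern used for MNIST below; as a quick illustration, CIFAR-10:

    from keras.datasets import cifar10

    # Downloads the data on first use and returns numpy arrays
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    print(x_train.shape)  # (50000, 32, 32, 3)
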
    Multilayer perceptron model

     from keras import layers
     from keras import models
     from keras.datasets import mnist
     from keras.utils import to_categorical # convert int labels to one-hot vector
     
     # define model
     model = models.Sequential()
     model.add(layers.Dense(128, activation='relu', input_dim=784))
     model.add(layers.Dropout(0.5))
     model.add(layers.Dense(64, activation='relu'))
     model.add(layers.Dropout(0.5))
     model.add(layers.Dense(10, activation='softmax'))
     
     # print the model
     model.summary()
     
     # load data
     (train_images, train_labels), (test_images, test_labels) = mnist.load_data()
     train_images = train_images.astype('float32')/255 # normalize to 0~1
     test_images = test_images.astype('float32')/255
     train_images = train_images.reshape((60000,-1))
     test_images = test_images.reshape((10000,-1))
     
     
     # convert to one-hot vectors
     train_labels = to_categorical(train_labels)
     test_labels = to_categorical(test_labels)
     
     # define training config
     model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
     
     # train the model
     model.fit(train_images, train_labels, epochs=5, batch_size=64)
     
     # evaluate the model
     test_loss, test_accuracy = model.evaluate(test_images, test_labels)
     print("test loss:", test_loss)
     print("test accuracy:", test_accuracy)
     
    # Output
    Epoch 1/5
    60000/60000 [==============================] - 2s 37us/step - loss: 0.6188 - acc: 0.8094
    Epoch 2/5
    60000/60000 [==============================] - 2s 32us/step - loss: 0.3359 - acc: 0.9093
    Epoch 3/5
    60000/60000 [==============================] - 2s 32us/step - loss: 0.2908 - acc: 0.9231
    Epoch 4/5
    60000/60000 [==============================] - 2s 32us/step - loss: 0.2699 - acc: 0.9317
    Epoch 5/5
    60000/60000 [==============================] - 2s 32us/step - loss: 0.2650 - acc: 0.9347
    10000/10000 [==============================] - 0s 23us/step
    test loss: 0.16157680716912728
    test accuracy: 0.9622
    

    Convolutional model

     from keras import layers
     from keras import models
     from keras.datasets import mnist
     from keras.utils import to_categorical # convert int labels to one-hot vector
     
     # define model
     model = models.Sequential()
     model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.add(layers.MaxPooling2D((2, 2)))
     model.add(layers.Conv2D(64, (3, 3), activation='relu'))
     model.add(layers.Flatten())
     model.add(layers.Dense(64, activation='relu'))
     model.add(layers.Dense(10, activation='softmax'))
     
     # print the model
     model.summary()
     
     # load data
     (train_images, train_labels), (test_images, test_labels) = mnist.load_data()
     train_images = train_images.reshape((60000, 28, 28, 1))
     train_images = train_images.astype('float32')/255 # normalize to 0~1
     
     test_images = test_images.reshape((10000, 28, 28, 1))
     test_images = test_images.astype('float32')/255
     
     # convert to one-hot vectors
     train_labels = to_categorical(train_labels)
     test_labels = to_categorical(test_labels)
     
     # define training config
     model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
     
     # train the model
     model.fit(train_images, train_labels, epochs=5, batch_size=64)
     
     # evaluate the model
     test_loss, test_accuracy = model.evaluate(test_images, test_labels)
     print("test loss:", test_loss)
     print("test accuracy:", test_accuracy)
    
    # Output
    Epoch 1/5
    60000/60000 [==============================] - 34s 565us/step - loss: 0.1739 - acc: 0.9468
    Epoch 2/5
    60000/60000 [==============================] - 39s 652us/step - loss: 0.0457 - acc: 0.9859
    Epoch 3/5
    60000/60000 [==============================] - 36s 598us/step - loss: 0.0307 - acc: 0.9906
    Epoch 4/5
    60000/60000 [==============================] - 37s 614us/step - loss: 0.0239 - acc: 0.9930
    Epoch 5/5
    60000/60000 [==============================] - 35s 590us/step - loss: 0.0193 - acc: 0.9941
    10000/10000 [==============================] - 2s 189us/step
    test loss: 0.028174066650505848
    test accuracy: 0.9913
    
