1D CNN

Author: 文均 | Published 2019-02-28 16:26

    Reference: https://blog.goodaudience.com/introduction-to-1d-convolutional-neural-networks-in-keras-for-time-sequences-3a7ff801a2cf

    When to use a 1D CNN

    • To extract features from short (fixed-length) segments of the overall data set
    • When the position of a feature within a segment is of low relevance

    A 1D CNN is very effective when you expect to derive interesting
    features from shorter (fixed-length) segments of the overall data set
    and where the location of the feature within the segment is not of
    high relevance.

    Typical data: sensor time series

    Differences between a 1D CNN and a 2D CNN

    • The dimensionality of the input data is different
    • The way the convolution kernel traverses the data is different
    (Figure: 1d-cnn-vs-2d-cnn.png)
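
    The difference is easy to see from the output shapes. A minimal sketch (the
    layer sizes here are illustrative and not from the original post): a Conv1D
    kernel slides along the time axis only, while a Conv2D kernel slides along
    both spatial axes.

    from keras.models import Sequential
    from keras.layers import Conv1D, Conv2D

    m1 = Sequential()
    m1.add(Conv1D(8, 3, input_shape=(80, 3)))           # 80 time steps, 3 channels
    print(m1.output_shape)                               # (None, 78, 8): length shrinks along one axis

    m2 = Sequential()
    m2.add(Conv2D(8, (3, 3), input_shape=(28, 28, 1)))   # 28x28 image, 1 channel
    print(m2.output_shape)                                # (None, 26, 26, 8): shrinks along two axes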

    Application: activity recognition

    • Accelerometer data: x, y, z axes
    • Activity classes: walking, jogging, standing, etc.
    (Figure: data.png)
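
    Before training, the raw recording has to be cut into fixed-length segments.
    A hedged sketch of one way to do this (the helper name and the 50% window
    overlap are assumptions, not from the original post); acc is an (n, 3) array
    of x/y/z readings and labels holds one class id per reading:

    import numpy as np

    TIME_PERIODS = 80          # 4 s of data at 20 Hz
    STEP = 40                  # 50% overlap between consecutive windows

    def make_segments(acc, labels, time_periods=TIME_PERIODS, step=STEP):
        segments, seg_labels = [], []
        for start in range(0, len(acc) - time_periods, step):
            window = acc[start:start + time_periods]              # shape (80, 3)
            # label each window with the most frequent label inside it
            values, counts = np.unique(labels[start:start + time_periods],
                                       return_counts=True)
            segments.append(window)
            seg_labels.append(values[np.argmax(counts)])
        return np.asarray(segments), np.asarray(seg_labels)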

    Building the 1D CNN

    (Figure: cnn.png)

    Building the network in Keras:

    from keras.models import Sequential
    from keras.layers import (Reshape, Conv1D, MaxPooling1D,
                              GlobalAveragePooling1D, Dropout, Dense)

    TIME_PERIODS = 80                           # 4 s of data at 20 Hz
    num_sensors = 3                             # x, y, z acceleration
    num_classes = 6                             # walking, jogging, standing, ...
    input_shape = TIME_PERIODS * num_sensors    # 240 values per flattened sample

    model_m = Sequential()
    # un-flatten each 240-value sample back into an 80x3 matrix
    model_m.add(Reshape((TIME_PERIODS, num_sensors), input_shape=(input_shape,)))
    model_m.add(Conv1D(100, 10, activation='relu', input_shape=(TIME_PERIODS, num_sensors)))
    model_m.add(Conv1D(100, 10, activation='relu'))
    model_m.add(MaxPooling1D(3))
    model_m.add(Conv1D(160, 10, activation='relu'))
    model_m.add(Conv1D(160, 10, activation='relu'))
    model_m.add(GlobalAveragePooling1D())
    model_m.add(Dropout(0.5))
    model_m.add(Dense(num_classes, activation='softmax'))
    print(model_m.summary())
    

    Network structure

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    reshape_45 (Reshape)         (None, 80, 3)             0         
    _________________________________________________________________
    conv1d_145 (Conv1D)          (None, 71, 100)           3100      
    _________________________________________________________________
    conv1d_146 (Conv1D)          (None, 62, 100)           100100    
    _________________________________________________________________
    max_pooling1d_39 (MaxPooling (None, 20, 100)           0         
    _________________________________________________________________
    conv1d_147 (Conv1D)          (None, 11, 160)           160160    
    _________________________________________________________________
    conv1d_148 (Conv1D)          (None, 2, 160)            256160    
    _________________________________________________________________
    global_average_pooling1d_29  (None, 160)               0         
    _________________________________________________________________
    dropout_29 (Dropout)         (None, 160)               0         
    _________________________________________________________________
    dense_29 (Dense)             (None, 6)                 966       
    =================================================================
    Total params: 520,486
    Trainable params: 520,486
    Non-trainable params: 0
    _________________________________________________________________
    None
    
    • Input data:
      Each sample covers 3 axes; each axis holds 4 seconds of data sampled at 20 Hz, i.e. 80 values per axis and 240 values in total
    • Reshape layer: turns the 240 values of a sample into an 80×3 matrix
    • First Conv1D layer:
      • Kernel length 10, depth (number of channels) 3, stride 1; each kernel turns the 80×3 input into a 71×1 output
      • With 100 kernels the output is 71×100
      • Parameters: 10×3+1=31 per kernel, 3100 in total for the 100 kernels (all of these counts are re-derived in the sketch after this list)
    • Second Conv1D layer:
      • Kernel length 10, depth (number of channels) 100, stride 1; each kernel turns the 71×100 input into a 62×1 output
      • With 100 kernels the output is 62×100
      • Parameters: 10×100+1=1001 per kernel, 100100 in total for the 100 kernels
    • MaxPooling layer:
      • Pool length 3, stride 3, so the pooled length is (62-3)/3+1 = 20; the incomplete trailing window [61,62] is dropped

        [1,2,3] [4,5,6] [ ... ] [58,59,60]  ([61,62] dropped)
           ↓       ↓                ↓
          max1    max2    ...      max20
        
      • The depth (number of channels) stays 100, so the output is 20×100

    • Third Conv1D layer:
      • Kernel length 10, depth 100, stride 1, 160 kernels; the output is
        (20-10+1)×160 = 11×160
      • Parameters: (10×100+1)×160 = 160160
    • Fourth Conv1D layer:
      • Kernel length 10, depth 160, stride 1, 160 kernels; the output is
        (11-10+1)×160 = 2×160
      • Parameters: (10×160+1)×160 = 256160
    • GlobalAveragePooling1D layer:
      • Averages each channel; with 160 channels the output is 160 values
    • Dropout layer
    • Dense layer (output layer):
      • Takes 160 values as input and produces 6 values through a softmax activation
      • Parameters: (160+1)×6 = 966
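
    The output lengths and parameter counts above can be re-derived with a few
    lines of plain Python. A minimal sketch, assuming valid padding and stride 1
    for the Conv1D layers and pool size = stride = 3 for MaxPooling1D:

    def conv1d_out(length, kernel):            # output length under valid padding, stride 1
        return length - kernel + 1

    def conv1d_params(kernel, in_ch, out_ch):  # weights plus one bias per filter
        return (kernel * in_ch + 1) * out_ch

    length = 80
    length = conv1d_out(length, 10); print(length, conv1d_params(10, 3, 100))    # 71 3100
    length = conv1d_out(length, 10); print(length, conv1d_params(10, 100, 100))  # 62 100100
    length = (length - 3) // 3 + 1;  print(length)                               # 20
    length = conv1d_out(length, 10); print(length, conv1d_params(10, 100, 160))  # 11 160160
    length = conv1d_out(length, 10); print(length, conv1d_params(10, 160, 160))  # 2 256160
    print((160 + 1) * 6)                                                         # 966 (Dense layer)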

    Training

    import keras

    # save the best model seen so far and stop early once training accuracy stops improving
    callbacks_list = [
        keras.callbacks.ModelCheckpoint(
            filepath='best_model.{epoch:02d}-{val_loss:.2f}.h5',
            monitor='val_loss', save_best_only=True),
        keras.callbacks.EarlyStopping(monitor='acc', patience=1)
    ]

    model_m.compile(loss='categorical_crossentropy',
                    optimizer='adam', metrics=['accuracy'])

    BATCH_SIZE = 400
    EPOCHS = 50

    history = model_m.fit(x_train,
                          y_train,
                          batch_size=BATCH_SIZE,
                          epochs=EPOCHS,
                          callbacks=callbacks_list,
                          validation_split=0.2,
                          verbose=1)
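
    For the fit call above, each training sample has to be flattened to the 240
    values the Reshape layer expects, and the labels have to be one-hot encoded
    to match the categorical_crossentropy loss. A sketch, assuming segments and
    seg_labels come from a windowing step like the one shown earlier:

    from keras.utils import to_categorical

    x_train = segments.reshape(len(segments), 240).astype('float32')   # (n, 240)
    y_train = to_categorical(seg_labels, num_classes=6)                # (n, 6) one-hot
    print(x_train.shape, y_train.shape)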
    
