Using a TCN in Keras


Author: 术枚派 | Published 2021-07-05 22:15

There is a ready-made TCN implementation for Keras (the keras-tcn package); installation and usage are documented on its GitHub page.

Parameters

The TCN layer takes a number of parameters worth understanding. Constructor example (a runnable usage sketch follows the parameter list below):

TCN(
    nb_filters=64,
    kernel_size=3,
    nb_stacks=1,
    dilations=(1, 2, 4, 8, 16, 32),
    padding='causal',
    use_skip_connections=True,
    dropout_rate=0.0,
    return_sequences=False,
    activation='relu',
    kernel_initializer='he_normal',
    use_batch_norm=False,
    use_layer_norm=False,
    use_weight_norm=False,
    **kwargs
)
  • nb_filters: Integer. The number of filters to use in the convolutional layers. Would be similar to units for LSTM. Can be a list. In other words, the number of convolution kernels in each convolutional layer.
  • kernel_size: Integer. The size of the kernel to use in each convolutional layer.
  • dilations: List/Tuple. A dilation list. Example is: [1, 2, 4, 8, 16, 32, 64].
  • nb_stacks: Integer. The number of stacks of residual blocks to use.
  • padding: String. The padding to use in the convolutions. 'causal' for a causal network (as in the original implementation) and 'same' for a non-causal network.
  • use_skip_connections: Boolean. If we want to add skip connections from input to each residual block.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • dropout_rate: Float between 0 and 1. Fraction of the input units to drop.
  • activation: The activation used in the residual blocks o = activation(x + F(x)).
  • kernel_initializer: Initializer for the kernel weights matrix (Conv1D).
  • use_batch_norm: Whether to use batch normalization in the residual layers or not.
  • use_layer_norm: Whether to use layer normalization in the residual layers or not.
  • use_weight_norm: Whether to use weight normalization in the residual layers or not.
  • kwargs: Any other set of arguments for configuring the parent class Layer. For example "name=str", Name of the model. Use unique names when using multiple TCN.
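
As a quick check of how the layer wires into an ordinary Keras model, here is a minimal sketch. It assumes the keras-tcn package is installed (pip install keras-tcn) and uses made-up shapes (100 time steps, 2 features, a single regression output) purely for illustration:

import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense
from tcn import TCN  # provided by the keras-tcn package

# Toy input: 100 time steps, 2 features per step (illustrative only).
inputs = Input(shape=(100, 2))
x = TCN(nb_filters=64,
        kernel_size=3,
        dilations=(1, 2, 4, 8, 16, 32),
        return_sequences=False)(inputs)   # keep only the last time step
outputs = Dense(1)(x)                     # e.g. one-step-ahead regression

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')

# Dummy data just to show the expected array shapes.
X = np.random.randn(32, 100, 2)
y = np.random.randn(32, 1)
model.fit(X, y, epochs=1, verbose=0)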

Receptive field

The receptive field is determined directly by the structure of the TCN: it grows with kernel_size, with the dilation list, and with nb_stacks.
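
For the common residual-block layout with two dilated causal convolutions per block (the layout used by keras-tcn), the receptive field can be estimated as 1 + 2 * (kernel_size - 1) * nb_stacks * sum(dilations). The helper below is just a sketch of that formula, not an official API:

# Rough receptive-field estimate, assuming two dilated causal convolutions
# per residual block (as in keras-tcn).
def tcn_receptive_field(kernel_size, dilations, nb_stacks=1):
    return 1 + 2 * (kernel_size - 1) * nb_stacks * sum(dilations)

# With the example configuration above:
print(tcn_receptive_field(kernel_size=3, dilations=(1, 2, 4, 8, 16, 32)))  # 253

So with kernel_size=3, one stack and dilations up to 32, the layer can look roughly 253 time steps into the past; to cover longer dependencies, extend the dilation list, increase kernel_size, or add stacks.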

(Figure: a TCN without causal convolution, i.e. padding='same'.)
