
PyTorch Convolution Layers

Author: DejavuMoments | Published 2019-05-11 23:09

    What is a 1×1 convolution kernel used for?

    The 1x1 convolution kernel was introduced in Network in Network. Its main uses are:
    1. Compressing or expanding the channel dimension.
    2. Acting like a fully connected layer applied at each position; followed by a ReLU, it adds extra non-linearity (see the sketch below).
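    A minimal sketch of both uses; the channel counts (256 → 64 → 256) and spatial size are illustrative, not taken from the article:

    import torch
    import torch.nn as nn

    x = torch.randn(8, 256, 28, 28)            # (batch, channels, H, W)

    reduce = nn.Conv2d(256, 64, kernel_size=1)   # 1x1 conv: compress channels 256 -> 64
    expand = nn.Conv2d(64, 256, kernel_size=1)   # 1x1 conv: expand channels 64 -> 256

    y = expand(torch.relu(reduce(x)))            # ReLU in between adds non-linearity
    print(y.shape)                               # torch.Size([8, 256, 28, 28]); spatial size unchanged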

    Why apply padding?

    1. It keeps the feature map from shrinking after repeated convolutions.
    2. It preserves edge information, which the kernel would otherwise sweep over far fewer times than interior positions.

    Padding generally comes in two modes: valid (no padding) and same (pad so the output keeps the input's spatial size), as compared in the sketch below.
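    A quick comparison of the two modes; the string padding values 'valid' and 'same' require PyTorch 1.9 or newer, and the shapes here are illustrative:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 3, 32, 32)                              # (batch, channels, H, W)

    valid = nn.Conv2d(3, 16, kernel_size=3, padding='valid')   # no padding
    same  = nn.Conv2d(3, 16, kernel_size=3, padding='same')    # pad so H and W are preserved

    print(valid(x).shape)   # torch.Size([1, 16, 30, 30]); shrinks by kernel_size - 1
    print(same(x).shape)    # torch.Size([1, 16, 32, 32]); spatial size preserved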

    ResNet (in its bottleneck blocks) and Inception (for dimension reduction before larger kernels) are two classic architectures that make heavy use of 1×1 convolutions.

    Conv1d

    torch.nn.Conv1d(
        in_channels,           # number of channels in the input signal
        out_channels,          # number of filters, i.e. channels produced by the convolution
        kernel_size,           # size of the convolving kernel
        stride=1,              # step size of the sliding window
        padding=0,             # zero-padding added to both sides of the input
        dilation=1,            # spacing between kernel elements
        groups=1,              # number of blocked connections from input to output channels
        bias=True,             # if True, adds a learnable bias to the output
        padding_mode='zeros'   # 'zeros', 'reflect', 'replicate' or 'circular'
    )
    

    Applies a 1D convolution over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size (N, C_{\text{in}}, L) and output (N, C_{\text{out}}, L_{\text{out}}) can be precisely described as:

    \text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)

    where ⋆ is the valid cross-correlation operator, N is a batch size, C denotes a number of channels, L is a length of signal sequence.

    Here 32 is the batch size, 35 is the maximum sentence length, and 256 is the word-embedding dimension.

    When feeding this into a 1D convolution, the tensor must be permuted from 32*35*256 to 32*256*35, because Conv1d slides over the last dimension. With out_channels=100 and kernel_size=2, the output size is 32*100*(35-2+1) = 32*100*34, as in the sketch below.
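    A runnable sketch of this example; the layer hyperparameters out_channels=100 and kernel_size=2 are inferred from the output shape above:

    import torch
    import torch.nn as nn

    x = torch.randn(32, 35, 256)          # (batch, seq_len, embedding_dim)
    x = x.permute(0, 2, 1)                # -> (32, 256, 35): Conv1d convolves over the last dim

    conv1 = nn.Conv1d(in_channels=256, out_channels=100, kernel_size=2)
    out = conv1(x)

    print(out.shape)                      # torch.Size([32, 100, 34]), since 35 - 2 + 1 = 34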

    kernel_size: the width of the 1D kernel (2 in the example above).

    stride: the step size of the sliding window; defaults to 1.

    Conv2d

    torch.nn.Conv2d(
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        padding=0,
        dilation=1,
        groups=1,
        bias=True,
        padding_mode='zeros'
    )
    

    Applies a 2D convolution over an input signal composed of several input planes.
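    The parameters mirror Conv1d, except that kernel_size, stride, padding and dilation may each be an int or a tuple of two ints for the height and width dimensions. A minimal usage sketch; the image size and channel counts are illustrative:

    import torch
    import torch.nn as nn

    x = torch.randn(8, 3, 64, 64)         # (batch, channels, H, W), e.g. a batch of RGB images

    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
    out = conv(x)

    print(out.shape)                      # torch.Size([8, 16, 64, 64]); padding=1 keeps H and W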

    AdaptiveMaxPool1d

    Applies a 1D adaptive max pooling over an input signal composed of several input planes.
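    Unlike MaxPool1d, the adaptive version takes the desired output length and works out the pooling window itself. A short sketch, reusing the Conv1d output shape from the example above:

    import torch
    import torch.nn as nn

    x = torch.randn(32, 100, 34)                 # (batch, channels, length), e.g. the Conv1d output

    pool = nn.AdaptiveMaxPool1d(output_size=1)   # pool each channel down to a single value
    out = pool(x)

    print(out.shape)                             # torch.Size([32, 100, 1])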
