SENet Paper Notes

Author: 2018燮2021 | Published 2019-02-15 12:01

    Reference articles:

    An Interpretation of Squeeze-and-Excitation Networks (SENet)
    SENet Study Notes
    Squeeze-and-Excitation Networks Paper Translation (Chinese-English Side-by-Side)

    Abstract—The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at minimal additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ∼25%.

    Keywords: spatial, channel-wise, receptive fields

    • The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer.

    Keywords: the spatial component (spatial encodings), representation, feature hierarchy

    • A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy.

    Keywords: channel relationship, adaptively recalibrates, interdependencies between channels

    • In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels.

    INTRODUCTION

    1、At each convolutional layer in the network, a collection of filters expresses neighbourhood spatial connectivity patterns along input channels—fusing spatial and channel-wise information together within local receptive fields. By interleaving a series of convolutional layers with non-linear activation functions and downsampling operators, CNNs are able to produce robust representations that capture hierarchical patterns and attain global theoretical receptive fields.

    2、Recent research has shown that these representations can be strengthened by integrating learning mechanisms into the network that help capture spatial correlations between features. One such approach, popularised by the Inception family of architectures [5], [6], incorporates multi-scale processes into network modules to achieve improved performance.

    3、We propose a mechanism that allows the network to perform feature recalibration, through which it can learn to use global information to selectively emphasise informative features and suppress less useful ones.

    The structure of the SE building block

    Keywords: feature recalibration, squeeze operation (channel descriptor), aggregation, excitation operation, a simple self-gating mechanism

    1、The features U are first passed through a squeeze operation, which produces a channel descriptor by aggregating feature maps across their spatial dimensions (H × W). The function of this descriptor is to produce an embedding of the global distribution of channel-wise feature responses, allowing information from the global receptive field of the network to be used by all its layers.

    2、The aggregation is followed by an excitation operation, which takes the form of a simple self-gating mechanism that takes the embedding as input and produces a collection of per-channel modulation weights. These weights are applied to the feature maps U to generate the output of the SE block which can be fed directly into subsequent layers of the network.
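The squeeze and excitation steps described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the weight shapes are illustrative, and the two-layer FC-ReLU-FC-sigmoid gate with reduction ratio r follows the description in the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(U, W1, W2):
    """Squeeze-and-Excitation on features U of shape (C, H, W).

    W1: (C//r, C) and W2: (C, C//r) are the excitation weights,
    where r is the reduction ratio.
    """
    z = U.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0))  # excitation: FC-ReLU-FC-sigmoid -> (C,)
    return U * s[:, None, None]              # recalibrate: per-channel scaling

# usage sketch
rng = np.random.default_rng(0)
C, H, W, r = 32, 8, 8, 16
U = rng.standard_normal((C, H, W))
out = se_block(U, rng.standard_normal((C // r, C)), rng.standard_normal((C, C // r)))
```

Note that the output has the same shape as U, which is what lets the block be fed directly into subsequent layers.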

    Keywords: class-agnostic manner, class-specific manner

    • It is possible to construct an SE network (SENet) by simply stacking a collection of SE blocks. Moreover, these SE blocks can also be used as a drop-in replacement for the original block at a range of depths in the network architecture (Sec. 6.4).
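As a sketch of the drop-in usage (the helper names and the toy residual branch here are illustrative assumptions, not the paper's code), an SE-ResNet-style block rescales the residual branch's output before the identity addition:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_scale(x, W1, W2):
    # squeeze (global average pool over H, W) then a two-layer gate
    z = x.mean(axis=(1, 2))
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0))
    return x * s[:, None, None]

def se_residual_block(x, branch, W1, W2):
    # SE recalibrates the residual branch before the skip connection,
    # so it replaces the original block without changing shapes.
    return x + se_scale(branch(x), W1, W2)

rng = np.random.default_rng(1)
C, r = 16, 4
x = rng.standard_normal((C, 6, 6))
W1, W2 = rng.standard_normal((C // r, C)), rng.standard_normal((C, C // r))
y = se_residual_block(x, lambda t: np.maximum(t, 0), W1, W2)  # toy branch: ReLU
```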


    • While the template for the building block is generic, the role it performs at different depths differs throughout the network.
    • In earlier layers, it excites informative features in a class-agnostic manner, strengthening the shared low-level representations.
    • In later layers, the SE blocks become increasingly specialised, and respond to different inputs in a highly class-specific manner (Sec. 7.2).
    • As a consequence, the benefits of the feature recalibration performed by SE blocks can be accumulated through the network.

    Deeper architectures

    Keywords: learning, representation

    • VGGNets [11] and Inception models [5] showed that increasing the depth of a network could significantly increase the quality of representations that it was capable of learning.
    • By regulating the distribution of the inputs to each layer, Batch Normalization (BN) [6] added stability to the learning process in deep networks and produced smoother optimisation surfaces [12].
    • Building on these works, ResNets demonstrated that it was possible to learn considerably deeper and stronger networks through the use of identity-based skip connections [13], [14].
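The BN point above can be made concrete with a minimal NumPy sketch of the training-mode forward pass (the learnable scale gamma and shift beta are shown; running statistics and backprop are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (N, C) batch. Normalize each feature over the batch dimension,
    # then apply the learned affine transform.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(2)
x = rng.standard_normal((64, 8)) * 3.0 + 5.0   # shifted, scaled inputs
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

After normalization each feature has (approximately) zero mean and unit variance, which is the input-distribution regulation the bullet refers to.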

    An alternative, but closely related line of research has focused on methods to improve the functional form of the computational elements contained within a network.

    • Grouped convolutions have proven to be a popular approach for increasing the cardinality of learned transformations [18], [19].
    • More flexible compositions of operators can be achieved with multi-branch convolutions [5], [6], [20], [21], which can be viewed as a natural extension of the grouping operator.
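For illustration, here is a grouped 1×1 convolution in NumPy (the group count and shapes are assumptions for the sketch): each group sees only its own slice of the input channels, which raises the cardinality of the transform while using fewer parameters than full channel mixing.

```python
import numpy as np

def grouped_conv1x1(x, weights):
    # x: (C_in, H, W); weights: one (C_out_g, C_in_g) matrix per group.
    groups = len(weights)
    cin_g = x.shape[0] // groups
    outs = []
    for g, w in enumerate(weights):
        xg = x[g * cin_g:(g + 1) * cin_g]            # this group's input slice
        outs.append(np.einsum('oc,chw->ohw', w, xg)) # mix only within the group
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 4, 4))
w = [rng.standard_normal((2, 4)) for _ in range(2)]  # 2 groups: 4 in -> 2 out each
y = grouped_conv1x1(x, w)
```

With groups = 1 this reduces to an ordinary 1×1 convolution; multi-branch designs can be seen as relaxing the constraint that every branch apply the same operator.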

    In prior work, cross-channel correlations are typically mapped as new combinations of features, either independently of spatial structure [22], [23] or jointly by using standard convolutional filters [24] with 1 × 1 convolutions.
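The 1×1-convolution view of cross-channel mapping can be written as a single matrix multiply over the channel axis, applied independently at every spatial position (shapes here are illustrative):

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in). A 1x1 convolution is exactly a
    # linear recombination of channels at each pixel, with no spatial mixing.
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(4)
x = rng.standard_normal((6, 5, 5))
w = rng.standard_normal((3, 6))
y = conv1x1(x, w)
# equivalent per-pixel matmul formulation:
y_ref = (w @ x.reshape(6, -1)).reshape(3, 5, 5)
```

The equivalence makes explicit why such mappings are instance-agnostic: the same channel recombination w is applied everywhere, regardless of the input content.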

    • Much of this research has concentrated on the objective of reducing model and computational complexity, reflecting an assumption that channel relationships can be formulated as a composition of instance-agnostic functions with local receptive fields.

    • In contrast, we claim that providing the unit with a mechanism to explicitly model dynamic, non-linear dependencies between channels using global information can ease the learning process, and significantly enhance the representational power of the network.

Original link: https://www.haomeiwen.com/subject/pyyyeqtx.html