
ShuffleNet v2 Notes

Author: 寒夏凉秋 | Published 2020-02-14 16:34

    The authors analyze, from the perspectives of memory access cost (MAC) and GPU parallelism, how a network should be designed to further reduce runtime and directly improve model efficiency.

    Design principles

    • Equal channel width minimizes memory access cost (MAC)
    • Excessive group convolution increases MAC
    • Network fragmentation reduces degree of parallelism
    • Element-wise operations are non-negligible

    Equal channel width minimizes memory access cost

    MAC is minimized when the number of input channels equals the number of output channels.

    Suppose a 1 × 1 convolution takes an input feature map of size C_{1} \times H \times W and produces an output feature map of size C_{2} \times H \times W. Its FLOPs are then H W C_{1} C_{2}.
    During this computation, the input feature map occupies H W C_{1} memory, the output feature map H W C_{2}, and the kernel weights C_{1} C_{2}.

    The total memory access is then:

    MAC = H W (C_{1}+C_{2}) + C_{1} C_{2}

    Writing the FLOPs as B = H W C_{1} C_{2}, this gives:
    \begin{aligned} MAC &= H W (C_{1}+C_{2}) + C_{1} C_{2}\\ &= \sqrt{(H W)^{2} (C_{1}+C_{2})^{2}} + \frac{B}{H W}\\ &\geq \sqrt{(H W)^{2} \cdot 4 C_{1} C_{2}} + \frac{B}{H W}\\ &= 2 \sqrt{H W (H W C_{1} C_{2})} + \frac{B}{H W}\\ &= 2 \sqrt{H W B} + \frac{B}{H W} \end{aligned}

    using the inequality:

    (C_{1}+C_{2})^{2} \geq 4 C_{1} C_{2}

    which follows from C_{1}^{2} + C_{2}^{2} \geq 2 C_{1} C_{2}.

    Equality holds when C_{1} = C_{2}: MAC is minimized when the input and output channel counts are equal.
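
    A quick numeric check of this bound (a minimal sketch; the spatial size and channel splits below are my own choices). Fixing the FLOPs B = H W C_{1} C_{2} and sweeping the split between C_{1} and C_{2}:

    import math

    HW = 56 * 56        # H * W
    PROD = 64 * 64      # C1 * C2, held fixed => fixed FLOPs B
    B = HW * PROD

    def mac(c1, c2):
        # input map + output map + 1x1 kernel weights
        return HW * (c1 + c2) + c1 * c2

    for c1 in (16, 32, 64, 128, 256):
        print(c1, PROD // c1, mac(c1, PROD // c1))
    print('lower bound:', 2 * math.sqrt(HW * B) + B / HW)
    # the minimum, at c1 == c2 == 64, meets the bound 2*sqrt(HW*B) + B/(HW) exactly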

    Excessive group convolution increases MAC

    Group convolution reduces computational complexity by replacing the dense convolution over all channels with a sparse one (connections exist only within each channel group). For a 1 × 1 group convolution with g groups:
    FLOPs = H W \cdot \frac{C_{1}}{g} \cdot \frac{C_{2}}{g} \cdot g = \frac{H W C_{1} C_{2}}{g}

    Its MAC is:

    \begin{aligned} MAC &= H W (C_{1}+C_{2}) + \frac{C_{1} C_{2}}{g}\\ &= H W C_{1} + \frac{B g}{C_{1}} + \frac{B}{H W} \end{aligned}

    where B = \frac{H W C_{1} C_{2}}{g} is the FLOPs.

    So, for fixed FLOPs B and fixed input channels C_{1}, memory access grows as the group number g increases. The paper verifies this with an experiment:


    [Figure: GPU/ARM running speed under different group numbers g at equal FLOPs]

    Clearly, a large group number reduces running speed substantially. For example, on GPU, using 8 groups is more than twice as slow as using 1 group (standard dense convolution), while on ARM it is about 30% slower, mainly because of the increased MAC. The group number g should therefore be chosen with care.
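
    A minimal numeric sketch of this effect (the channel counts are my own choices): hold the FLOPs B and input channels C_{1} fixed, and let C_{2} grow with g so the computation budget stays constant.

    HW = 56 * 56
    C1 = 128

    for g in (1, 2, 4, 8):
        c2 = 128 * g                          # keeps B = HW*C1*c2/g constant
        mac = HW * (C1 + c2) + C1 * c2 // g   # feature maps + group-conv weights
        print(g, mac)
    # MAC grows monotonically with g at equal FLOPs, matching the measured slowdown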

    Network fragmentation reduces degree of parallelism

    A "multi-path" structure, i.e., a multi-branch network (such as the four parallel branches in a GoogLeNet block), uses many small operators. Although such fragmented structures have been shown to improve accuracy, they are unfriendly to GPU parallelism and may therefore reduce efficiency. The authors designed several variants to measure this:


    [Figure: building blocks with different degrees of fragmentation used in the experiment]

    The results below confirm that fragmentation reduces running speed; for example, structure (e) is about three times slower than (c):


    [Figure: runtime comparison of the fragmented structures]

    Element-wise operations are non-negligible

    Element-wise operations are surprisingly time-consuming.

    When counting FLOPs we usually consider only the multiplications in convolutions, but element-wise operations (such as ReLU and tensor/bias addition) also take a considerable share of the time, especially on GPU. They have small FLOPs yet relatively heavy MAC. The time taken by each kind of operation is shown below:


    [Figure: runtime breakdown by operation type]

    Guideline conclusions

    When designing a high-performance network, try to:

    • use convolutions whose input and output channel counts are equal
    • use group convolution with caution
    • reduce the number of parallel branches (network fragmentation)
    • reduce element-wise operations

    Basic building block

    Reviewing ShuffleNet v1, the paper summarizes the lesson for improvement as:

    Therefore, in order to achieve high model capacity and efficiency, the key issue is how to maintain a large number and equally wide channels with neither dense convolution nor too many groups

    That is: how to keep a large number of equally wide channels while using neither dense convolution nor too many groups.

    Channel split

    As illustrated in the unit diagram in the paper: at the beginning of each unit, the input of c feature channels is split into two branches with c - c' and c' channels. Following guideline G3 (avoid too many parallel branches), one branch is kept as an identity shortcut whose output is concatenated directly with the other branch's result. The other branch consists of three convolutions with identical input and output channel counts (satisfying G1). Its two 1 × 1 convolutions are no longer group-wise, because the channel split has already produced two groups. After the concat, as in ShuffleNet v1, a "channel shuffle" is applied to the merged channels so that information can flow between the two branches.
    Element-wise operations such as ReLU and the depthwise convolution exist in only one branch. Moreover, the three successive element-wise operations ("Concat", "Channel Shuffle", and the next unit's "Channel Split") can be merged into a single element-wise operation.

    A PyTorch sketch of the basic (stride = 1) unit:
    import torch
    import torch.nn as nn
    
    def channel_shuffle(x, groups):
        # type: (torch.Tensor, int) -> torch.Tensor
        batchsize, num_channels, height, width = x.size()
        channels_per_group = num_channels // groups
    
        # reshape
        x = x.view(batchsize, groups,
                   channels_per_group, height, width)
    
        x = torch.transpose(x, 1, 2).contiguous()
    
        # flatten
        x = x.view(batchsize, -1, height, width)
    
        return x
    
    class InvertedResidual(nn.Module):
        def __init__(self,inp,oup,stride):
            super(InvertedResidual,self).__init__()
            branch_features = oup // 2
            self.stride = stride
    
            self.branch_1 = nn.Sequential()
    
            self.branch_2 = nn.Sequential(
                nn.Conv2d(inp if (self.stride > 1) else branch_features,
                          branch_features,kernel_size=1,stride=1,padding=0,bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True),
                # 3x3 depthwise convolution (groups == channels); padding=1 keeps the spatial size
                nn.Conv2d(branch_features, branch_features, kernel_size=3, stride=stride, padding=1, bias=False, groups=branch_features),
                nn.BatchNorm2d(branch_features),
                nn.Conv2d(branch_features,branch_features,kernel_size=1,stride=1,padding=0,bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True),
            )
    
        def forward(self, x):
            if self.stride == 1:
                # channel split: one half is the identity shortcut,
                # the other half goes through the three convolutions
                x1, x2 = x.chunk(2, dim=1)
                out = torch.cat((self.branch_1(x1), self.branch_2(x2)), dim=1)
            else:
                out = torch.cat((self.branch_1(x), self.branch_2(x)), dim=1)
            out = channel_shuffle(out, 2)
            return out
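
    A tiny sanity check of channel_shuffle (the example tensor is my own): with groups = 2, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3].

    # the channels of the two groups are interleaved
    t = torch.arange(4).view(1, 4, 1, 1).float()
    print(channel_shuffle(t, 2).flatten().tolist())  # [0.0, 2.0, 1.0, 3.0]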
    

    Down sampling

    The channel-split unit preserves the spatial resolution, but we often need to down-sample the feature map to extract information. For stride = 2 the unit is changed as follows: the channel split operator is removed, so the number of output channels doubles, and branch_1 is no longer an identity shortcut but becomes a 3 × 3 depthwise convolution followed by a 1 × 1 convolution for feature extraction.

    [Figure: the ShuffleNet v2 down-sampling unit (stride = 2)]
    # So we only need to modify the definition of branch_1:
    self.branch_1 = nn.Sequential(
                # 3x3 depthwise conv with stride 2 halves the resolution
                nn.Conv2d(inp, inp, kernel_size=3, stride=2, padding=1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                # 1x1 conv mixes information across channels
                nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True)
            )
    

    In ShuffleNet v2, the channel counts of the blocks are then scaled proportionally to generate networks of different complexity.

    Overall ShuffleNet v2 architecture

    The paper provides the following overall architecture table:


    [Table: overall ShuffleNet v2 architecture]

    Let's build it with PyTorch.

    First we rewrite the building block: based on whether the stride is 2, we decide whether the shortcut branch passes the input through unchanged or applies a 3 × 3 depthwise convolution plus a 1 × 1 convolution for feature extraction.

    class InvertedResidual(nn.Module):
        def __init__(self, inp, oup, stride):
            super(InvertedResidual, self).__init__()
    
            if not (1 <= stride <= 3):
                raise ValueError('illegal stride value')
            self.stride = stride
    
            branch_features = oup // 2
            assert (self.stride != 1) or (inp == branch_features << 1)
    
            if self.stride > 1:
                self.branch1 = nn.Sequential(
                    self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1),
                    nn.BatchNorm2d(inp),
                    nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                    nn.BatchNorm2d(branch_features),
                    nn.ReLU(inplace=True),
                )
            else:
                self.branch1 = nn.Sequential()
    
            self.branch2 = nn.Sequential(
                nn.Conv2d(inp if (self.stride > 1) else branch_features,
                          branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True),
                self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1),
                nn.BatchNorm2d(branch_features),
                nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True),
            )
    
        @staticmethod
        def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False):
            return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i)
    
        def forward(self, x):
            if self.stride == 1:
                x1, x2 = x.chunk(2, dim=1)
                out = torch.cat((x1, self.branch2(x2)), dim=1)
            else:
                out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)
    
            out = channel_shuffle(out, 2)
    
            return out
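
    A quick shape check of the two unit types (a sketch; the channel counts are taken from the 1.0× architecture table):

    x = torch.randn(1, 116, 28, 28)
    keep = InvertedResidual(116, 116, stride=1)  # resolution and channels preserved
    down = InvertedResidual(116, 232, stride=2)  # resolution halved, channels doubled
    print(keep(x).shape)  # torch.Size([1, 116, 28, 28])
    print(down(x).shape)  # torch.Size([1, 232, 14, 14])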
    

    Then, stage by stage, we assemble the complete network for ImageNet 1000-class classification:

    class ShuffleNetV2(nn.Module):
        def __init__(self, stages_repeats, stages_out_channels, num_classes=1000, inverted_residual=InvertedResidual):
            super(ShuffleNetV2, self).__init__()
    
            if len(stages_repeats) != 3:
                raise ValueError('expected stages_repeats as list of 3 positive ints')
            if len(stages_out_channels) != 5:
                raise ValueError('expected stages_out_channels as list of 5 positive ints')
            self._stage_out_channels = stages_out_channels
    
            input_channels = 3
            output_channels = self._stage_out_channels[0]
            self.conv1 = nn.Sequential(
                nn.Conv2d(input_channels, output_channels, 3, 2, 1, bias=False),
                nn.BatchNorm2d(output_channels),
                nn.ReLU(inplace=True),
            )
            input_channels = output_channels
    
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    
            stage_names = ['stage{}'.format(i) for i in [2, 3, 4]]
            for name, repeats, output_channels in zip(
                    stage_names, stages_repeats, self._stage_out_channels[1:]):
                seq = [inverted_residual(input_channels, output_channels, 2)]
                for i in range(repeats - 1):
                    seq.append(inverted_residual(output_channels, output_channels, 1))
                setattr(self, name, nn.Sequential(*seq))
                input_channels = output_channels
    
            output_channels = self._stage_out_channels[-1]
            self.conv5 = nn.Sequential(
                nn.Conv2d(input_channels, output_channels, 1, 1, 0, bias=False),
                nn.BatchNorm2d(output_channels),
                nn.ReLU(inplace=True),
            )
    
            self.fc = nn.Linear(output_channels, num_classes)
    
        def _forward_impl(self, x):
            # See note [TorchScript super()]
            x = self.conv1(x)
            x = self.maxpool(x)
            x = self.stage2(x)
            x = self.stage3(x)
            x = self.stage4(x)
            x = self.conv5(x)
            x = x.mean([2, 3])  # globalpool
            x = self.fc(x)
            return x
    
        def forward(self, x):
            return self._forward_impl(x)
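
    # The loader below relies on two names not defined so far in this article:
    # load_state_dict_from_url comes from torch.hub, and model_urls maps each
    # architecture name to its pretrained-weight URL (the values here are the
    # ones published by torchvision; treat them as an assumption of this sketch).
    from torch.hub import load_state_dict_from_url

    model_urls = {
        'shufflenetv2_x0.5': 'https://download.pytorch.org/models/shufflenetv2_x0.5-f707e7126e.pth',
        'shufflenetv2_x1.0': 'https://download.pytorch.org/models/shufflenetv2_x1.0-5666bf0f80.pth',
    }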
    
    
    def _shufflenetv2(arch, pretrained, progress, *args, **kwargs):
        model = ShuffleNetV2(*args, **kwargs)
    
        if pretrained:
            model_url = model_urls[arch]
            if model_url is None:
                raise NotImplementedError('pretrained {} is not supported as of now'.format(arch))
            else:
                state_dict = load_state_dict_from_url(model_url, progress=progress)
                model.load_state_dict(state_dict)
    
        return model
    
    

    Scaling the network by channel count gives the different variants; wrapping each configuration in a small constructor (the function names follow torchvision's convention):
    0.5×:

    def shufflenet_v2_x0_5(pretrained=False, progress=True, **kwargs):
        return _shufflenetv2('shufflenetv2_x0.5', pretrained, progress,
                             [4, 8, 4], [24, 48, 96, 192, 1024], **kwargs)
    

    1.0×:

    def shufflenet_v2_x1_0(pretrained=False, progress=True, **kwargs):
        return _shufflenetv2('shufflenetv2_x1.0', pretrained, progress,
                             [4, 8, 4], [24, 116, 232, 464, 1024], **kwargs)
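
    A quick end-to-end check (a sketch with a dummy input; not from the original article):

    # build the 1.0x variant and run a forward pass on a dummy batch
    net = shufflenet_v2_x1_0(pretrained=False)
    x = torch.randn(1, 3, 224, 224)
    print(net(x).shape)  # torch.Size([1, 1000])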
    
    

    Summary and analysis

    ShuffleNet v2 is both efficient and accurate for two main reasons:

    • each building block uses feature channels and network capacity efficiently;
    • in each block, half of the feature channels pass straight through to the next block (the identity shortcut path), which can be viewed as a form of feature reuse.

    Analysis of DenseNet shows that connections between adjacent layers are stronger than those between distant ones, which suggests that dense connections between all layers introduce redundancy: the amount of feature reuse decays exponentially with the distance between two blocks, so reuse between far-apart blocks contributes little. ShuffleNet v2 obtains feature reuse through channel split instead.
