Paper: https://arxiv.org/abs/1905.11946
PyTorch code: https://github.com/lukemelas/EfficientNet-PyTorch
Main idea of the paper
The paper studies how model scaling affects model performance. By adjusting the network depth (number of layers), the width (number of channels), and the input image resolution, it derives better networks: the resulting models have both fewer parameters and higher accuracy than the originals. The argument is built around the figure below.
![](https://img.haomeiwen.com/i18577060/76f3b29e4469d820.png)
Traditional scaling methods usually adjust only one of these dimensions. For example, ResNet scales the number of layers to go from ResNet-18 up to ResNet-200, MobileNets scale the number of channels, and other work improves accuracy by enlarging the input image.
This paper instead scales all three dimensions of an existing baseline network at once (layers, channels, and resolution) to obtain better accuracy and efficiency. The design of the baseline network therefore also matters; the paper's EfficientNet-B0 baseline is shown below:
![](https://img.haomeiwen.com/i18577060/605f4f971f316835.png)
The remaining work is to tune the three factors d, w, r to their best values. For EfficientNet-B0, φ is fixed to 1 and a small grid search gives α = 1.2, β = 1.1, γ = 1.15. The scaling formula is:
![](https://img.haomeiwen.com/i18577060/c58d43e82a799460.png)
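Written out, the compound scaling rule shown in the figure is:

$$
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi}
\qquad \text{s.t.}\ \ \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,\quad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1
$$

Since FLOPS grow linearly with depth but quadratically with width and resolution, the constraint α·β²·γ² ≈ 2 means that each increment of φ roughly doubles the compute budget.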
EfficientNet-B0 model details (code)
Feature extraction in the overall model is made up of the stem, the blocks, and the head; the final feature map is then passed to a fully connected layer for classification.
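The structure printed below can be reproduced with the repo linked at the top (assuming `pip install efficientnet_pytorch`); `from_name` builds an untrained B0, while `from_pretrained` also downloads ImageNet weights.

```python
from efficientnet_pytorch import EfficientNet

# Build EfficientNet-B0 and print its module tree (the output shown below).
model = EfficientNet.from_name('efficientnet-b0')
print(model)
```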
EfficientNet(
(_conv_stem): Conv2dStaticSamePadding(
3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn0): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_blocks): ModuleList(
(0): MBConvBlock(
(_depthwise_conv): Conv2dStaticSamePadding(
32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
32, 8, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
8, 32, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(16, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(1): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(96, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
96, 96, kernel_size=(3, 3), stride=[2, 2], groups=96, bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn1): BatchNorm2d(96, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
96, 4, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
4, 96, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(24, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(2): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
144, 144, kernel_size=(3, 3), stride=(1, 1), groups=144, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
144, 6, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
6, 144, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(24, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(3): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
144, 144, kernel_size=(5, 5), stride=[2, 2], groups=144, bias=False
(static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0)
)
(_bn1): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
144, 6, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
6, 144, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
144, 40, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(40, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(4): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
240, 240, kernel_size=(5, 5), stride=(1, 1), groups=240, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
240, 10, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
10, 240, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(40, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(5): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
240, 240, kernel_size=(3, 3), stride=[2, 2], groups=240, bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn1): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
240, 10, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
10, 240, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(6): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(7): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(8): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(5, 5), stride=[1, 1], groups=480, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(9): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(10): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(11): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=[2, 2], groups=672, bias=False
(static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(12): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(13): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(14): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(15): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(3, 3), stride=[1, 1], groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 320, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(320, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
)
(_conv_head): Conv2dStaticSamePadding(
320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_avg_pooling): AdaptiveAvgPool2d(output_size=1)
(_dropout): Dropout(p=0.2, inplace=False)
(_fc): Linear(in_features=1280, out_features=1000, bias=True)
(_swish): MemoryEfficientSwish()
)
Take a 224x224 color image as an example; its tensor shape is [1, 3, 224, 224].
Forward pass through the stem. The swish activation used here appears throughout the rest of the network; judging from its name, MemoryEfficientSwish is a memory-saving implementation. In the repo it appears to be a custom autograd Function that stores only the input and recomputes the sigmoid during the backward pass instead of keeping intermediate activations.
(_conv_stem): Conv2dStaticSamePadding(
3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn0): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
Shape changes: [1,3,224,224]--pad-->[1,3,225,225]--conv-->[1,32,112,112]--bn0-->[1,32,112,112]--swish-->[1,32,112,112]
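The shape changes above can be checked with a minimal sketch of the stem (plain PyTorch, not the repo's exact code; swish is written directly as x * sigmoid(x)):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)              # a dummy 224x224 RGB image

# "Same" padding for a 3x3 / stride-2 conv on 224x224 needs one extra pixel per
# spatial dim, placed asymmetrically (0 left/top, 1 right/bottom), as printed above.
pad  = nn.ZeroPad2d((0, 1, 0, 1))            # [1, 3, 224, 224] -> [1, 3, 225, 225]
conv = nn.Conv2d(3, 32, kernel_size=3, stride=2, bias=False)
bn0  = nn.BatchNorm2d(32, eps=1e-3, momentum=0.01)

def swish(t):
    return t * torch.sigmoid(t)

out = swish(bn0(conv(pad(x))))
print(out.shape)                             # torch.Size([1, 32, 112, 112])
```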
Forward pass through the blocks, taking the first MBConvBlock as an example:
(0): MBConvBlock(
(_depthwise_conv): Conv2dStaticSamePadding(
32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
32, 8, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
8, 32, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(16, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
![](https://img.haomeiwen.com/i18577060/6699e677386d5994.png)
For an explanation of the depthwise convolution used here, see this post: https://www.jianshu.com/p/38dc74d12fcf?utm_source=oschina-app. A sketch of the whole block's forward pass is given below.
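A minimal sketch of block (0) above (its expand ratio is 1, so there is no _expand_conv); the layer names mirror the printout, but this is not the repo's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 32, 112, 112)                 # output of the stem

depthwise = nn.Conv2d(32, 32, 3, padding=1, groups=32, bias=False)  # one filter per channel
bn1       = nn.BatchNorm2d(32, eps=1e-3, momentum=0.01)
se_reduce = nn.Conv2d(32, 8, 1)                  # SE squeeze: 32 -> 8 channels
se_expand = nn.Conv2d(8, 32, 1)                  # SE excite:  8 -> 32 channels
project   = nn.Conv2d(32, 16, 1, bias=False)
bn2       = nn.BatchNorm2d(16, eps=1e-3, momentum=0.01)

def swish(t):
    return t * torch.sigmoid(t)

h = swish(bn1(depthwise(x)))                     # depthwise conv keeps [1, 32, 112, 112]
s = F.adaptive_avg_pool2d(h, 1)                  # squeeze to [1, 32, 1, 1]
s = se_expand(swish(se_reduce(s)))               # SE bottleneck
h = torch.sigmoid(s) * h                         # channel-wise re-weighting
out = bn2(project(h))                            # linear 1x1 projection, no activation
print(out.shape)                                 # torch.Size([1, 16, 112, 112])
# No residual here: the input has 32 channels and the output 16, so no skip connection is added.
```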
After feature extraction through all the blocks, a [1, 320, 7, 7] feature map is obtained.
Head layer design
(_conv_head): Conv2dStaticSamePadding(
320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
Forward-pass shape changes: [1,320,7,7]--conv_head-->[1,1280,7,7]--bn1-->[1,1280,7,7]--swish-->[1,1280,7,7]
Pooling layer and the final fully connected layer
(_avg_pooling): AdaptiveAvgPool2d(output_size=1)
(_dropout): Dropout(p=0.2, inplace=False)
(_fc): Linear(in_features=1280, out_features=1000, bias=True)
After conv_head the shape goes [1,1280,7,7]--avg_pool-->[1,1280,1,1]--view-->[1,1280]--dropout-->[1,1280]--fc-->[1,1000]. The final output for one image is [1,1000], because there are 1000 classes to predict. This completes the forward pass of the network.
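The tail of the network (head conv plus classifier) can be sketched end to end, again only to verify the shapes (not the repo's exact code):

```python
import torch
import torch.nn as nn

feat = torch.randn(1, 320, 7, 7)                 # output of the last MBConv block

conv_head = nn.Conv2d(320, 1280, 1, bias=False)
bn1       = nn.BatchNorm2d(1280, eps=1e-3, momentum=0.01)
avg_pool  = nn.AdaptiveAvgPool2d(1)
dropout   = nn.Dropout(p=0.2)
fc        = nn.Linear(1280, 1000)

def swish(t):
    return t * torch.sigmoid(t)

h = swish(bn1(conv_head(feat)))                  # [1, 1280, 7, 7]
h = avg_pool(h).flatten(start_dim=1)             # [1, 1280]
logits = fc(dropout(h))                          # [1, 1000] class scores
print(logits.shape)
```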
The network uses both dropout and drop connect to fight overfitting; the difference between the two is discussed in this post: https://www.cnblogs.com/tornadomeet/p/3430312.html. A sketch of drop connect follows.
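Roughly, dropout zeroes individual activations before the fully connected layer, while drop connect (stochastic depth) zeroes the entire residual branch of an MBConv block for some samples. A hedged sketch of the latter, not the repo's exact implementation:

```python
import torch

def drop_connect(x, p, training):
    # Drop the whole residual branch with probability p, independently per sample.
    if not training:
        return x
    keep_prob = 1.0 - p
    # One Bernoulli draw per sample, broadcast over channels and spatial dims.
    mask = (torch.rand(x.shape[0], 1, 1, 1, device=x.device) < keep_prob).float()
    return x / keep_prob * mask      # rescale so the expected value is unchanged
```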
Finally, the baseline network is scaled with the chosen α, β, γ and different values of φ to obtain EfficientNet-B1 through B7. As for finding the best parameters, the only option is to keep experimenting, and most of us simply do not have the hardware and the time that this requires.
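For reference, the per-variant settings used in the repo (width coefficient, depth coefficient, input resolution, dropout rate) are roughly as follows; these values are quoted from memory, so check them against efficientnet_pytorch/utils.py before relying on them:

```python
# (width_coefficient, depth_coefficient, resolution, dropout_rate)
efficientnet_params = {
    'efficientnet-b0': (1.0, 1.0, 224, 0.2),
    'efficientnet-b1': (1.0, 1.1, 240, 0.2),
    'efficientnet-b2': (1.1, 1.2, 260, 0.3),
    'efficientnet-b3': (1.2, 1.4, 300, 0.3),
    'efficientnet-b4': (1.4, 1.8, 380, 0.4),
    'efficientnet-b5': (1.6, 2.2, 456, 0.4),
    'efficientnet-b6': (1.8, 2.6, 528, 0.5),
    'efficientnet-b7': (2.0, 3.1, 600, 0.5),
}
```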
References:
Depthwise convolution: https://www.jianshu.com/p/38dc74d12fcf?utm_source=oschina-app
Dropout vs. drop connect: https://www.cnblogs.com/tornadomeet/p/3430312.html
Commentary on EfficientNet: https://www.zhihu.com/question/326833457/answer/700322601