The SqueezeNet architecture
Smaller CNNs offer at least three advantages: they require less computation, need less bandwidth to transfer a trained model, and are more feasible to deploy on FPGAs and other memory-limited hardware. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, SqueezeNet can be compressed to less than 0.5 MB.
- Strategy 1. Replace 3x3 filters with 1x1 filters.
- Strategy 2. Decrease the number of input channels to 3x3 filters (Strategies 1 and 2 are illustrated in the Fire module sketch after this list).
- Strategy 3. Downsample late in the network so that convolution layers have large activation maps.
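Strategies 1 and 2 come together in SqueezeNet's building block, the Fire module: a 1x1 "squeeze" convolution first reduces the number of channels, and an "expand" layer then applies a mix of 1x1 and 3x3 filters to the squeezed output. Below is a minimal PyTorch sketch for illustration, not the authors' original Caffe implementation; the channel counts in the usage example follow the paper's fire2 configuration.

```python
import torch
import torch.nn as nn


class Fire(nn.Module):
    """Fire module: a 1x1 "squeeze" layer (Strategy 2) feeding an "expand"
    layer that mixes 1x1 and 3x3 filters (Strategy 1)."""

    def __init__(self, in_channels, squeeze_channels,
                 expand1x1_channels, expand3x3_channels):
        super().__init__()
        # Squeeze: 1x1 convolutions cut the channel count seen by the 3x3 filters.
        self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
        # Expand: cheap 1x1 filters alongside a reduced number of 3x3 filters.
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand1x1_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand3x3_channels,
                                   kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)


if __name__ == "__main__":
    # Example roughly matching fire2 in the paper: 96 input channels are
    # squeezed to 16, then expanded to 64 + 64 = 128 output channels.
    fire2 = Fire(in_channels=96, squeeze_channels=16,
                 expand1x1_channels=64, expand3x3_channels=64)
    out = fire2(torch.randn(1, 96, 55, 55))
    print(out.shape)  # torch.Size([1, 128, 55, 55])
```

Because the 3x3 filters see only `squeeze_channels` inputs rather than the full `in_channels`, the parameter count of the expensive 3x3 branch shrinks in proportion to the squeeze ratio, which is how Strategy 2 saves parameters without changing the module's output size.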



Experiment

References:
Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360 [cs.CV], 2016.