CNN Series: AlexNet


Author: VictorHong | Published 2019-08-14 22:14

Deep Convolutional Neural Networks (AlexNet)

AlexNet was the first to demonstrate that learned features can surpass hand-engineered features.

AlexNet is similar to LeNet, but with several notable differences:

  1. It is deeper, with 8 learned layers: 5 convolutional layers, 2 fully connected hidden layers, and 1 output layer.

  2. AlexNet replaced the sigmoid activation function with the simpler ReLU, which makes the network easier to train.

  3. AlexNet uses dropout to control the model complexity of the fully connected layers (a regularization technique).

  4. AlexNet introduced extensive image augmentation, such as flipping, cropping, and color changes, to further enlarge the dataset and mitigate overfitting.
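Difference 2 can be illustrated numerically: the sigmoid's gradient is at most 0.25 and vanishes for inputs far from zero, while ReLU's gradient is exactly 1 for any positive input, so gradients propagate through deep networks without shrinking. A minimal sketch in plain Python (not part of the original post):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25 when x = 0, decays toward 0 elsewhere

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # constant 1 on the entire positive side

# Sigmoid gradients shrink quickly away from 0; ReLU gradients do not.
for x in (0.0, 2.0, 5.0, 10.0):
    print(x, sigmoid_grad(x), relu_grad(x))
```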

The figure below compares the LeNet and AlexNet architectures.

[Figure: the structure of AlexNet]

Implementing a slightly simplified AlexNet

import d2lzh as d2l
import mxnet as mx
from mxnet import nd, gluon, init
from mxnet.gluon import nn

Building the model

net = nn.Sequential()
# Use a larger 11 x 11 window to capture objects, and a stride of 4 to greatly
# reduce the output height and width. The number of output channels is much
# larger than in LeNet.
net.add(nn.Conv2D(96, kernel_size=11, strides=4, activation='relu'),
        nn.MaxPool2D(pool_size=3, strides=2),
        # Use a smaller convolution window, with padding of 2 so the input and
        # output have the same height and width, and increase the number of
        # output channels.
        nn.Conv2D(256, kernel_size=5, padding=2, activation='relu'),
        nn.MaxPool2D(pool_size=3, strides=2),
        # Three consecutive 3 x 3 convolutions; the first two further increase
        # the number of output channels.
        nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
        nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
        nn.Conv2D(256, kernel_size=3, padding=1, activation='relu'),
        nn.MaxPool2D(pool_size=3, strides=2),
        # Two large fully connected layers, each followed by dropout to curb
        # overfitting.
        nn.Dense(4096, activation='relu'), nn.Dropout(0.5),
        nn.Dense(4096, activation='relu'), nn.Dropout(0.5),
        # The output layer differs from the original AlexNet: 10 outputs
        # instead of 1000, since we train on the much smaller Fashion-MNIST
        # dataset.
        nn.Dense(10))
# Feed a dummy single-channel 224 x 224 input through the network to print
# each layer's output shape.
X = nd.random.uniform(shape=(1, 1, 224, 224))
net.initialize()
for layer in net:
    X = layer(X)
    print(layer.name, "output shape:\t", X.shape)
conv5 output shape:  (1, 96, 54, 54)
pool3 output shape:  (1, 96, 26, 26)
conv6 output shape:  (1, 256, 26, 26)
pool4 output shape:  (1, 256, 12, 12)
conv7 output shape:  (1, 384, 12, 12)
conv8 output shape:  (1, 384, 12, 12)
conv9 output shape:  (1, 256, 12, 12)
pool5 output shape:  (1, 256, 5, 5)
dense3 output shape:     (1, 4096)
dropout2 output shape:   (1, 4096)
dense4 output shape:     (1, 4096)
dropout3 output shape:   (1, 4096)
dense5 output shape:     (1, 10)
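The shapes printed above follow from the standard output-size formula for convolution and pooling, floor((n + 2p - k) / s) + 1 applied per spatial dimension. A quick plain-Python check, with the layer hyperparameters copied from the network above:

```python
def out_size(n, k, s=1, p=0):
    # floor((n + 2p - k) / s) + 1, applied to height or width
    return (n + 2 * p - k) // s + 1

h = 224
h = out_size(h, k=11, s=4)  # conv, 11 x 11, stride 4   -> 54
h = out_size(h, k=3, s=2)   # max pool, 3 x 3, stride 2 -> 26
h = out_size(h, k=5, p=2)   # conv, 5 x 5, padding 2    -> 26
h = out_size(h, k=3, s=2)   # max pool                  -> 12
h = out_size(h, k=3, p=1)   # three 3 x 3 convs, pad 1  -> 12
h = out_size(h, k=3, p=1)
h = out_size(h, k=3, p=1)
h = out_size(h, k=3, s=2)   # final max pool            -> 5
print(h)  # 5, matching pool5's output shape (1, 256, 5, 5)
```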

Loading the data

# Fashion-MNIST images are 28 x 28; resize them to 224 x 224 so they match
# the input size the network above expects.
batch_size = 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)

Training

lr, num_epochs = 0.1, 5
ctx = d2l.try_gpu()
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)
training on gpu(0)
epoch 1, loss 0.8176, train acc 0.699, test acc 0.858, time 181.2 sec
epoch 2, loss 0.3950, train acc 0.854, test acc 0.882, time 176.6 sec
epoch 3, loss 0.3292, train acc 0.878, test acc 0.897, time 177.3 sec
epoch 4, loss 0.2937, train acc 0.890, test acc 0.901, time 180.8 sec
epoch 5, loss 0.2657, train acc 0.900, test acc 0.912, time 176.8 sec
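The dropout applied to the two 4096-unit layers above can be sketched in plain Python. This is the "inverted dropout" convention that Gluon's nn.Dropout follows: at training time each activation is zeroed with probability p and the survivors are scaled by 1/(1-p), so the expected activation is unchanged and no rescaling is needed at test time. An illustrative sketch, not Gluon's actual implementation:

```python
import random

def dropout(activations, p, training=True):
    # Inverted dropout: zero each unit with probability p, scale survivors
    # by 1 / (1 - p) so the expected output equals the input.
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

random.seed(0)
out = dropout([1.0] * 10000, p=0.5)
print(sum(out) / len(out))  # close to 1.0 on average
```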
