
Convolutional Neural Networks (LeNet)

Author: VictorHong | Published 2019-08-11 02:17

Limitations of a multilayer perceptron with a single hidden layer:

  1. Pixels that are adjacent in the same column of an image can end up far apart in the flattened input vector, so the patterns they form may be hard for the model to recognize.

  2. For large input images, fully connected layers easily make the model excessively large.

Convolutional layers address both problems:

  1. A convolutional layer preserves the input shape, so correlations among pixels along both the height and the width dimensions can be recognized effectively.

  2. A convolutional layer slides the same kernel across different positions of the input, so parameters are reused and the parameter count stays small (see the sketch below).
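
To make the parameter saving concrete, the minimal sketch below counts the parameters of a convolutional layer versus a fully connected layer over the same input. The 256×256 input size and the channel/unit counts are illustrative assumptions, not values from this article:

from mxnet import nd
from mxnet.gluon import nn

# A 5x5 kernel is shared across all positions, so the parameter count
# does not depend on the input's height and width.
conv = nn.Conv2D(channels=16, kernel_size=5)
conv.initialize()
conv(nd.zeros((1, 1, 256, 256)))  # one forward pass triggers shape inference
print(sum(p.data().size for p in conv.collect_params().values()))  # 416

# A fully connected layer over the same input, flattened.
dense = nn.Dense(16)
dense.initialize()
dense(nd.zeros((1, 256 * 256)))
print(sum(p.data().size for p in dense.collect_params().values()))  # 1048592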

A convolutional neural network is simply a network that contains convolutional layers.

LeNet is an early convolutional neural network used to recognize images of handwritten digits.

The LeNet model

It consists of two parts:

  1. A convolutional block
  2. A fully connected block

The LeNet model and the operations involved are shown below.

It contains one convolutional block and one fully connected block.

The convolutional block consists of two convolutional-layer + pooling-layer pairs, four layers in total; the fully connected block consists of two fully connected layers plus an output layer, three layers in total.

[Figure: LeNet-5 in compressed notation]

We implement the LeNet model with the Sequential class.

First, import the required packages:

%matplotlib inline
import d2lzh as d2l
from mxnet import autograd, nd, init
from mxnet import gluon
from mxnet.gluon import nn, loss as gloss, data as gdata
import time

Create the LeNet model

All convolutional layers (but not the pooling layers) and all fully connected layers (but not the output layer) use sigmoid as the activation function. The first convolutional layer has 6 output channels and the second has 16: the second layer's input is smaller in height and width than the first's, so increasing the number of output channels keeps the parameter sizes of the two layers comparable. Both pooling layers use a 2×2 window with stride 2 and no padding. The fully connected layers step down through 120, 84, and finally 10 units.

net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, activation='sigmoid'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Dense(120, activation='sigmoid'),
        nn.Dense(84, activation='sigmoid'),
        nn.Dense(10))
net
Sequential(
  (0): Conv2D(None -> 6, kernel_size=(5, 5), stride=(1, 1), Activation(sigmoid))
  (1): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False, global_pool=False, pool_type=max, layout=NCHW)
  (2): Conv2D(None -> 16, kernel_size=(5, 5), stride=(1, 1), Activation(sigmoid))
  (3): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False, global_pool=False, pool_type=max, layout=NCHW)
  (4): Dense(None -> 120, Activation(sigmoid))
  (5): Dense(None -> 84, Activation(sigmoid))
  (6): Dense(None -> 10, linear)
)
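
The None in each layer's printed signature reflects Gluon's deferred initialization: the number of input channels (or input features) of each layer is inferred only when data first flows through the network.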

Initializing and testing the model

Feed randomly generated 28×28 data through the network and inspect each layer's output shape:

net.initialize()
X = nd.random.uniform(shape=(1, 1, 28, 28))
for layer in net:
    X = layer(X)
    print(layer.name, "output shape:\t", X.shape)
conv0 output shape:  (1, 6, 24, 24)
pool0 output shape:  (1, 6, 12, 12)
conv1 output shape:  (1, 16, 8, 8)
pool1 output shape:  (1, 16, 4, 4)
dense0 output shape:     (1, 120)
dense1 output shape:     (1, 84)
dense2 output shape:     (1, 10)
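
These shapes follow directly from the layer arithmetic: a 5×5 kernel with no padding shrinks each spatial side by 4 (28→24 and 12→8), and each 2×2 max pool with stride 2 halves the height and width (24→12 and 8→4). The first Dense layer then automatically flattens its (16, 4, 4) input into a 256-dimensional vector.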

Getting the training data and training the model

Set the batch size and load the training and test data:

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
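
To confirm the data layout, you can peek at one batch (a quick check, not part of the original walkthrough). Fashion-MNIST images are single-channel 28×28, so with batch_size = 256 each batch should have the shapes shown in the comment:

for X, y in train_iter:
    print(X.shape, y.shape)  # (256, 1, 28, 28) (256,)
    break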

Check whether a GPU can be used:

ctx = d2l.try_gpu()
ctx
cpu(0)
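
Here no GPU is available, so everything falls back to the CPU. Roughly, d2l.try_gpu attempts a tiny allocation on gpu(0) and catches the failure; the sketch below paraphrases that logic and is an assumption about the helper, not code from this article:

import mxnet as mx
from mxnet import nd

def try_gpu_sketch():
    # Try a small allocation on the GPU; fall back to the CPU on failure.
    try:
        ctx = mx.gpu()
        _ = nd.zeros((1,), ctx=ctx)
    except mx.base.MXNetError:
        ctx = mx.cpu()
    return ctx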

Rewrite the accuracy-evaluation function so that data is copied to the target device before evaluation:

def evaluate_accuracy(data_iter, net, ctx):
    acc_sum, n = nd.array([0], ctx=ctx), 0
    for X, y in data_iter:
        # Move the data to the target device and cast the labels before comparing
        X, y = X.as_in_context(ctx), y.as_in_context(ctx).astype('float32')
        acc_sum += (net(X).argmax(axis=1) == y).sum()
        n += y.size
    return acc_sum.asscalar() / n
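
As a quick sanity check (not in the original post), the randomly initialized network should score near chance, i.e. about 0.1 for ten classes; here ctx is the CPU, which matches where net was initialized above:

evaluate_accuracy(test_iter, net, ctx)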

Write the training function:

def train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs):
    print("training on ", ctx)
    loss = gloss.SoftmaxCELoss()
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n, start = 0.0, 0.0, 0, time.time()
        for X, y in train_iter:
            # Move each batch to the target device
            X, y = X.as_in_context(ctx), y.as_in_context(ctx)
            with autograd.record():
                y_hat = net(X)
                l = loss(y_hat, y).sum()  # loss summed over the batch
            l.backward()
            trainer.step(batch_size)
            y = y.astype('float32')
            train_l_sum += l.asscalar()
            train_acc_sum += (y_hat.argmax(axis=1) == y).sum().asscalar()
            n += y.size
        test_acc = evaluate_accuracy(test_iter, net, ctx)
        print('epoch %d, loss %.4f, train_acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc, time.time() - start))
lr, num_epochs = 0.9, 5
# Reinitialize the parameters on the target device with Xavier initialization
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
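
Note that the loss is summed over the batch (loss(y_hat, y).sum()), so trainer.step(batch_size) divides the gradient by the batch size, which amounts to an SGD step on the average per-example gradient. The original function signature named its last parameter num but iterated over num_epochs; it is renamed to num_epochs above so the function no longer relies on the global variable.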

Train on the CPU:

train_ch5(net,train_iter,test_iter,batch_size,trainer,ctx,num_epochs)
training on  cpu(0)
epoch 1, loss 2.3197, train_acc 0.099, test acc 0.100, time 27.9 sec
epoch 2, loss 1.9653, train_acc 0.243, test acc 0.575, time 27.2 sec
epoch 3, loss 0.9685, train_acc 0.616, test acc 0.704, time 27.2 sec
epoch 4, loss 0.7596, train_acc 0.706, test acc 0.710, time 27.2 sec
epoch 5, loss 0.6729, train_acc 0.734, test acc 0.744, time 27.2 sec
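
Accuracy barely moves off chance (0.100, i.e. one in ten classes) in the first epoch, likely because the sigmoid activations saturate and slow early learning; it then climbs steadily to 74.4% test accuracy after five epochs.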

Displaying predictions

def show():
    # Grab the first test batch and compare true vs. predicted labels
    for X, y in test_iter:
        break
    true_labels = d2l.get_fashion_mnist_labels(y.asnumpy())
    pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1).asnumpy())
    titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]

    d2l.show_fashion_mnist(X[0:9], titles[0:9])
show()
