PyTorch Study Notes (4): A CIFAR Classification Model in Practice

Author: 银色尘埃010 | Published 2020-03-25 16:54

    Training a Classifier

    We already know:

    • How to define a model: torch.nn
    • How to compute a loss: loss = nn.CrossEntropyLoss()
    • How to update parameters: optimizer = optim.SGD(net.parameters(), lr=0.0001, momentum=0.9)

    About the data

    Generally speaking, when you need to work with image, text, audio, or video data, you can load it with standard Python packages, convert it into NumPy arrays, and then convert those arrays into torch.*Tensor objects (a minimal sketch follows the list below).

    • For images: Pillow, OpenCV
    • For audio: scipy, librosa
    • For text: NLTK, SpaCy
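
    For example, here is a minimal sketch of loading an image with Pillow and converting it to a tensor. The file name sample.png is just a placeholder for illustration:

    import numpy as np
    import torch
    from PIL import Image
    
    # Load an image with Pillow (placeholder file name) and force RGB.
    img = Image.open('sample.png').convert('RGB')
    
    # Convert to a NumPy array of shape (H, W, C) with values in [0, 255].
    np_img = np.array(img)
    
    # Convert to a torch FloatTensor, reorder to (C, H, W), and scale to [0, 1].
    tensor_img = torch.from_numpy(np_img).permute(2, 0, 1).float() / 255.0
    print(tensor_img.shape, tensor_img.dtype)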

    PyTorch provides torchvision, a package for working with image data. It includes loaders for common datasets (ImageNet, CIFAR10, MNIST) as well as data transformation functions, exposed through torchvision.datasets and torch.utils.data.DataLoader.

    This is a big convenience and saves us from writing the same loading code over and over.

    We use the CIFAR10 dataset. Its images belong to the following classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. Each image has shape (3, 32, 32).


    Training an image classifier

    1. Load and normalize the CIFAR10 data

    import torch
    import torchvision
    import torchvision.transforms as transforms
    
    # Convert PIL images to tensors and normalize each channel to [-1, 1].
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    
    testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                           download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                             shuffle=False, num_workers=2)
    
    classes = ('plane', 'car', 'bird', 'cat',
               'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
    
    # Output
    Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
    Extracting ./data/cifar-10-python.tar.gz to ./data
    Files already downloaded and verified
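
    As a quick sanity check (not in the original post; the exact printout may vary slightly across torchvision versions), the training set should contain 50,000 images, the test set 10,000, and each batch from the loader should have shape (4, 3, 32, 32):

    print(len(trainset), len(testset))   # expected: 50000 10000
    
    images, labels = next(iter(trainloader))
    print(images.shape, labels.shape)    # expected: torch.Size([4, 3, 32, 32]) torch.Size([4])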
    

    Display a few images

    import matplotlib.pyplot as plt
    import numpy as np
    
    # functions to show an image
    
    
    def imshow(img):
        img = img / 2 + 0.5     # unnormalize
        npimg = img.numpy()
        plt.imshow(np.transpose(npimg, (1, 2, 0)))
        plt.show()
    
    
    # get some random training images
    dataiter = iter(trainloader)
    images, labels = next(dataiter)  # dataiter.next() no longer works on recent PyTorch
    
    # show images
    imshow(torchvision.utils.make_grid(images))
    # print labels
    print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
    
    # Output
    horse  frog horse  bird
    

    2. Define a convolutional neural network

    import torch.nn as nn
    import torch.nn.functional as F
    
    
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels, 6 output channels, 5x5 kernel
            self.pool = nn.MaxPool2d(2, 2)         # 2x2 max pooling with stride 2
            self.conv2 = nn.Conv2d(6, 16, 5)       # 6 input channels, 16 output channels, 5x5 kernel
            self.fc1 = nn.Linear(16 * 5 * 5, 120)  # feature map is 16 x 5 x 5 after two conv+pool stages
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)           # 10 output classes
    
        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 5 * 5)             # flatten every dimension except the batch dimension
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x
    
    net = Net()
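
    A quick way to see where the 16 * 5 * 5 in fc1 comes from is to push a dummy CIFAR-sized batch through the convolutional part of the network. This is just an illustrative check, not part of the original tutorial:

    with torch.no_grad():
        dummy = torch.zeros(1, 3, 32, 32)           # one fake 3x32x32 image
        x = net.pool(F.relu(net.conv1(dummy)))      # conv1: 32 -> 28, pool: 28 -> 14
        x = net.pool(F.relu(net.conv2(x)))          # conv2: 14 -> 10, pool: 10 -> 5
        print(x.shape)                              # torch.Size([1, 16, 5, 5])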
    

    3. Define a loss function and optimizer

    import torch.optim as optim
    
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss for multi-class classification (combines LogSoftmax and NLLLoss)
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # stochastic gradient descent with momentum
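
    As a side note (a small sketch, not from the original post), you can verify that CrossEntropyLoss is equivalent to applying log_softmax followed by NLLLoss:

    logits = torch.randn(4, 10)               # fake batch of raw class scores
    targets = torch.tensor([1, 0, 4, 9])      # fake class labels
    
    loss_a = nn.CrossEntropyLoss()(logits, targets)
    loss_b = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
    print(torch.allclose(loss_a, loss_b))     # True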
    

    4. Train the network

    for epoch in range(2):  # loop over the dataset multiple times
    
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
    
            # zero the parameter gradients
            optimizer.zero_grad()
    
            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
    
            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:    # print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0
    
    print('Finished Training')
    
    # Output
    [1,  2000] loss: 2.229
    [1,  4000] loss: 1.877
    [1,  6000] loss: 1.678
    [1,  8000] loss: 1.576
    [1, 10000] loss: 1.502
    [1, 12000] loss: 1.476
    [2,  2000] loss: 1.381
    [2,  4000] loss: 1.370
    [2,  6000] loss: 1.344
    [2,  8000] loss: 1.326
    [2, 10000] loss: 1.324
    [2, 12000] loss: 1.294
    Finished Training
    
    • Quickly save the trained model:
    PATH = './cifar_net.p'
    torch.save(net.state_dict(), PATH)
    

    5. Test the network on the test data

    Let's check whether the classifier actually learned anything.
    First, display a few test images together with their ground-truth labels.

    dataiter = iter(testloader)
    images, labels = next(dataiter)
    
    # print images
    imshow(torchvision.utils.make_grid(images))
    print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
    
    • Load the saved model:
    net = Net()
    net.load_state_dict(torch.load(PATH))
    

    With the model loaded, let's see what it predicts for these images.

    outputs = net(images)
    
    _, predicted = torch.max(outputs, 1)
    
    print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                                  for j in range(4)))
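
    torch.max returns the highest score and its index along dimension 1. If you also want per-class probabilities (an extra step, not in the original post), you can run the raw outputs through a softmax:

    probs = F.softmax(outputs, dim=1)          # convert raw scores to probabilities
    top_prob, top_class = torch.max(probs, 1)
    for j in range(4):
        print('%5s: %.2f' % (classes[top_class[j]], top_prob[j].item()))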
    

    The results look reasonable.
    Now let's see how the network performs on the whole test set.

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    
    print('Accuracy of the network on the 10000 test images: %d %%' % (
        100 * correct / total))
    
    # Output
    Accuracy of the network on the 10000 test images: 54 %
    

    Since there are 10 classes, random guessing would give about 10% accuracy, so 54% is actually quite good: the network has clearly learned some useful features.
    Let's look at the accuracy for each class separately; you could also use scikit-learn's confusion matrix to evaluate the model (see the sketch after the per-class results below).

    class_correct = list(0. for i in range(10))
    class_total = list(0. for i in range(10))
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            outputs = net(images)
            _, predicted = torch.max(outputs, 1)
            c = (predicted == labels).squeeze()
            for i in range(4):
                label = labels[i]
                class_correct[label] += c[i].item()
                class_total[label] += 1
    
    
    for i in range(10):
        print('Accuracy of %5s : %2d %%' % (
            classes[i], 100 * class_correct[i] / class_total[i]))
    
    # Output
    Accuracy of plane : 66 %
    Accuracy of   car : 61 %
    Accuracy of  bird : 27 %
    Accuracy of   cat : 24 %
    Accuracy of  deer : 51 %
    Accuracy of   dog : 41 %
    Accuracy of  frog : 70 %
    Accuracy of horse : 65 %
    Accuracy of  ship : 61 %
    Accuracy of truck : 70 %
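
    As mentioned above, a confusion matrix gives a fuller picture of which classes get confused with each other. A minimal sketch using scikit-learn (assuming scikit-learn is installed; this is not part of the original code):

    from sklearn.metrics import confusion_matrix
    
    all_preds, all_labels = [], []
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            outputs = net(images)
            _, predicted = torch.max(outputs, 1)
            all_preds.extend(predicted.tolist())
            all_labels.extend(labels.tolist())
    
    # Rows are true classes, columns are predicted classes, in the order of `classes`.
    print(confusion_matrix(all_labels, all_preds))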
    

    We now have a working classifier. What should we try next?
    Let's train it on a GPU.

    6. Training on a GPU

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # Assuming that we are on a CUDA machine, this should print a CUDA device:
    print(device)
    

    Then move the model parameters and the training data onto the GPU as CUDA tensors:

    # Move the model parameters to the CUDA device
    net.to(device)
    
    # Move a batch of data to the CUDA device
    inputs, labels = data[0].to(device), data[1].to(device)
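
    Putting it together, the training loop from step 4 only needs these .to(device) calls added; here is a sketch of the modified loop (everything else stays the same):

    net.to(device)
    
    for epoch in range(2):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # move each batch onto the same device as the model
            inputs, labels = data[0].to(device), data[1].to(device)
    
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
    
            running_loss += loss.item()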
    
