Hands-on Kaggle: Image Classification (CIFAR-10)


Author: 小黄不头秃 | Published 2022-09-18 01:07

(I) About the CIFAR-10 Dataset

CIFAR-10 is a small dataset for general object recognition, compiled by Hinton's students Alex Krizhevsky and Ilya Sutskever. It contains RGB color images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.

Each image is 32×32 pixels, and the dataset holds 50000 training images and 10000 test images.

Compared with MNIST, CIFAR-10 differs in the following ways:
• CIFAR-10 images are 3-channel RGB color images, while MNIST images are grayscale.
• CIFAR-10 images are 32×32, slightly larger than MNIST's 28×28.
• Unlike handwritten digits, CIFAR-10 contains real-world objects: the images are noisy, and the objects vary in scale and appearance, which makes recognition much harder. A plain linear model such as softmax regression performs poorly on CIFAR-10.
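To make the last point concrete, here is a minimal sketch (with random tensors standing in for real CIFAR-10 batches, so the numbers are illustrative only) of what a plain linear softmax classifier over CIFAR-10 inputs looks like:

```python
import torch
from torch import nn

# A linear softmax classifier flattens each 3x32x32 image into a
# 3072-dimensional vector and maps it directly to 10 class scores.
linear_softmax = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

x = torch.randn(4, 3, 32, 32)   # stand-in for a CIFAR-10 mini-batch
logits = linear_softmax(x)      # shape: (4, 10)
probs = torch.softmax(logits, dim=1)

# Only 3072 * 10 + 10 = 30730 parameters: one weight per pixel-channel
# per class, with no way to model local spatial structure.
print(sum(p.numel() for p in linear_softmax.parameters()))
```

With one weight per pixel per class and no notion of spatial locality, such a model tops out at roughly 40% accuracy on CIFAR-10, which is why the rest of this post uses convolutional networks.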

The networks in the previous posts were all theory on paper; now it's time to put them to the test.

(II) Implementation

(1) From scratch

First, without following the book, I built around resnet18 and used a few small tricks to push the accuracy as high as I could.

At first I set the learning rate to 0.01, but the network showed no sign of converging; lowering it to 2e-4 sped up convergence considerably. I first trained resnet18 from scratch and found the accuracy unimpressive. I then switched to fine-tuning: enlarging the images and fine-tuning a pre-trained model, which gave a decent result.

import collections 
import math
import os 
import shutil
import pandas as pd
import torch 
import torchvision
from torch import nn 
from d2l import torch as d2l
import numpy as np
from IPython import display
from matplotlib import pyplot as plt
from matplotlib_inline import backend_inline

def load_cifar_10(path, batch_size):
    print("loading data ...")
    trans = torchvision.transforms.ToTensor()
    train_data = torchvision.datasets.CIFAR10(
        root=path,
        train=True,
        transform=trans,
        download=False
    )
    test_data = torchvision.datasets.CIFAR10(
        root=path,
        train=False,
        transform=trans,
        download=False
    )
    # Shuffle the training data; the test data keeps its natural order
    return (torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                        shuffle=True, num_workers=0),
            torch.utils.data.DataLoader(test_data, batch_size=batch_size,
                                        num_workers=0))

train_iter, test_iter = load_cifar_10("../data/",128)
net = d2l.resnet18(num_classes=10,in_channels=3)

# Inspect the network architecture
x,y = next(iter(train_iter))

for layer in net:
    x = layer(x)
    print(layer.__class__.__name__,"output shape:\t",x.shape)

# Simplified model-accuracy evaluation
def evalue_acc(net, data_iter, device=None):
    if isinstance(net, nn.Module):
        net.eval()
        if not device:
            device = next(iter(net.parameters())).device
    # Collect the accuracy of each batch
    acc_list = np.array([])
    with torch.no_grad():
        for X, y in data_iter:
            if isinstance(X, list):
                # Needed for BERT fine-tuning
                X = [x.to(device) for x in X]
            else:
                X = X.to(device)
            y_hat = net(X)
            y = y.to(device)
            if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
                y_hat = y_hat.argmax(axis=1)
            cmp = y_hat.type(y.dtype) == y
            acc = cmp.sum().item() / len(y)
            acc_list = np.append(acc_list, acc)
    return acc_list.mean()
class Animator():
    """A minimal live-plotting helper for training curves."""
    def __init__(self, xlim, xlabel=None, ylabel=None, legend=None, ylim=None,
                 xscale='linear', yscale='linear',
                 fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5)) -> None:
        self.xlabel = xlabel
        self.ylabel = ylabel
        self.xscale = xscale
        self.yscale = yscale
        self.xlim = xlim
        self.ylim = ylim
        # Default to an empty legend
        self.legend = legend if legend is not None else []
        # Render figures as SVG
        backend_inline.set_matplotlib_formats('svg')
        self.fig, self.axes = plt.subplots(figsize=figsize)
        self.x = None
        self.y = None
        self.fmts = fmts

    def set_axes(self):
        # Configure the axis properties
        self.axes.set_xlabel(self.xlabel)
        self.axes.set_ylabel(self.ylabel)
        self.axes.set_xscale(self.xscale)
        self.axes.set_yscale(self.yscale)
        self.axes.set_xlim(self.xlim)
        self.axes.set_ylim(self.ylim)
        if self.legend:
            self.axes.legend(self.legend)
        self.axes.grid()

    def show(self, x, y):
        # Clear and redraw all curves, then refresh the notebook output
        self.axes.cla()
        for i in range(len(x)):
            self.axes.plot(x[i], y[i], self.fmts[i])
        self.set_axes()
        display.display(self.fig)
        display.clear_output(wait=True)

def train(net, train_iter, test_iter, loss, trainer, num_epochs, devices=d2l.try_gpu()):
    index = []
    test_acc_history = []
    train_acc_history = []
    loss_history = []
    legend = ["train_loss", "test_acc", "train_acc"]
    fig = Animator(xlim=(-0.1, num_epochs + 0.1), figsize=(4, 3), xlabel="epoch", legend=legend)

    net.to(devices)
    print("Training on:", devices)
    for epoch in range(num_epochs):
        net.train()
        for x, y in train_iter:
            x = x.to(devices)
            y = y.to(devices)
            l = loss(net(x), y)
            trainer.zero_grad()
            l.backward()
            trainer.step()

        train_acc = evalue_acc(net, train_iter, device=devices)
        test_acc = evalue_acc(net, test_iter, device=devices)
        index.append(epoch + 1)
        test_acc_history.append(test_acc)
        train_acc_history.append(train_acc)
        loss_history.append(l.sum().item())
        fig.show([index, index, index], [loss_history, test_acc_history, train_acc_history])
    print("train accuracy:", train_acc_history[-1])
    print("test accuracy:", test_acc_history[-1])
# Hyperparameters
epochs = 50
batch_size = 128
dataset_path = "../data/"
learning_rate = 2e-3
weight_decay = 5e-4

# Data and model
train_iter, test_iter = load_cifar_10(dataset_path, batch_size)
net = d2l.resnet18(num_classes=10, in_channels=3)

loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate, weight_decay=weight_decay)

train(net, train_iter, test_iter, loss, trainer=optimizer, devices=d2l.try_gpu(), num_epochs=epochs)

Result: resnet18 trained from scratch, test accuracy 0.778.

The code above couldn't push the accuracy past 80%, so what about fine-tuning a pre-trained model? The earlier runs suggested some overfitting, and I wondered whether enlarging the images and adding data augmentation could weaken the grip of the low-level convolution kernels. On to the experiment.

# The code above couldn't break 80% -- let's try a pre-trained model
normalize = torchvision.transforms.Normalize(
    mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# Transform pipeline
# Enlarge the images to (64,64) and randomly flip them horizontally
train_augs = torchvision.transforms.Compose([
    torchvision.transforms.Resize((64,64)),
    torchvision.transforms.RandomHorizontalFlip(),  # random horizontal flip
    torchvision.transforms.ToTensor(),
    normalize]
)

test_augs = torchvision.transforms.Compose([
    torchvision.transforms.Resize((64,64)),
    torchvision.transforms.ToTensor(),
    normalize]
)

# Load the data with the augmentation transforms
def load_cifar_10_trans(path, batch_size):
    train_data = torchvision.datasets.CIFAR10(
        root=path,
        train=True,
        transform=train_augs,
        download=False
    )
    test_data = torchvision.datasets.CIFAR10(
        root=path,
        train=False,
        transform=test_augs,
        download=False
    )
    return torch.utils.data.DataLoader(train_data,batch_size=batch_size,num_workers=0), torch.utils.data.DataLoader(test_data,batch_size=batch_size,num_workers=0)

# Hyperparameters
epochs=20
batch_size = 128
dataset_path = "../data/"
learning_rate = 2e-3
weight_decay = 5e-4

# Data and model (note: use the augmented loader defined above)
train_iter, test_iter = load_cifar_10_trans(dataset_path, batch_size)
# Use a pre-trained model and load its weights
fine_tune_model = torchvision.models.resnet18(pretrained=True)
# Replace the final layer
fine_tune_model.fc = nn.Linear(fine_tune_model.fc.in_features, 10)
nn.init.xavier_uniform_(fine_tune_model.fc.weight)

# Loss function and optimizer
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam([
    {
        "params":[param for name, param in fine_tune_model.named_parameters() if name not in ["fc.weight", "fc.bias"]],
        "lr":learning_rate/10
    },
    {
        "params":fine_tune_model.fc.parameters(),
        "lr":learning_rate
    },
],lr=learning_rate,weight_decay=weight_decay)

train(fine_tune_model, train_iter, test_iter, loss, trainer=optimizer, devices=d2l.try_gpu(), num_epochs=epochs)

Results:
- (64,64), pre-trained, lr = 2e-3: test accuracy 0.894
- (128,128), pre-trained, lr = 2e-3: test accuracy 0.9281

(2) The book's approach

The book extracts a small subset of the original dataset for the experiment: a small-scale sample containing the first 1000 training images and 5 random test images.
下载地址:http://d2l-data.s3-accelerate.amazonaws.com/kaggle_cifar10_tiny.zip

It can also be downloaded via code:

import collections
import math
import os
import shutil
import pandas as pd
import torch
import torchvision
from torch import nn
from d2l import torch as d2l
#@save
d2l.DATA_HUB['cifar10_tiny'] = (d2l.DATA_URL + 'kaggle_cifar10_tiny.zip',
                                '2068874e4b9a9f0fb07ebe0ad2b29754449ccacd')

# If you use the full Kaggle competition dataset, set demo to False
demo = True

if demo:
    data_dir = d2l.download_extract('cifar10_tiny')
else:
    data_dir = '../data/cifar-10/'

In the previous sections we always called torchvision's high-level APIs to load the data as tensors. In practice, however, we usually face a folder full of image files, so cifar10_tiny teaches us how to read image files, organize them, and finally turn them into tensors.

The train and test folders contain the training and test images respectively, and trainLabels.csv holds the labels of the training images.

#@save
def read_csv_labels(fname):
    """Read fname and return a dict mapping file name to label"""
    with open(fname, 'r') as f:
        # Skip the header line (column names)
        lines = f.readlines()[1:]
    # rstrip() strips trailing whitespace: spaces, newlines, carriage
    # returns, and tabs
    tokens = [l.rstrip().split(',') for l in lines]
    return dict(tokens)
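
The parsing step can be sketched in isolation, with a couple of hypothetical lines standing in for the real CSV contents:

```python
# Two hypothetical data rows as they come out of f.readlines()
lines = ['1,frog\n', '2,truck\n']

# rstrip() removes the trailing newline; split(',') separates id and label
tokens = [l.rstrip().split(',') for l in lines]
print(tokens)        # [['1', 'frog'], ['2', 'truck']]

# dict() turns the list of [name, label] pairs into a lookup table
print(dict(tokens))  # {'1': 'frog', '2': 'truck'}
```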

data_dir = "../data/kaggle_cifar10_tiny"
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
labels_list = list(labels.items())

print(labels_list[0:5])
print('# training samples:', len(labels))
print('# classes:', len(set(labels.values())))
# shutil complements os with file copy, move, delete, and archive operations.
def copyfile(filename, target_dir):
    """Copy a file into the target directory"""
    os.makedirs(target_dir, exist_ok=True)
    shutil.copy(filename, target_dir)

def reorg_train_valid(data_dir, labels, valid_ratio):
    """Split the validation set out of the original training set"""
    # Number of samples in the rarest class of the training set.
    # collections.Counter counts occurrences, and most_common() sorts them
    # in descending order, so the last entry is the least frequent class.
    n = collections.Counter(labels.values()).most_common()[-1][1]

    # Number of validation samples per class; every class contributes
    # at least one image to the validation set.
    n_valid_per_label = max(1, math.floor(n * valid_ratio))
    label_count = {}
    for train_file in os.listdir(os.path.join(data_dir, 'train')):
        # Look up the label for this image
        label = labels[train_file.split('.')[0]]
        fname = os.path.join(data_dir, 'train', train_file)
        copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                     'train_valid', label))
        if label not in label_count or label_count[label] < n_valid_per_label:
            copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                         'valid', label))
            # get() defaults to 0 the first time a class is seen
            label_count[label] = label_count.get(label, 0) + 1
        else:
            copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                         'train', label))
    return n_valid_per_label
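
The two-line size calculation at the top of that function can be traced with a hypothetical label distribution:

```python
import collections
import math

# Hypothetical class counts: the rarest class ('bird') has 8 samples
labels = ['cat'] * 12 + ['dog'] * 10 + ['bird'] * 8

# most_common() sorts classes by descending count, so [-1] is the rarest
n = collections.Counter(labels).most_common()[-1][1]
print(n)  # 8

# With valid_ratio = 0.1, each class sends max(1, floor(8 * 0.1)) = 1
# image to the validation set; the max(1, ...) guarantees every class
# is represented even when the ratio rounds down to zero
n_valid_per_label = max(1, math.floor(n * 0.1))
print(n_valid_per_label)  # 1
```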

def reorg_test(data_dir):
    """Organize the test set for easy reading during prediction"""
    for test_file in os.listdir(os.path.join(data_dir, 'test')):
        copyfile(os.path.join(data_dir, 'test', test_file),
                 os.path.join(data_dir, 'train_valid_test', 'test',
                              'unknown'))

def reorg_cifar10_data(data_dir, valid_ratio):
    labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
    reorg_train_valid(data_dir, labels, valid_ratio)
    reorg_test(data_dir)

# batch_size = 32 if demo else 128
batch_size = 32
valid_ratio = 0.1
reorg_cifar10_data(data_dir, valid_ratio)

# Data augmentation
transform_train = torchvision.transforms.Compose([
    # Enlarge the image to a 40x40 square
    torchvision.transforms.Resize(40),
    # Randomly crop a square patch with an area between 0.64 and 1.0
    # of the original, then rescale it to a 32x32 square
    torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
                                             ratio=(1.0, 1.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    # Standardize each channel of the image
    torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
                                     [0.2023, 0.1994, 0.2010])])

transform_test = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
                                     [0.2023, 0.1994, 0.2010])])
train_ds, train_valid_ds = [
    torchvision.datasets.ImageFolder(
        os.path.join(data_dir, 'train_valid_test', folder),
        transform=transform_train)
    for folder in ['train', 'train_valid']]
valid_ds, test_ds = [
    torchvision.datasets.ImageFolder(
        os.path.join(data_dir, 'train_valid_test', folder),
        transform=transform_test)
    for folder in ['valid', 'test']]

print("Total train + valid samples:", len(train_valid_ds))
print("Training samples:", len(train_ds))
print("Validation samples:", len(valid_ds))
print("Test samples:", len(test_ds))

# Build the data loaders (could also be written out separately for readability)
train_iter, train_valid_iter = [
    torch.utils.data.DataLoader(dataset, batch_size, shuffle=True, drop_last=True)
    for dataset in (train_ds, train_valid_ds)]
valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=False,
                                         drop_last=True)
test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=False,
                                        drop_last=False)
def get_net():
    num_classes = 10
    net = d2l.resnet18(num_classes, 3)
    return net

loss = nn.CrossEntropyLoss(reduction="none")

# The training function is much like the earlier ones; it uses data
# parallelism to spread computation across multiple GPUs. The only new
# ingredients are momentum and lr_scheduler:
# - momentum lets each update carry over part of the previous gradients,
#   which acts like inertia
# - lr_scheduler multiplies the learning rate by lr_decay every lr_period
#   epochs, so the model takes large steps early and smaller steps later
def train_v3(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
          lr_decay):
    trainer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,
                              weight_decay=wd)
    scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay)
    num_batches, timer = len(train_iter), d2l.Timer()
    legend = ['train loss', 'train acc']
    if valid_iter is not None:
        legend.append('valid acc')
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
                            legend=legend)
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])
    for epoch in range(num_epochs):
        net.train()
        metric = d2l.Accumulator(3)
        for i, (features, labels) in enumerate(train_iter):
            timer.start()
            l, acc = d2l.train_batch_ch13(net, features, labels,
                                          loss, trainer, devices)
            metric.add(l, acc, labels.shape[0])
            timer.stop()
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[2], metric[1] / metric[2],
                              None))
        if valid_iter is not None:
            valid_acc = d2l.evaluate_accuracy_gpu(net, valid_iter)
            animator.add(epoch + 1, (None, None, valid_acc))
        scheduler.step()
    measures = (f'train loss {metric[0] / metric[2]:.3f}, '
                f'train acc {metric[1] / metric[2]:.3f}')
    if valid_iter is not None:
        measures += f', valid acc {valid_acc:.3f}'
    print(measures + f'\n{metric[2] * num_epochs / timer.sum():.1f}'
          f' examples/sec on {str(devices)}')
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 20, 2e-4, 5e-4
lr_period, lr_decay, net = 4, 0.9, get_net()
train_v3(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
      lr_decay)
Result: tiny_cifar10, resnet18, valid acc 0.391.

Without lr_scheduler, the validation accuracy reaches 0.422, so the scheduler doesn't seem to be helping much here.
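To check what the scheduler is actually doing, here is a short sketch tracing the learning rate under the same lr_period=4, lr_decay=0.9 settings (a dummy parameter stands in for the real network):

```python
import torch
from torch import nn

# A dummy parameter so the optimizer has something to manage
params = [nn.Parameter(torch.zeros(1))]
trainer = torch.optim.SGD(params, lr=2e-4)
scheduler = torch.optim.lr_scheduler.StepLR(trainer, step_size=4, gamma=0.9)

lrs = []
for epoch in range(12):
    lrs.append(trainer.param_groups[0]['lr'])
    trainer.step()    # in real training this runs once per batch
    scheduler.step()  # once per epoch, after the optimizer steps

# Epochs 0-3 use 2e-4, epochs 4-7 use 2e-4 * 0.9, epochs 8-11 use 2e-4 * 0.81
print(lrs[0], lrs[4], lrs[8])
```

Over 20 epochs the rate only falls to 2e-4 × 0.9⁵ ≈ 1.18e-4, a fairly gentle schedule, so the accuracy drop above may well be noise from the tiny dataset rather than an effect of the decay itself.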

(3) Combining the two approaches

At first I fine-tuned on the training set and evaluated on the test set, and the results were terrible. The reason turned out to be that the test set contains only five images, which is far too few, so during training we should evaluate on the validation set instead. After rerunning it this way, the results were excellent. Would a model trained on this small dataset actually beat the one trained on the larger split above? You can try it yourself.

tiny_cifar10, (128,128), lr = 1e-3: valid acc 0.968

Then one more run with finetune + lr_scheduler + Adam on cifar_tiny:

tiny_cifar10, (128,128), lr = 1e-3, lr_scheduler: valid acc 0.51

After several experiments the results were still not great. Evidently lr_scheduler does not necessarily raise accuracy; it can also lower it.

Finally, we can generate predictions on the test set and write a submission file:

net, preds = get_net(), []
train_v3(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
         lr_decay)

for X, _ in test_iter:
    y_hat = net(X.to(devices[0]))
    preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
df = pd.DataFrame({'id': sorted_ids, 'label': preds})
df['label'] = df['label'].apply(lambda x: train_valid_ds.classes[x])
df.to_csv('submission.csv', index=False)
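
The `key=lambda x: str(x)` sort looks odd but is deliberate: Kaggle lists the test images in lexicographic order of their ids, so the numeric ids must be compared as strings:

```python
ids = list(range(1, 13))
ids.sort(key=str)  # '10' < '2' when compared character by character
print(ids)  # [1, 10, 11, 12, 2, 3, 4, 5, 6, 7, 8, 9]
```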

Source: https://www.haomeiwen.com/subject/bgiinrtx.html