CS231n Spring 2019 Assignment 2—PyTorch

Author: 赖子啊 | Published 2019-10-09 09:35

    This is the last notebook of Assignment 2, and it is about learning one of today's mainstream frameworks: in these frameworks, tensors can run on the GPU, which speeds up training. I chose PyTorch because it is well suited to research: it has a dynamic-graph mechanism, its code is more concise and readable than TensorFlow's (though I don't know how much of that changes now that 2.0 is out), and its programming style resembles numpy. The version used here is 1.0. The detailed API documentation and the PyTorch forum show that the PyTorch ecosystem keeps getting better (personally, I find the API documentation better written than TensorFlow's).
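    As a quick illustration of the GPU point (a minimal sketch of my own, not from the notebook): moving computation onto the GPU in PyTorch is just a matter of placing tensors on a device.

    import torch

    # Use the GPU when available, otherwise fall back to the CPU.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    x = torch.randn(64, 3, 32, 32, device=device)  # a batch of CIFAR-10-sized inputs
    print(x.device)                                 # cuda:0 when a GPU is present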
    In this PyTorch.ipynb, the task is to build and train models at three levels of abstraction, from the raw low-level API up to the highly integrated high-level API. With each of the three approaches we build a two-layer fully-connected network and a three-layer convolutional network and train them, which makes the contrast clear. The three APIs compare as follows:

    API             Flexibility   Convenience
    Barebone        High          Low
    nn.Module       High          Medium
    nn.Sequential   Low           High

    Barebones PyTorch

    There is not much to write for this section: the notebook gives a two-layer fully-connected network as a demonstration and asks you to write the construction and training of a three-layer convolutional network. Straight to the code:
    The three_layer_convnet part; the function to consult is torch.nn.functional.conv2d:

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    x = F.conv2d(x, conv_w1, bias=conv_b1, padding=2, stride=1)  # 5x5 conv, pad 2 keeps 32x32
    x = F.relu(x)
    x = F.conv2d(x, conv_w2, bias=conv_b2, padding=1, stride=1)  # 3x3 conv, pad 1 keeps 32x32
    x = F.relu(x)
    x = flatten(x)                 # (N, C, H, W) -> (N, C*H*W)
    scores = x.mm(fc_w) + fc_b     # affine layer producing class scores
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
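    For reference, the flatten used above is a small helper defined earlier in the notebook; roughly, it collapses everything except the batch dimension:

    def flatten(x):
        N = x.shape[0]        # x has shape (N, C, H, W)
        return x.view(N, -1)  # collapse C, H, W into one dimension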
    

    The Training a ConvNet part, which mainly uses the random_weight and zero_weight helpers from above:

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    conv_w1 = random_weight((channel_1, 3, 5, 5))    # conv weights: (out_ch, in_ch, kH, kW)
    conv_b1 = zero_weight(channel_1)
    conv_w2 = random_weight((channel_2, channel_1, 3, 3))
    conv_b2 = zero_weight(channel_2)
    fc_w = random_weight((channel_2 * 32 * 32, 10))  # spatial size stays 32x32 through both convs
    fc_b = zero_weight(10)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
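    For context, the two initialization helpers are also defined earlier in the notebook. A sketch of what they do (Kaiming scaling for random_weight, zeros for zero_weight; see the notebook for the exact code):

    import numpy as np
    import torch

    def random_weight(shape):
        # Kaiming initialization: scale by sqrt(2 / fan_in), suited to ReLU nets.
        if len(shape) == 2:                # FC weight of shape (fan_in, fan_out)
            fan_in = shape[0]
        else:                              # conv weight of shape (out_ch, in_ch, kH, kW)
            fan_in = np.prod(shape[1:])
        w = torch.randn(shape) * np.sqrt(2. / fan_in)
        w.requires_grad = True
        return w

    def zero_weight(shape):
        # Biases start at zero and also need gradients.
        return torch.zeros(shape, requires_grad=True)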
    

    PyTorch Module API

    This part subclasses the slightly higher-level nn.Module class to define a network structure of your own. This approach is highly flexible; the main work is implementing the __init__() and forward() methods. We again write the construction and training of the three-layer convolutional network, just in a different way:
    The ThreeLayerConvNet part; the class to consult is nn.Conv2d():

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    """torch.nn.Conv2d(in_channels, out_channels, 
                    kernel_size, stride=1, padding=0, 
                    dilation=1, groups=1, bias=True, padding_mode='zeros')
    """
    
    self.conv1 = nn.Conv2d(in_channel, channel_1, 5, padding=2)   # 5x5 conv, pad 2 keeps 32x32
    nn.init.kaiming_normal_(self.conv1.weight)
    self.conv2 = nn.Conv2d(channel_1, channel_2, 3, padding=1)    # 3x3 conv, pad 1 keeps 32x32
    nn.init.kaiming_normal_(self.conv2.weight)
    self.fc = nn.Linear(channel_2 * 32 * 32, num_classes)         # affine layer producing scores
    nn.init.kaiming_normal_(self.fc.weight)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    x = F.relu(self.conv1(x))
    x = F.relu(self.conv2(x))
    x = flatten(x)
    scores = self.fc(x)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
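    A quick sanity check in the style of the notebook's test cell (the channel sizes 12 and 8 are the ones the assignment's test uses; a sketch, assuming the notebook's imports): push a dummy minibatch through the model and confirm the shape of the scores.

    x = torch.zeros((64, 3, 32, 32))  # dummy minibatch of CIFAR-10-sized images
    model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)
    scores = model(x)
    print(scores.size())              # expect torch.Size([64, 10])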
    

    The Train a Three-Layer ConvNet part, which defines the model and the optimizer:

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    model = ThreeLayerConvNet(in_channel=3, channel_1=channel_1, channel_2=channel_2, num_classes=10)
    optimizer = optim.SGD(model.parameters(), lr=learning_rate)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
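    The actual training is then done by the notebook's train_part34 function. A minimal sketch of what such a loop does (loader_train, device, dtype, and F come from earlier cells in the notebook):

    def train(model, optimizer, epochs=1):
        model = model.to(device=device)
        for e in range(epochs):
            for x, y in loader_train:
                model.train()                          # put the model in training mode
                x = x.to(device=device, dtype=dtype)
                y = y.to(device=device, dtype=torch.long)
                scores = model(x)
                loss = F.cross_entropy(scores, y)
                optimizer.zero_grad()                  # clear old gradients
                loss.backward()                        # backprop through the graph
                optimizer.step()                       # parameter update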
    

    PyTorch Sequential API

    nn.Sequential is an even higher-level API. It is less flexible, but you no longer have to write the forward() part, since that is handled automatically; you only need to assemble the architecture. Experimentally, you do not need to write the weight initialization yourself either:

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    """torch.nn.Conv2d(in_channels, out_channels, 
                            kernel_size, stride=1, padding=0, 
                            dilation=1, groups=1, bias=True, padding_mode='zeros')
    """
    
    model = nn.Sequential(
        nn.Conv2d(3, channel_1, 5, padding=2),
        nn.ReLU(),
        nn.Conv2d(channel_1, channel_2, 3, padding=1),
        nn.ReLU(),
        Flatten(),
        nn.Linear(channel_2*32*32, 10),
    )
    
    optimizer = optim.SGD(model.parameters(), lr=learning_rate,
                         momentum=0.9, nesterov=True)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
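    Skipping manual initialization works because nn.Conv2d and nn.Linear already initialize their own weights by default (a Kaiming-style scheme in PyTorch 1.x). If you did want the explicit Kaiming-normal initialization from the nn.Module version, model.apply is one way to get it; a sketch of my own, not part of the assignment:

    def init_weights(m):
        # Override the defaults with Kaiming-normal init, as in the nn.Module version.
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight)
            nn.init.constant_(m.bias, 0)

    model.apply(init_weights)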
    

    CIFAR-10 open-ended challenge

    The last part is an open-ended challenge: build your own network model and choose an optimizer, then train on CIFAR-10 to at least 70% accuracy on the validation set. After a few training runs of my own I could not reach that accuracy, so I borrowed the AlexNet architecture and, to match my GPU (I trained on my own laptop, whose GTX 930M has only 2 GB of memory), dropped some layers so that it would just barely train on my machine. In the end it met the requirement:

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    learning_rate = 1e-3
    
    class Net(nn.Module):
    
        def __init__(self, num_classes=10):
            super(Net, self).__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),    # (64, 32, 32, 32)
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=1, padding=1),        # (64, 32, 33, 33)
                
                nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=0),   # (64, 64, 29, 29)
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=1, padding=1),        # (64, 64, 30, 30)
                
                nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),  # (64, 128, 30, 30)
                nn.ReLU(inplace=True),
                
                nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), # (64, 256, 30, 30)
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=2),                   # (64, 256, 15, 15)
            )
            self.avgpool = nn.AdaptiveAvgPool2d((7, 7))                  # (64, 256, 7, 7)
            self.classifier = nn.Sequential(
                nn.Dropout(),
                nn.Linear(256 * 7 * 7, num_classes),
            )
    
        def forward(self, x):
            x = self.features(x)
            x = self.avgpool(x)
            x = x.view(x.size(0), 256 * 7 * 7)
            x = self.classifier(x)
            return x
    
    model = Net()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
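    Accuracy is then measured with the notebook's check_accuracy_part34 function; roughly, it does the following (a sketch; device and dtype come from earlier cells). Note that model.eval() matters here because the classifier contains Dropout:

    def check_accuracy(loader, model):
        model.eval()                       # evaluation mode: disables dropout
        num_correct, num_samples = 0, 0
        with torch.no_grad():              # no gradients needed for evaluation
            for x, y in loader:
                x = x.to(device=device, dtype=dtype)
                y = y.to(device=device, dtype=torch.long)
                scores = model(x)
                _, preds = scores.max(1)   # predicted class = argmax over scores
                num_correct += (preds == y).sum().item()
                num_samples += preds.size(0)
        acc = num_correct / num_samples
        print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))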
    

    Results

    For the final open-ended challenge, after 10 epochs of training I reached 78.30% accuracy on the validation set and 78.06% on the test set!
    For details, see PyTorch.ipynb.

    Links

    For the preceding and following assignment posts, see:
