PyTorch Learning Notes: Feedforward Neural Network

Author: 我的昵称违规了 | Published 2019-04-04 21:57

We've finally reached the neural network part. In an earlier post I tried running logistic regression on the GPU and the speedup seemed modest; let's see how it fares on the deep learning side.

1. Import the required libraries & set hyperparameters

The same routine as before.

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn

# Use the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyperparameters
input_size = 784       # each 28x28 MNIST image flattened into a vector
hidden_size = 500
num_classes = 10       # digits 0-9
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# Load the MNIST dataset (downloaded to ./data on first run)
train_dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
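
To see what the loaders actually yield (and why the training loop reshapes each batch), here is a quick inspection sketch of my own; it is illustrative and not part of the training script:

# Peek at one batch (illustrative check)
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([100, 1, 28, 28]) -> a batch of 1-channel 28x28 images
print(labels.shape)  # torch.Size([100]) -> one class index per image
# The fully connected layer expects flat vectors, hence reshape(-1, 28*28) later on.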


# Build the model: a fully connected network with a single hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        # Return raw logits; nn.CrossEntropyLoss applies log-softmax internally,
        # so no softmax is needed here
        return out


model = NeuralNet(input_size, hidden_size, num_classes).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
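
Before training, it can help to confirm the wiring with a dummy forward pass. This quick check is my addition, not part of the original script:

# Sanity check (illustrative): a random batch should map to [batch, num_classes] logits
dummy = torch.randn(batch_size, input_size).to(device)
print(model(dummy).shape)  # torch.Size([100, 10])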

# Train the model
total_step = len(train_loader)  # batches per epoch: 60000 / 100 = 600
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        outputs = model(images)
        loss = criterion(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, i + 1, total_step,
                                                                     loss.item()))

# Test the model (gradients are not needed for evaluation)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)  # index of the largest logit = predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
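
To reuse the checkpoint later, the weights can be restored into a freshly constructed model with load_state_dict. A minimal sketch of my own (the map_location argument is an assumption, there so the file also loads on a CPU-only machine):

# Restore the saved weights (illustrative)
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
model.load_state_dict(torch.load('model.ckpt', map_location=device))
model.eval()  # switch to evaluation mode before running inference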


# Epoch [1/5], Step [100/600], Loss: 0.3550
# Epoch [1/5], Step [200/600], Loss: 0.2174
# Epoch [1/5], Step [300/600], Loss: 0.1755
# Epoch [1/5], Step [400/600], Loss: 0.1715
# Epoch [1/5], Step [500/600], Loss: 0.1894
# Epoch [1/5], Step [600/600], Loss: 0.1821
# Epoch [2/5], Step [100/600], Loss: 0.1039
# Epoch [2/5], Step [200/600], Loss: 0.1726
# Epoch [2/5], Step [300/600], Loss: 0.0366
# Epoch [2/5], Step [400/600], Loss: 0.0735
# Epoch [2/5], Step [500/600], Loss: 0.0686
# Epoch [2/5], Step [600/600], Loss: 0.1671
# Epoch [3/5], Step [100/600], Loss: 0.0978
# Epoch [3/5], Step [200/600], Loss: 0.0491
# Epoch [3/5], Step [300/600], Loss: 0.0499
# Epoch [3/5], Step [400/600], Loss: 0.0476
# Epoch [3/5], Step [500/600], Loss: 0.0795
# Epoch [3/5], Step [600/600], Loss: 0.0819
# Epoch [4/5], Step [100/600], Loss: 0.0322
# Epoch [4/5], Step [200/600], Loss: 0.0211
# Epoch [4/5], Step [300/600], Loss: 0.1005
# Epoch [4/5], Step [400/600], Loss: 0.0141
# Epoch [4/5], Step [500/600], Loss: 0.0280
# Epoch [4/5], Step [600/600], Loss: 0.0384
# Epoch [5/5], Step [100/600], Loss: 0.0220
# Epoch [5/5], Step [200/600], Loss: 0.0183
# Epoch [5/5], Step [300/600], Loss: 0.0227
# Epoch [5/5], Step [400/600], Loss: 0.0130
# Epoch [5/5], Step [500/600], Loss: 0.0511
# Epoch [5/5], Step [600/600], Loss: 0.0245
# Accuracy of the network on the 10000 test images: 97.76 %
